Building Yocto BSP from Source

This section describes how to build the Microchip BSP from Yocto Project sources.

1. Development environment

The development environment must have at least the packages required by the Yocto Project installed: https://docs.yoctoproject.org/ref-manual/system-requirements.html

On Ubuntu 20.04 LTS or later, the required packages can be installed like this:

$ sudo apt-get install -y \
    build-essential \
    chrpath \
    cpio \
    diffstat \
    file \
    gawk \
    git \
    lz4 \
    python3 \
    python3-pip \
    texinfo \
    wget \
    zstd

An alternative is to use the Docker image provided by Microchip which contains Ubuntu 24.04 LTS and all of the necessary packages.

See Using Docker for details on how to configure and use a Docker image.

2. BSP structure

This is the structure used by the Yocto BSP:

./openembedded-core

Core Yocto Project build system and metadata

./meta-mchp/meta-mchp-ncs/

Main BSP layer containing machine configurations, recipes, and classes

./meta-mchp/meta-mchp-ncs/conf/machine/

Machine configuration files for supported devices

./meta-mchp/meta-mchp-ncs/recipes-kernel/

Linux kernel recipe

./meta-mchp/meta-mchp-ncs/recipes-bsp/

Bootloader recipes (U-Boot, TFA)

./meta-mchp/meta-mchp-ncs/recipes-mchp/

Microchip-specific tools and utilities

./meta-mchp/meta-mchp-ncs/recipes-mchp/images/

Image recipes

./build/tmp/deploy/images/

Build output directory containing generated images

The BSP layer follows the standard Yocto Project layer structure described here: https://docs.yoctoproject.org/dev-manual/layers.html

3. Supported Machines

The Yocto BSP supports the following machines:

lan966x

LAN966x EVBs (ARMv7a Cortex-A7)

lan969x

LAN969x EVBs (ARMv8a Cortex-A53) - default machine

sparx5

Sparx5 EVBs (ARMv8a Cortex-A53)

rpi4cm

Raspberry Pi 4 Compute Module with LAN966x/LAN969x daughterboards

vsc7514

VSC7514 (Ocelot) EVBs (MIPS32r2)

4. Toolchain

The Yocto BSP builds its own cross-compilation toolchain as part of the standard build process. Unlike Buildroot, there is no need to download and install a separate toolchain: Yocto automatically generates the appropriate toolchain for the target architecture during the build.

The toolchain is built into the build/tmp/sysroots-components/ directory and is used internally by BitBake to compile all recipes for the target platform.

4.1. Application Development SDK

Yocto can also generate standalone SDKs (Software Development Kits) that can be installed on development machines for application development outside of the Yocto build environment. The SDK includes:

  • Cross-compilation toolchain

  • Libraries and headers for the target

  • Development tools and utilities

Building and using SDKs is beyond the scope of this documentation. For detailed information on generating and using Yocto SDKs, see the official Yocto Project documentation: https://docs.yoctoproject.org/sdk-manual/
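For reference, the standard Yocto mechanism for generating such an SDK installer is the populate_sdk task; a sketch using this BSP's image target (the build must already be set up as described in the Building section):

```shell
# Generate a self-extracting SDK installer for the standalone image.
# The installer is written to the deploy directory under .../deploy/sdk/.
source openembedded-core/oe-init-build-env
bitbake mchp-standalone-image -c populate_sdk
```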

5. Building

The Yocto BSP uses the standard BitBake build system. The build environment is set up using the oe-init-build-env script.

Yocto builds must be done as a non-root user: root privileges are not needed to configure or build, and BitBake's sanity checks will refuse to run as root.

5.1. Environment Setup

Before building, you must initialize the build environment. Change to the top-level directory that contains openembedded-core and the meta-* layers, then source the setup script:

$ source openembedded-core/oe-init-build-env

This creates the build directory (if it does not already exist) and configures your shell environment for BitBake.

5.2. Building Images

The Yocto BSP provides several image targets:

mchp-standalone-image

Full-featured image with networking tools and switch management utilities (recommended)

mchp-standalone-dev-image

Developer image with additional profiling and debugging tools

mchp-standalone-mini-image

Minimal BusyBox-based image

mchp-standalone-gpt-image

GPT-partitioned variant for eMMC/SD cards

mchp-standalone-dev-gpt-image

GPT-partitioned developer image variant

mchp-standalone-ubi-image

UBI (NAND flash) variant

For a complete list of supported image targets per machine, their contents, and available build artifacts, see meta-mchp/meta-mchp-ncs/conf/templates/default/conf-notes.txt.

To build the main standalone image (default machine is lan969x):

$ source openembedded-core/oe-init-build-env
$ bitbake mchp-standalone-image

To build for a specific machine:

$ source openembedded-core/oe-init-build-env
$ MACHINE=lan966x bitbake mchp-standalone-image

You can also set the machine in build/conf/local.conf to avoid specifying it each time:

MACHINE = "lan966x"

5.3. Building Individual Recipes

To build only the Linux kernel:

$ source openembedded-core/oe-init-build-env
$ MACHINE=lan966x bitbake linux-mchp

To build only U-Boot:

$ source openembedded-core/oe-init-build-env
$ MACHINE=sparx5 bitbake u-boot-mchp

5.4. Cleaning and Rebuilding

To clean and rebuild a specific recipe:

$ source openembedded-core/oe-init-build-env
$ bitbake <recipe-name> -c cleansstate
$ bitbake <recipe-name>

To force a rebuild after making changes to a recipe:

$ source openembedded-core/oe-init-build-env
$ bitbake <recipe-name> -c compile -f
$ bitbake <recipe-name>

5.5. Build Output

The build output is located in build/tmp/deploy/images/<machine>. This directory contains:

  • FIT images (kernel + device trees + initramfs)

  • Root filesystem images (ext4, ubi, etc.)

  • Bootloader binaries

  • Firmware packages

6. Artifacts

The following list shows which image targets are supported for each machine:

lan969x

mchp-standalone-image
mchp-standalone-dev-image
mchp-standalone-gpt-image
mchp-standalone-dev-gpt-image

lan966x

mchp-standalone-image
mchp-standalone-dev-image
mchp-standalone-gpt-image
mchp-standalone-dev-gpt-image

sparx5

mchp-standalone-image
mchp-standalone-dev-image
mchp-standalone-gpt-image
mchp-standalone-dev-gpt-image
mchp-standalone-ubi-image

rpi4cm

mchp-standalone-image
mchp-standalone-dev-image
mchp-standalone-gpt-image
mchp-standalone-dev-gpt-image

vsc7514

mchp-standalone-mini-image

For detailed information about image contents and complete artifact listings, see meta-mchp/meta-mchp-ncs/conf/templates/default/conf-notes.txt.

After the build completes, the following artifacts are available in build/tmp-mchp-glibc/deploy/images/<machine>:

Artifact Type     File Name                                           Availability

FIT Image         fitImage-<machine>.bin                              All
Bare FIT Image    Image.itb                                           All
FIP               fip.bin                                             LAN969x, LAN966x
SquashFS          mchp-standalone-image-<machine>.rootfs.squashfs     All
Ext4 Disk Image   mchp-standalone-gpt-image-<machine>.rootfs.ext4     All except VSC7514
GPT Disk Image    mchp-standalone-gpt-image-<machine>.rootfs.gpt      All except VSC7514
UBI Disk Image    mchp-standalone-ubi-image-<machine>.rootfs.ubifs    Sparx5
U-Boot Image      u-boot-<machine>.bin                                All
U-Boot Image      u-boot-<machine>.bin-emmc                           Sparx5
U-Boot Image      u-boot-<machine>.bin-nand                           Sparx5
U-Boot Env        u-boot-mchp-initial-env-<machine>                   All
U-Boot Env        u-boot-mchp-initial-env-<machine>-emmc              Sparx5
U-Boot Env        u-boot-mchp-initial-env-<machine>-nand              Sparx5
DT Overlays       overlays/*.dtbo                                     LAN969x, LAN966x

Not all combinations of machine and target produce the same set of artifacts.

7. Selecting artifacts to build

The Yocto BSP uses custom FIT (Flattened Image Tree) images that bundle the kernel with multiple device tree binaries (DTBs) for different board variants.

The artifact ITB-Rootfs is built by default for most images.

The FIT image format is implemented in the ncs-kernel-fitimage.bbclass class and allows:

  • Single kernel image containing multiple device tree binaries (DTBs)

  • Boot configurations for different board variants

  • Embedded initramfs

  • Optional RSA2048 signing for secure boot

Machine configurations define which DTBs to include via the KERNEL_DEVICETREE and KERNEL_DEVICETREE_BUNDLE variables.
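To verify what a generated FIT image actually contains, the dumpimage tool from the host's u-boot-tools package can be used; a sketch, with the path assuming the default lan969x machine:

```shell
# List the kernel images, DTBs, and boot configurations bundled in the FIT
dumpimage -l build/tmp/deploy/images/lan969x/fitImage-lan969x.bin
```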

8. Changing the build

The next sections explain how you can change the build or its content.

8.1. Adding new packages

To add a new package to the Yocto BSP, you can either:

  1. Use an existing recipe from OpenEmbedded layers

  2. Create a new recipe in the BSP layer

For creating new recipes, follow the Yocto Project documentation: https://docs.yoctoproject.org/dev-manual/new-recipe.html
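As a starting point, a minimal recipe for a simple Makefile-based tool might look like this (all names, URLs, and checksums here are hypothetical placeholders):

```bitbake
# meta-mchp-ncs/recipes-mchp/my-tool/my-tool_1.0.bb (hypothetical example)
SUMMARY = "Example command-line utility"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://LICENSE;md5=<md5-of-license-file>"

SRC_URI = "git://github.com/example/my-tool.git;protocol=https;branch=main"
SRCREV = "<commit-hash>"
S = "${WORKDIR}/git"

do_compile() {
    oe_runmake
}

do_install() {
    install -D -m 0755 ${B}/my-tool ${D}${bindir}/my-tool
}
```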

To add a package to an image, edit the image recipe in meta-mchp-ncs/recipes-mchp/images/:

IMAGE_INSTALL:append = " <package-name>"

For machine-specific additions:

IMAGE_INSTALL:append:lan966x = " otp pmac"

8.2. Modifying the kernel

The kernel uses KCONFIG_MODE = "alldefconfig" with machine-specific defconfigs.

To modify the kernel configuration:

  1. Run the kernel menuconfig:

    $ source openembedded-core/oe-init-build-env
    $ MACHINE=lan966x bitbake linux-mchp -c menuconfig
  2. Make your changes and save

  3. Create a configuration fragment:

    $ bitbake linux-mchp -c diffconfig
  4. Add the fragment to your kernel recipe or layer

Alternatively, you can create a configuration fragment file and add it to the kernel recipe’s SRC_URI:

SRC_URI:append = " file://my-kernel-config.cfg"
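The fragment file itself contains plain kernel configuration symbols, for example (the options shown are illustrative):

```cfg
# my-kernel-config.cfg
CONFIG_BRIDGE=y
CONFIG_VLAN_8021Q=y
```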

8.3. Modifying device trees

Device tree source files are built as part of the kernel. To modify device trees:

  1. The device tree sources are located in the kernel source tree

  2. Make your changes to the .dts or .dtsi files

  3. Rebuild the kernel:

    $ source openembedded-core/oe-init-build-env
    $ bitbake linux-mchp -c compile -f
    $ bitbake linux-mchp

To add or remove device trees from the build, modify the KERNEL_DEVICETREE variable in the machine configuration file:

KERNEL_DEVICETREE = " \
    microchip/lan966x_evb.dtb \
    microchip/lan966x_evb_aqr.dtb \
"

8.4. Multi-Machine Recipe Pattern

Some recipes like symreg.bb build machine-specific variants from the same source using EXTRA_OECMAKE flags:

EXTRA_OECMAKE:sparx5 = "-DBUILD_SPARX5=ON"
EXTRA_OECMAKE:lan966x = "-DBUILD_LAN966X=ON -DBUILD_LAN9645X=ON"
EXTRA_OECMAKE:lan969x = "-DBUILD_LAN969X=ON"
EXTRA_OECMAKE:rpi4cm = "-DBUILD_LAN966X=ON -DBUILD_LAN969X=ON"

The recipe builds machine-specific binaries and creates a symlink: symreg -> symreg_${MACHINE}.
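The install step behind this pattern can be sketched as follows (a hypothetical do_install fragment, not the actual recipe):

```bitbake
do_install() {
    # Install the machine-specific binary and point the generic name at it
    install -D -m 0755 ${B}/symreg ${D}${bindir}/symreg_${MACHINE}
    ln -sf symreg_${MACHINE} ${D}${bindir}/symreg
}
```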

8.5. Adding a New Machine

To add a new machine to the Yocto BSP:

  1. Create a machine configuration file in meta-mchp-ncs/conf/machine/<machine-name>.conf

  2. Define the required variables:

    SOC_FAMILY = "microchip"
    MACHINEOVERRIDES =. "microchip:"
    DEFAULTTUNE = "cortexa8thf-neon"
    SERIAL_CONSOLES = "115200;ttyS0"
  3. Set the kernel provider:

    PREFERRED_PROVIDER_virtual/kernel = "linux-mchp"
  4. List device trees:

    KERNEL_DEVICETREE = " \
        microchip/my-board.dtb \
    "
    
    KERNEL_DEVICETREE_BUNDLE = " \
        microchip/my-board.dtb:my-board \
    "
  5. Add the machine to COMPATIBLE_MACHINE in relevant recipes (linux-mchp, u-boot-mchp, etc.)
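The last step can be done without modifying the BSP layer by using a .bbappend in your own layer. COMPATIBLE_MACHINE is a regular expression, so new machines are appended as alternatives (the machine name here is hypothetical):

```bitbake
# linux-mchp_%.bbappend
COMPATIBLE_MACHINE:append = "|my-machine"
```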

9. Using Docker

The Yocto BSP can be built using a Docker image provided by Microchip which contains Ubuntu 24.04 LTS and all necessary build packages.

You can avoid installing all build dependencies by using the dr helper script: https://github.com/microchip-ung/docker-run

Clone the repository and copy the dr script into a directory in your PATH, e.g. /usr/local/bin:

$ git clone https://github.com/microchip-ung/docker-run
$ cd ./docker-run
$ sudo cp dr /usr/local/bin/dr
$ sudo chmod a+x /usr/local/bin/dr

9.1. Docker Configuration

The dr script requires a configuration file, .docker.env, which must be present in the current directory or one of its parent directories.

The .docker.env file must export the user name and UID into the container environment so the container can run commands as a regular user (not root). Docker containers execute as root by default, whereas Yocto builds must run as a regular user.

Create a .docker.env file with the following content:

# Docker image configuration
MCHP_DOCKER_NAME="microchiptech/yocto-builder"
MCHP_DOCKER_TAG="latest"

# Mount options - user/uid required for non-root execution
MCHP_DOCKER_OPTIONS="--tmpfs /tmp:exec -e BLD_USER=$(id -un) -e BLD_UID=$(id -u)"

The .docker.env file can also map the user home folder into the container to access other software repositories stored on the local machine, such as the Linux kernel.

You will need to install Docker to use the dr script: https://docs.docker.com/engine/install/

The dr script automatically downloads the Docker image from its container registry the first time it is run.

9.2. Building with Docker

To build an image using Docker, pass the build commands to dr as a single quoted string so that your local shell does not interpret them before they reach the container:

$ dr "source openembedded-core/oe-init-build-env; bitbake mchp-standalone-image"

To build for a specific machine (e.g., LAN966x):

$ dr "source openembedded-core/oe-init-build-env; MACHINE=lan966x bitbake mchp-standalone-image"

Example for Sparx5:

$ dr "source openembedded-core/oe-init-build-env; MACHINE=sparx5 bitbake mchp-standalone-image"

9.3. Interactive Docker Shell

For manual commands and development work, you can start an interactive shell:

$ dr bash

Inside the container:

$ source openembedded-core/oe-init-build-env
$ MACHINE=sparx5 bitbake mchp-standalone-image

Note that you are the same user inside the container as outside, and the current folder is the same inside and outside the container.

Type exit to leave the Docker container.

9.4. Mounting Additional Directories

The default configuration maps volumes to enable access to the current directory. If you need to mount other folders in the container (such as local source repositories), add them to the MCHP_DOCKER_OPTIONS variable in .docker.env:

MCHP_DOCKER_OPTIONS="--tmpfs /tmp:exec -e BLD_USER=$(id -un) -e BLD_UID=$(id -u) -v /path/to/sources:/path/to/sources"

Now you can access the /path/to/sources folder both inside and outside the container.

10. Development Workflow

10.1. Modifying Recipes

After modifying a recipe, rebuild it with:

$ source openembedded-core/oe-init-build-env
$ bitbake <recipe-name> -c compile -f
$ bitbake <recipe-name>

10.2. Running Specific BitBake Tasks

BitBake recipes consist of tasks that can be run individually:

$ source openembedded-core/oe-init-build-env
$ bitbake <recipe-name> -c <task>

Common tasks include:

  • fetch - Download sources

  • unpack - Extract source archives

  • patch - Apply patches

  • configure - Run configuration scripts

  • compile - Build the software

  • install - Install to staging directory

  • deploy - Deploy to the deploy directory

10.3. Disabling BitBake Tasks

To skip task execution (like do_kernel_configcheck):

do_<taskname>[noexec] = "1"

Place this after inherit statements to override inherited tasks.
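For example, to skip the kernel configuration check from a .bbappend in your own layer (a sketch):

```bitbake
# linux-mchp_%.bbappend
do_kernel_configcheck[noexec] = "1"
```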

11. Key Components

11.1. Linux Kernel

  • Recipe: linux-mchp_6.%.bb

  • Location: meta-mchp-ncs/recipes-kernel/linux/

  • Uses custom FIT image class for bundled kernel + DTB images

  • Supports multiple device trees per machine

11.2. Bootloaders

Three U-Boot versions for different platforms:

  • u-boot-mchp_v2019.10.bb - VSC7514 (MIPS)

  • u-boot-mchp_v2021.04.bb - Raspberry Pi CM4

  • u-boot-mchp_v2024.04.bb - LAN966x, LAN969x, Sparx5

11.3. Trusted Firmware-A (TF-A)

For the lan966x, lan969x, and sparx5 platforms, the tfa recipe downloads a pre-built FIP (Firmware Image Package) from GitHub and patches U-Boot into it as the BL33 payload.

11.4. Switch Management Tools

The BSP includes various switch management tools:

  • vcap - VCAP (Versatile Content-Aware Processor) rule management

  • symreg - Register viewer/modifier via UIO (machine-specific builds)

  • pmac - PMAC table configuration (lan966x only)

  • qos-utils - QoS configuration

  • cfm/mrp - Layer 2 redundancy protocols

11.5. Real-Time Engine Stack (lan966x only)

  • mera - Microchip Ethernet RTE API for LAN9662

  • osal - OS abstraction layer

  • rtlabs-pnet - Profinet stack (depends on mera + osal)

12. Distribution Configuration

The mchp-ncs-linux distribution uses:

  • Init system: BusyBox with mdev (not systemd/udev)

  • Device management: Static /dev (USE_DEVFS=0)

  • Package format: IPK

  • Minimal DISTRO_FEATURES: No X11, Wayland, Bluetooth, WiFi - focused on embedded networking
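As an illustration of how such a configuration is typically expressed in Yocto (the values below are illustrative sketches, not the actual mchp-ncs-linux settings):

```bitbake
# Illustrative distro configuration fragment
PACKAGE_CLASSES = "package_ipk"
DISTRO_FEATURES:remove = "x11 wayland bluetooth wifi"
VIRTUAL-RUNTIME_init_manager = "busybox"
USE_DEVFS = "0"
```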

13. Build Optimization

The BSP includes several optimizations:

  • Shallow Git: Enabled (BB_GIT_SHALLOW=1) to reduce clone sizes

  • Download mirror: Microchip Artifactory serves pre-fetched sources

  • Shared state cache: SSTATE_DIR for faster rebuilds across machines

  • Parallel builds: Configured via BB_NUMBER_THREADS and PARALLEL_MAKE
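Equivalent settings can also be tuned per build host in build/conf/local.conf; for example (the values and path are illustrative and should be adapted to your machine):

```bitbake
# Example local.conf tuning
BB_NUMBER_THREADS = "8"
PARALLEL_MAKE = "-j 8"
SSTATE_DIR = "/path/to/shared/sstate-cache"
BB_GIT_SHALLOW = "1"
```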

14. Additional Resources