In this blog post I will explore the process of setting up a minimal Rust project for embedded bare-metal development. The focus is on getting up and running, including setting up a debugging flow for instruction stepping fully integrated into my editor of choice (Emacs).
As of 2025 many popular microcontrollers such as the ESP32 already have a big Rust community and some easy templates to get up and running with Rust. Even less popular devices often have basic infrastructure in place, though setting up a project for them is a bit more of an exploratory experience. Still, a lot of the heavy lifting has already been done. I remember when I first started doing embedded Rust development in 2019 there was little to build upon, so getting started involved bootstrapping a peripheral access crate and writing a hardware abstraction layer from scratch. That is to say: within the last six years the ecosystem for embedded Rust projects has evolved impressively.
The device this blog post is about is the ch32v203, a RISC-V microcontroller developed by WCH. Searching for “ch32-hal rust” on Google quickly finds a hal with embassy support: ch32-hal. Great!
Let us look at the examples folder. Within the folder for the device, some basic examples are provided:
examples/ch32v203/src/bin
├── adc.rs
├── blinky.rs
├── flash_sections.rs
├── sdi_print.rs
├── spi-lcd-st7735.rs
└── uart.rs
That looks promising: apart from blinky, the hello world of embedded programming, there are some more complex examples for driving an LCD display, doing analog measurements and getting up and running with UART.
// uart.rs

#![no_std]
#![no_main]
#![feature(type_alias_impl_trait)]
#![feature(impl_trait_in_assoc_type)]

use ch32_hal::usart;
use embassy_executor::Spawner;
use embassy_time::{Duration, Timer};
use hal::gpio::{AnyPin, Level, Output, Pin};
use hal::usart::UartTx;
use {ch32_hal as hal, panic_halt as _};

#[embassy_executor::main(entry = "qingke_rt::entry")]
async fn main(spawner: Spawner) -> ! {
    let p = hal::init(Default::default());

    let mut led = Output::new(p.PB8, Level::Low, Default::default());

    let mut cfg = usart::Config::default();
    let mut uart = UartTx::new_blocking(p.USART1, p.PA9, cfg).unwrap();

    loop {
        Timer::after_millis(1000).await;

        uart.blocking_write(b"hello world from embassy main\r\n");

        led.toggle();
    }
}
The UART example is a good starting point for a small project: there is an entry point from which asynchronous tasks could be spawned, and printing program output via the serial console is enabled. But editing files inside the hal's examples folder is not the way forward.
Time to set up a small Rust project. For this the examples folder also contains most of what is required. The README does not contain a lot of information apart from which pins are the serial pins and links to two different development boards. But there are Cargo.toml, build.rs, .cargo/config.toml and memory.x files. The main hal crate directory also contains a rust-toolchain.toml file. Let's take these pieces and put them together in a standalone project.
ch32-project
├── .cargo
│   └── config.toml       # configure default Rust target and flashing commands
├── Cargo.lock
├── Cargo.toml            # project definition and dependencies
├── README.md
├── build.rs              # build script to link using the device's memory layout
├── memory.x              # memory layout and default handler config
├── rust-toolchain.toml   # configure default `nightly` compiler (embedded Rust's ecosystem relies on nightly features)
└── src
    └── main.rs
Some small adjustments are required for this to work. Within the Cargo.toml the reference to the device's hal needs to be changed from a path to a git dependency, since this code obviously does not live within the hal crate anymore. If the hal is part of a larger crate workspace, multiple such adjustments might be required. The main.rs here is just the UART example from above.
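In the standalone project the adjusted dependency ends up looking something like this (a sketch: the repository URL and the feature names are assumptions to check against the hal's documentation):

# Cargo.toml — path dependency swapped for a git dependency
[dependencies]
# previously, inside the hal repository: ch32-hal = { path = "../.." }
ch32-hal = { git = "https://github.com/ch32-rs/ch32-hal", features = ["ch32v203c8t6", "embassy", "rt"] }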
Since the ch32 devices use the RISC-V instruction set there is no need to install a custom Rust compiler or other shenanigans, and since this is bare-metal code there are no system libraries that we would need to link against. So as long as we have installed the riscv32imc-unknown-none-elf target via rustup we are good to go and can compile the code using cargo build --release.
. On my first
attempt I omitted the --release
flag and ran into a linker error because the resulting binary was to big and could not fit into the flash size of the device. This can be solved by adding some optimizations to the dev
profile in `Cargo.toml.
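Something along these lines keeps the debug build small enough to link (a sketch; which optimization options you actually want is a matter of taste):

# Cargo.toml
[profile.dev]
opt-level = "z"   # optimize for size
lto = true        # let the linker strip more dead code

Debug info stays enabled in the dev profile by default, so this does not get in the way of the debugging setup later in this post.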
Now that we have a binary it is time to get it onto the dev board. Oh no, instructions on how to accomplish this are thin; nothing can be found in the README. In cases like this it is worth looking into the .cargo/config.toml:
[build]
target = "riscv32imc-unknown-none-elf"

[target."riscv32imc-unknown-none-elf"]
rustflags = [
    # "-C", "link-arg=-Tlink.x",
]
# runner = "riscv64-unknown-elf-gdb -q -x openocd.gdb"
# runner = "riscv-none-embed-gdb -q -x openocd.gdb"
# runner = "gdb -q -x openocd.gdb"
# runner = "wlink -v flash"

runner = "wlink -v flash --enable-sdi-print --watch-serial --erase"
# runner = "wlink -v flash"
There are multiple example runners configured, with only one being active. Runners are commands used by cargo run to run the program after compilation has finished. Oftentimes the developers of a hal change the runner command to the one that works for their setup and comment the others out. In this case a program called wlink is used to flash to the target device. This is a Rust program that uses the proprietary WCH-Link USB debug probe to flash the program to the device. When I started tinkering with this I did not yet possess a WCH-Link debugger (more on that later). For reference, the dev board I am using is the BluePill-Plus-CH32, which can be flashed via USB when put into flashing mode using the RST and BOOT buttons. So how do I flash via USB? A little bit of searching and I found wchisp, an open source reimplementation of the MCU manufacturer's official WCHISPTool. It is marked as still work in progress, but flashing my development board works like a charm.
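For reference, the flashing flow then looks roughly like this (the binary name follows from the project name above; as far as I can tell wchisp understands ELF files directly):

# put the board into flashing mode (hold BOOT, tap RST), then:
cargo build --release
wchisp flash target/riscv32imc-unknown-none-elf/release/ch32-project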
At this point I have the code running on the device, can see the LED blinking, and have serial output when I attach an external serial-to-USB converter to the serial pins. I could just start developing the project code and use print-line debugging if required. But depending on the project that might be undesirable. And if your development machine only has a limited number of USB ports, having an additional serial converter attached only to receive debug output might seem excessive. In this case I want to go further and get actual state introspection capabilities. To do this a hardware debug adapter is required. It is a device that is attached to a normal host system via USB and connected to an MCU via a debug interface. There are multiple debug interfaces found in the wild, so depending on the device you need a compatible debug adapter for this to work. Common debug interfaces include jtag and, in the ARM world, swd; sadly neither of those is used by the ch32 family of devices. These devices expose a debug interface called SDI. I really did not find a lot of information about this interface, and as far as I can tell the only debugger that can use it is the proprietary WCH-Link adapter mentioned earlier. Now there might be more information out there, but there seem to be multiple protocols called SDI and the ones I found did not seem to fit what I am looking for; besides, I want to get stuff done and not obsess about protocols. So now I do need to get my hands on one of them 🙄.
Using a proprietary debug adapter and a protocol that is not as common has some follow-up challenges as well, because support for this debug adapter is not upstreamed into common open source tools for on-chip debugging, like openocd! The docs for probe.rs do mention the WCH-Link, but I am more familiar with openocd. (Though exploring probe.rs is on my endless list of things I should look into.) Anyway, for now I will stick to openocd + gdb. This is also the setup supported by MounRiver Studio, the official development studio for C development on WCH devices.
To nicely integrate this into my development flow I opted to set up the tools in a containerized fashion, allowing me to work on multiple machines, avoiding manual toolchain setup annoyances, and maybe even making it easier for other people to collaborate. So first I need to write a Containerfile and install everything that we need:
FROM debian:bookworm

RUN apt update -y && apt upgrade -y && \
    apt install git libjaylink-dev libusb-1.0-0 unzip curl libhidapi-hidraw0 xz-utils -y

RUN cd /root && \
    curl -L -o mrs-toolchain.tar.xz "https://github.com/ch32-riscv-ug/MounRiver_Studio_Community_miror/releases/download/1.92-toolchain/MRS_Toolchain_Linux_x64_V1.92.tar.xz" && \
    mkdir mrs-toolchain && \
    tar -xvf mrs-toolchain.tar.xz -C mrs-toolchain --strip-components=1 && \
    mv mrs-toolchain/OpenOCD/bin/openocd /usr/local/bin && \
    mv mrs-toolchain/OpenOCD/share/openocd /usr/local/share && \
    rm -rf mrs-toolchain mrs-toolchain.tar.xz && \
    # Use up to date xpack toolchains for gdb
    curl -L -o xpack-riscv-toolchain.tar.gz "https://github.com/xpack-dev-tools/riscv-none-elf-gcc-xpack/releases/download/v14.2.0-3/xpack-riscv-none-elf-gcc-14.2.0-3-linux-x64.tar.gz" && \
    mkdir xpack-toolchain && \
    tar -xvf xpack-riscv-toolchain.tar.gz -C xpack-toolchain --strip-components=1 && \
    mv xpack-toolchain/bin/* /usr/local/bin && \
    mv xpack-toolchain/lib/ /usr/local && \
    mv xpack-toolchain/lib64/ /usr/local && \
    mv xpack-toolchain/libexec /usr/local && \
    mv xpack-toolchain/riscv-none-elf /usr/local && \
    rm -rf xpack-toolchain xpack-riscv-toolchain.tar.gz

RUN mkdir -p /root/.config/gdb && echo "set auto-load safe-path /" >> /root/.config/gdb/gdbinit

ENTRYPOINT [ "/usr/bin/bash" ]
There are multiple things going on here:

- I need an openocd with support for the WCH-Link debugger and its SDI debug protocol, and while I did find some forks of openocd that claim to have support, I could not get them to compile correctly. So instead I opted for installing the binary version from MounRiver Studio directly. Luckily there is a community organization on GitHub that provides tarball downloads for the tools directly.
- I need a RISC-V compatible gdb. The MounRiver Studio tools do include gdb, but its version is below 14, which is the minimum version of gdb with DAP support. DAP (Debug Adapter Protocol) is a close cousin of LSP (Language Server Protocol) and enables editor integration of debugging capabilities. It is a lot more comfortable to set breakpoints in my editor and step through the code than to write gdb scripts and run them from my terminal. So instead I opted to install the xpack version of the RISC-V development tools, which includes gdb in a more recent version. The final RUN command enables gdb to run .gdbinit scripts on startup; since gdb runs in a container and will only have access to the parts of the system exposed to it, I do not see a lot of potential problems with enabling this feature.
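Once the image is built (using the build script shown further below), a quick smoke test along these lines confirms both tools made it into place:

podman run --rm --entrypoint openocd localhost/wch-dev-tools:latest --version
podman run --rm --entrypoint riscv-none-elf-gdb-py3 localhost/wch-dev-tools:latest --version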
Next I need to be able to run these tools comfortably within their containers from my shell or my editor. Enter some wrapper scripts (a small shoutout to an awesome friend of mine who introduced me to this approach, showing me that bash does have its uses 😉):
ch32-project/bin
├── build-wch-tools-container.sh
├── gdb
└── openocd
These scripts live within a bin directory within my repository. This allows me to do export PATH=$PWD/bin:$PATH from my project directory before opening my editor. This way these scripts will be used instead of the openocd or gdb installed on my system.
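A session then starts something like this (paths are illustrative):

cd ~/src/ch32-project
export PATH="$PWD/bin:$PATH"
emacs .   # the editor now resolves openocd and gdb to the wrappers

The first of the scripts builds the container: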
#!/usr/bin/env bash

set -euo pipefail

CONTAINER_NAME="localhost/wch-dev-tools:latest"
CONTAINER_TOOLS_BASEDIR="$(dirname "$(readlink -f "$0")")"

pushd "$CONTAINER_TOOLS_BASEDIR"
podman build -t "$CONTAINER_NAME" -f "../wch-tools.Containerfile" .
popd
The first script is just used to build the container initially and give it a name; this way I do not need to host the container on Docker Hub, though I still need to pull the base image from there initially. I like to use podman instead of docker, though the process would probably look exactly the same using docker. The more interesting part is the wrapper script used to run the containerized tools.
#!/usr/bin/env bash

set -euo pipefail

CONTAINER_IMAGE="localhost/wch-dev-tools:latest"
CONTAINER_TOOLS_BASEDIR="$(dirname "$(readlink -f "$0")")"

function _fatal {
    echo -e "\e[31mERROR\e[0m $(</dev/stdin)$*" 1>&2
    exit 1
}

declare -a PODMAN_ARGS=(
    "--rm" "-i" "--log-driver=none"
    "--network=host"
    "-v" "$PWD:$PWD:rw"
    "-w" "$PWD"
)

for device in /dev/bus/usb/*/*; do
    if udevadm info "$device" | grep -q "ID_VENDOR=wch.cn" && \
       udevadm info "$device" | grep -q "ID_MODEL=WCH-Link"; then
        DEBUGGER_DEV_PATH="$device"
        break
    fi
done

if [[ -z "${DEBUGGER_DEV_PATH:-}" ]]; then
    echo "Could not find hardware debugger … Exiting!" 1>&2
    exit 1
else
    # pass the debugger device through to the container
    PODMAN_ARGS+=("--device=$DEBUGGER_DEV_PATH")
fi

[[ -t 1 ]] && PODMAN_ARGS+=("-t")

if ! podman image exists "$CONTAINER_IMAGE"; then
    # attempt to build the container
    "$CONTAINER_TOOLS_BASEDIR/build-wch-tools-container.sh" 1>&2 ||
        _fatal "failed to build local image, cannot continue! … please ensure you have an internet connection"
fi

podman run "${PODMAN_ARGS[@]}" --entrypoint openocd "$CONTAINER_IMAGE" "$@"
The script above is the wrapper used to run openocd. Most of what this script does is gather the arguments to pass to podman in a variable, then check whether the container image already exists. If it does not, it is built using the previously discussed script. Then podman is invoked with all the required arguments, starting openocd as the entrypoint and passing any command line arguments along. The interesting part is which arguments podman receives. First the PODMAN_ARGS array is declared and filled with the default parameters:
- Specifying --network=host lets the container run without a network namespace, enabling the container to communicate with other services running on the host. Though for openocd it would have been possible to just pass along the ports that openocd opens for gdb to connect to using --publish, so this option is a bit overkill in this case.
- The project directory is mounted within the container; it is assumed the command is invoked from the project root directory as opposed to some other directory. The working directory is set to it as well, so running openocd behaves the same as running it from the host system directly.
The tricky part is finding the WCH-Link debugger device itself and passing access to it to the container. This happens in the loop over /dev/bus/usb: using udevadm info, the device properties of all attached USB devices are checked for the corresponding vendor and model of the debug adapter. If no debugger device is found there is no point in continuing, so in that case the script exits with an error. When using podman it is necessary to have set up udev rules giving your user access to the device; otherwise openocd will not be able to use it and will throw a permission error. This would also be required when not containerizing the setup, unless you explicitly run openocd as root.
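A rule along these lines should do the job (a sketch; the vendor ID is an assumption, verify it against the output of lsusb):

# /etc/udev/rules.d/60-wch-link.rules
# 1a86 is the QinHeng/WCH USB vendor ID (check yours with lsusb)
SUBSYSTEM=="usb", ATTRS{idVendor}=="1a86", MODE="0660", TAG+="uaccess"

After placing the file, reload with sudo udevadm control --reload-rules and replug the debugger. With permissions sorted out, the gdb wrapper is next: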
#!/usr/bin/env bash

set -euo pipefail

CONTAINER_IMAGE="localhost/wch-dev-tools:latest"
CONTAINER_TOOLS_BASEDIR="$(dirname "$(readlink -f "$0")")"

function _fatal {
    echo -e "\e[31mERROR\e[0m $(</dev/stdin)$*" 1>&2
    exit 1
}

declare -a PODMAN_ARGS=(
    "--rm" "-i" "--log-driver=none"
    "--network=host"
    "--pid=host"
    "-v" "$PWD:$PWD:rw"
    "-w" "$PWD"
)

[[ -t 1 ]] && PODMAN_ARGS+=("-t")

if ! podman image exists "$CONTAINER_IMAGE"; then
    # attempt to build the container
    "$CONTAINER_TOOLS_BASEDIR/build-wch-tools-container.sh" 1>&2 ||
        _fatal "failed to build local image, cannot continue! … please ensure you have an internet connection"
fi

podman run "${PODMAN_ARGS[@]}" --entrypoint riscv-none-elf-gdb-py3 "$CONTAINER_IMAGE" "$@"
The wrapper script for gdb looks similar but is actually less complicated, because all that really needs to happen is to start the correct version of gdb. In this case the --network=host option is mandatory, since otherwise the gdb container would not be able to connect to the openocd session. Apart from that I added the --pid=host option to disable process id isolation in the container, the reason being that protocols like DAP or LSP tend to communicate process ids and some server implementations do not like it if the communicated process id does not exist. The reason for using riscv-none-elf-gdb-py3 is that the DAP support of gdb is implemented in Python, so versions without Python support enabled cannot be used with DAP.
Now that the wrapper scripts can transparently be used instead of the native tools, the next step is to configure the tools to work together as expected. For this an openocd and a gdb config file need to be written.
set _CHIPNAME ch32v203
set _TARGETNAME $_CHIPNAME.cpu
#bindto 0.0.0.0
adapter driver wlinke
adapter speed 6000
transport select sdi
sdi newtap $_CHIPNAME cpu -irlen 5 -expected-id 0x00001
target create $_TARGETNAME.0 wch_riscv -chain-position $_TARGETNAME
$_TARGETNAME.0 configure -work-area-phys 0x20000000 -work-area-size 10000 -work-area-backup 1
set _FLASHNAME $_CHIPNAME.flash
flash bank $_FLASHNAME wch_riscv 0x00000000 0 0 0 $_TARGETNAME.0
init
I cobbled this together from the openocd documentation and the wch examples found in the MounRiver Studio distribution of openocd. An important note: if you are not using --network=host when spawning the container, the bindto instruction needs to be uncommented. The reason is that if the container runs in its own network namespace, openocd listening on localhost within that namespace means it will not be reachable from the host.
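With that file saved as openocd.cfg in the project root, starting the debug server through the wrapper is a one-liner; openocd reads openocd.cfg from the working directory by default and then listens for gdb connections on port 3333 (the file name is my convention, nothing above mandates it):

openocd   # equivalent to: openocd -f openocd.cfg

The gdb side is handled by a small .gdbinit in the project root: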
target extended-remote :3333
set remotetimeout 2000
file target/riscv32imc-unknown-none-elf/release/ch32-project
monitor reset halt
With the .gdbinit file in place, starting openocd and afterwards gdb works as expected. The last step is to configure my editor to start gdb when initiating debugging. I will not configure it to start openocd beforehand, mostly because it is not worth the extra hassle, and manually starting openocd is not that big of a deal to me.
I am using Doom Emacs, which sets up dape as the plugin for DAP support. It brings with it a lot of pre-configured debugging targets, but for using gdb with Rust and connecting to openocd a custom configuration is required.
(after! dape
  (add-to-list
   'dape-configs
   `(gdb-dap-openocd
     ensure (lambda (config)
              (dape-ensure-command config)
              (let* ((default-directory
                      (or (dape-config-get config 'command-cwd)
                          default-directory))
                     (command (dape-config-get config 'command))
                     (output (shell-command-to-string (format "%s --version" command)))
                     (version (save-match-data
                                (when (string-match "GNU gdb \\(?:(.*) \\)?\\([0-9.]+\\)" output)
                                  (string-to-number (match-string 1 output))))))
                (unless (>= version 14.1)
                  (user-error "Requires gdb version >= 14.1"))))
     modes ()
     command-cwd dape-command-cwd
     command "gdb"
     command-args ("--interpreter=dap")
     :request nil
     :program nil
     :args []
     :stopAtBeginningOfMainSubprogram nil)))
This is quite unspectacular; it is mostly just copied from the normal dape configuration for gdb, but with all default arguments set to nil. The reason is that when using the :program arg, dape would run into an error terminating the gdb session. Setting all options to nil and letting .gdbinit handle loading the symbols and connecting to openocd just works.
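As a small convenience, dape's dape-command variable can preselect this configuration per project from a .dir-locals.el (a sketch; it merely pre-fills the prompt dape shows when starting a session):

;; .dir-locals.el in the project root
((nil . ((dape-command . (gdb-dap-openocd)))))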
And that is it: now when I set a breakpoint in my editor and start a debugger session I can step through the code and introspect the state instruction by instruction. I still have to flash the program before using this, but that is not a big hurdle to overcome.
Anyway, I hope you enjoyed this little post. If you have thoughts or questions feel free to reach out to me. (Note: I should probably delete the link to my X account because I am no longer using it … though I never really got into X, formerly Twitter, anyway.)
I hope to find the time to write some more little posts on embedded Rust programming as this project continues. Happy hacking 😉