Neural networks and other machine learning workloads are often associated with powerful processors and GPUs. However, as we've seen on this website before, AI is also moving to the very edge, and the BitNetMCU open-source project further shows that it is possible to train and run inference on low-bit quantized neural networks using low-end RISC-V microcontrollers such as the inexpensive CH32V003.
As a reminder, the CH32V003 is based on the QingKe 32-bit RISC-V2A processor, which supports two levels of interrupt nesting. It is a compact, low-power, general-purpose 48 MHz microcontroller with 2KB of SRAM and 16KB of flash. The chip comes in TSSOP20, QFN20, SOP16, or SOP8 packages.
To run machine learning on the CH32V003 microcontroller, the BitNetMCU project relies on Quantization Aware Training (QAT) and fine-tuning of the inference code and model structure, which makes it possible to exceed 99% test accuracy on a 16×16 MNIST dataset without using any multiplication instructions. This is impressive considering the 48 MHz chip only has 2 kilobytes of RAM and 16 kilobytes of flash memory.
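To see why low-bit weights can remove the need for multiplication, consider weights reduced to -1, 0, and +1: each multiply-accumulate collapses into an add, a subtract, or a skip. The snippet below is a simplified illustration of that idea using a hypothetical 2-bit packing; it is not the exact encoding or kernel used by BitNetMCU, which supports several quantization levels.

```c
#include <stdint.h>

// Illustrative only: a multiply-free dot product for ternary weights
// (-1, 0, +1) packed two bits per weight, four weights per byte.
// Simplified sketch of the general idea, not BitNetMCU's actual kernel.
int32_t dot_ternary(const uint8_t *packed_w, const int8_t *act, int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        // Extract the 2-bit code for weight i.
        uint8_t code = (packed_w[i >> 2] >> ((i & 3) * 2)) & 0x3;
        // Hypothetical mapping: 0 -> 0, 1 -> +1, 2 -> -1.
        if (code == 1)      acc += act[i];   // +1: add the activation
        else if (code == 2) acc -= act[i];   // -1: subtract the activation
        // 0: skip entirely; no multiply instruction needed anywhere
    }
    return acc;
}
```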
The training data pipeline for this project is based on PyTorch and consists of several Python scripts that run on any computer. These include:
trainingparameters.yaml - configuration file used to set all the parameters for training the model
training.py - Python script that trains the model, then stores it in the model data folder as a .pth file (weights are stored as floats, with quantization happening on the fly during training)
exportquant.py - quantized model export script that converts the saved trained model into a quantized format and exports it to a C header file (BitNetMCU_model.h); a simplified sketch of such a header is shown after this list
Optional test-inference.py script that calls the DLL (compiled from the inference code) to test and compare results with the original Python model
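For reference, the sketch below shows, in a very simplified form, the kind of C header exportquant.py could generate: quantized weights packed into constant arrays that live in flash. The macro names, layer sizes, and packing here are assumptions for illustration only; the real BitNetMCU_model.h defines its own layout.

```c
// Hypothetical, simplified view of an exported model header.
// The real BitNetMCU_model.h generated by exportquant.py defines its own
// layer names, bit widths, and packing; this only shows the concept of
// shipping quantized weights as constant arrays stored in flash.
#ifndef BITNETMCU_MODEL_H
#define BITNETMCU_MODEL_H

#include <stdint.h>

#define MODEL_INPUT_SIZE   256   // flattened 16x16 MNIST input
#define MODEL_HIDDEN_SIZE  64    // example hidden width, not the real value
#define MODEL_OUTPUT_SIZE  10    // ten digit classes

// Quantized weights packed four 2-bit values per byte (assumed packing)
static const uint8_t L1_weights[MODEL_INPUT_SIZE * MODEL_HIDDEN_SIZE / 4] =
    { 0 /* ... real data is generated by exportquant.py ... */ };
static const uint8_t L2_weights[MODEL_HIDDEN_SIZE * MODEL_OUTPUT_SIZE / 4] =
    { 0 /* ... */ };

#endif // BITNETMCU_MODEL_H
```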
The inference engine (BitNetMCU_inference.c) is implemented in ANSI C, and you can use it with the CH32V003 RISC-V MCU or port it to any other microcontroller. You can test inference on 10 digits by compiling and executing BitNetMCU_MNIST_test.c. The model data is in the BitNetMCU_model.h file, and the test data is in the BitNetMCU_MNIST_test_data.h file. You can check out the code and follow the instructions in the readme.md file on GitHub to give machine learning on the CH32V003 a try.
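As a rough idea of what such a test looks like, here is a hypothetical, self-contained harness in the spirit of BitNetMCU_MNIST_test.c. The function and symbol names are assumptions (the real entry points live in BitNetMCU_inference.c and the generated headers), and the data is replaced with dummies so the sketch compiles and runs on its own.

```c
// Hypothetical test harness, loosely modeled on what BitNetMCU_MNIST_test.c
// does: feed one stored 16x16 test digit through the inference code and
// compare the predicted class with its label. All names below are
// illustrative assumptions, not the repository's actual symbols.
#include <stdint.h>
#include <stdio.h>

// In the real test, image and label come from BitNetMCU_MNIST_test_data.h.
static const int8_t  test_image_0[256] = { 0 };  // flattened 16x16 digit (dummy)
static const uint8_t test_label_0      = 7;      // ground-truth label (dummy)

// Stand-in for the classifier implemented in BitNetMCU_inference.c; the
// actual entry points and signatures in the repository differ.
static uint8_t BitNetMCU_Classify(const int8_t *input)
{
    (void)input;
    return 7;  // dummy prediction so this sketch compiles and runs stand-alone
}

int main(void)
{
    uint8_t predicted = BitNetMCU_Classify(test_image_0);
    printf("expected %u, predicted %u -> %s\n",
           test_label_0, predicted,
           predicted == test_label_0 ? "OK" : "MISMATCH");
    return predicted == test_label_0 ? 0 : 1;
}
```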
Dennis Mwihia is a technical writer specializing in IoT, PCBs, SBCs, and microcontroller boards. He has worked with several companies in these areas and has over 5 years of research, writing, and software development experience.