Raspberry Pi CM4 Setup Guide in 2026

I2S MEMS Microphone + Google Coral USB Accelerator + Keyword Spotting

Posted on December 28, 2025 by Mikel Bahn

Overview

This guide shows the complete setup of a Raspberry Pi Compute Module 4 with an I2S MEMS microphone and Google Coral USB Accelerator for real-time keyword recognition.

Hardware Required:
  • Raspberry Pi Compute Module 4
  • CM4 Carrier Board
  • Adafruit I2S MEMS Microphone Breakout (e.g. SPH0645LM4H)
  • Google Coral USB Accelerator
  • USB data cable (not just a charging cable!) for the Coral

Hardware Wiring

I2S MEMS Microphone Pin Assignment

Microphone Pin   Raspberry Pi Pin   Description
3V               3.3V               Power supply
GND              GND                Ground
SEL              GND                Channel select (GND = Left, 3.3V = Right)
BCLK             BCM 18 (Pin 12)    Bit clock
DOUT             BCM 20 (Pin 38)    Data out
LRCL             BCM 19 (Pin 35)    Left/right clock (word select)

⚠️ Important: The Coral USB Accelerator requires a USB data cable! A charging-only cable will not work. This was the biggest issue in this setup.

⚙️ Software Installation

1. Enable I2S Interface

Edit the boot configuration (on Raspberry Pi OS Bookworm and later the file lives at /boot/firmware/config.txt):

sudo nano /boot/config.txt

Add the following lines in the [cm4] section (or in the [all] section):

dtparam=i2s=on
dtoverlay=i2s-mems

Save and reboot:

sudo reboot

2. Verify Audio Device

After reboot, check if the microphone was detected:

arecord -l

You should see something like:

card 2: sndrpigooglevoice [...], device 0: [...]
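
If you prefer scripting this check, here is a minimal Python sketch that reads ALSA's card list straight from /proc/asound/cards. The script name and the "googlevoi" search string are my own choices based on the output above, so adjust the fragment if your overlay registers a different card name:

# find_mic_card.py - locate the I2S capture card without reading "arecord -l" by eye.
# The name fragment "googlevoi" is an assumption based on the output above; adjust it
# if your overlay registers the card under a different name.
import re

with open("/proc/asound/cards") as f:
    cards = f.read()
print(cards)

match = re.search(r"^\s*(\d+)\s+\[\S*googlevoi", cards, re.MULTILINE | re.IGNORECASE)
if match:
    print(f"I2S mic is ALSA card {match.group(1)} -> record from plughw:{match.group(1)},0")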

3. Test Audio Recording

Test hardware recording (Stereo, 48kHz, 32-bit):

arecord -D hw:2,0 -c 2 -r 48000 -f S32_LE -t wav -d 5 test.wav

Playback:

aplay test.wav

Test format conversion (Mono, 16kHz, 16-bit for ML):

arecord -D plughw:2,0 -c 1 -r 16000 -f S16_LE -t wav -d 5 test.wav
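
The same test recording can be done from Python once python3-pyaudio is installed in step 6. Treat this as a sketch, not part of the project: the file names are arbitrary, it uses the default capture device (pass input_device_index to pa.open() if the I2S card isn't the default), and it relies on ALSA to provide the 16 kHz / 16-bit format:

# record_16k.py - Python counterpart to the arecord test above: 5 s, mono, 16 kHz, 16-bit.
# Assumes python3-pyaudio (installed in step 6) and that the I2S mic is reachable as the
# default input device; otherwise pass input_device_index to pa.open().
import wave
import pyaudio

RATE, SECONDS, CHUNK = 16000, 5, 1024

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)
frames = [stream.read(CHUNK) for _ in range(RATE * SECONDS // CHUNK)]
stream.stop_stream()
stream.close()
pa.terminate()

with wave.open("test_py.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)        # 16-bit samples = 2 bytes
    wav.setframerate(RATE)
    wav.writeframes(b"".join(frames))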

4. Create Python Virtual Environment

mkdir -p ~/vscodeProjects/coral
cd ~/vscodeProjects/coral
python3 -m venv coral-env
source coral-env/bin/activate

5. Clone Keyword Spotter Project

cd ~/vscodeProjects/coral
git clone https://github.com/google-coral/project-keyword-spotter.git
cd project-keyword-spotter

6. Install System Dependencies

Run the included install script:

bash install_requirements.sh

Or manually:

sudo apt-get install -y python3 python3-pyaudio python3-numpy python3-scipy
sudo apt-get install -y python3-dev libsdl-image1.2-dev libsdl-mixer1.2-dev 
sudo apt-get install -y libsdl-ttf2.0-dev libsdl1.2-dev libportmidi-dev
sudo apt-get install -y ffmpeg libswscale-dev libavformat-dev libavcodec-dev

7. Install Edge TPU Runtime

Install the current libedgetpu library:

echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-get update
sudo apt-get install libedgetpu1-std

Info: There's also libedgetpu1-max for maximum performance, but it runs hotter.

8. Install Python Packages

In the activated virtual environment:

source ~/vscodeProjects/coral/coral-env/bin/activate

# Install PyCoral (includes compatible tflite-runtime)
pip3 install --extra-index-url https://google-coral.github.io/py-repo/ pycoral

# Additional packages
pip3 install pygame PyUserInput

9. Download Models

cd ~/vscodeProjects/coral/project-keyword-spotter

# EdgeTPU Model
wget https://github.com/google-coral/project-keyword-spotter/raw/master/models/voice_commands_v0.8_edgetpu.tflite -P models/

# CPU Model (Fallback)
wget https://github.com/google-coral/project-keyword-spotter/raw/master/models/voice_commands_v0.8.tflite -P models/

10. Connect Coral USB Accelerator

IMPORTANT: Use a USB data cable, not just a charging cable!

After connecting, check:

lsusb | grep -i "global\|google\|coral"

You should see the Coral device (Global Unichip Corp.).
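
With PyCoral installed (step 8) and the accelerator plugged in, you can also check from the virtual environment whether the runtime sees the device. list_edge_tpus() is part of PyCoral's public API; the script name is arbitrary:

# coral_check.py - confirm that PyCoral/libedgetpu can see the USB accelerator.
# An empty list usually points back to the cable, the udev permissions, or the runtime install.
from pycoral.utils.edgetpu import list_edge_tpus

tpus = list_edge_tpus()
print(f"Edge TPUs found: {len(tpus)}")
for tpu in tpus:
    print(tpu)   # e.g. {'type': 'usb', 'path': ...}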

11. Set USB Permissions (optional)

If permission issues occur:

echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="1a6e", ATTRS{idProduct}=="089a", MODE="0666"' | sudo tee /etc/udev/rules.d/99-edgetpu-accelerator.rules
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="18d1", ATTRS{idProduct}=="9302", MODE="0666"' | sudo tee -a /etc/udev/rules.d/99-edgetpu-accelerator.rules

sudo udevadm control --reload-rules
sudo udevadm trigger

12. Start the Keyword Spotter!

cd ~/vscodeProjects/coral/project-keyword-spotter
source ~/vscodeProjects/coral/coral-env/bin/activate

# With Edge TPU (fast)
python3 run_model.py --model_file models/voice_commands_v0.8_edgetpu.tflite

# Or without Edge TPU on CPU (slower)
python3 run_model.py --model_file models/voice_commands_v0.8.tflite

✅ Done! The system should now recognize keywords like "yes", "no", "up", "down", "left", "right", "on", "off", "stop", and "go".
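
If you want to poke at the model outside run_model.py, the sketch below only loads it via PyCoral and prints its tensor shapes. It deliberately skips the project's audio feature pipeline, so it's a smoke test for the delegate and model file, nothing more; the script name is arbitrary:

# inspect_model.py - smoke test: load the Edge TPU model via PyCoral and print its
# input/output tensor shapes. Run it from the project directory inside the virtualenv.
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter("models/voice_commands_v0.8_edgetpu.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    print("input :", detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["shape"], detail["dtype"])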

Troubleshooting

Problem: "Failed to load delegate from libedgetpu.so.1"

Solution:

  • Check if the library is installed: ldconfig -p | grep edgetpu (or run the Python check after this list)
  • Make sure the Coral device is connected with a data cable
  • Check if the device is recognized: lsusb
  • A reboot may help: sudo reboot
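
To narrow this down from Python, you can also try loading the delegate directly. load_delegate() comes with the tflite_runtime that PyCoral pulls in and raises a ValueError when libedgetpu can't be loaded; the script name is arbitrary:

# delegate_check.py - try to load libedgetpu.so.1 directly to separate "library missing"
# from "model/device problems". Uses the tflite_runtime that ships alongside PyCoral.
from tflite_runtime.interpreter import load_delegate

try:
    load_delegate("libedgetpu.so.1")
    print("Edge TPU delegate loaded OK")
except ValueError as err:
    print("Delegate failed to load:", err)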

Problem: "ALSA lib ... audio open error"

Solution:

  • Check the audio card number with arecord -l
  • Adjust the card number in the device string (e.g. hw:2,0 instead of hw:0,0)
  • Use plughw:X,Y for automatic format conversion

Problem: "Sample format not available"

Solution:

  • I2S MEMS mics use 32-bit format: -f S32_LE
  • I2S outputs stereo: -c 2
  • Use plughw: for automatic conversion to mono/16-bit

Problem: Version Incompatibilities

Solution:

  • Use current versions: libedgetpu1-std + latest PyCoral
  • For older projects: all components must come from the same generation (all old or all new)
  • PyCoral automatically includes the matching tflite-runtime version

Technical Background

What is I2S?

I2S (Inter-IC Sound) is a digital audio bus with three signals:

  • BCLK (Bit Clock): Clock for each data bit
  • LRCL (Left/Right Clock / Word Select): Switching between channels
  • DOUT (Data Out): The actual audio data

At 48kHz stereo 32-bit: BCLK = 48,000 × 32 × 2 = 3.072 MHz
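
As a quick check of that arithmetic in Python:

# bclk.py - the bit clock arithmetic from above.
def bclk_hz(sample_rate, bits_per_sample, channels=2):
    # BCLK = sample rate x bits per slot x number of channels
    return sample_rate * bits_per_sample * channels

print(bclk_hz(48_000, 32))   # 3072000 Hz = 3.072 MHz (this setup)
print(bclk_hz(16_000, 16))   # 512000 Hz, shown only for comparison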

Audio Format Conversion

Hardware (I2S MEMS Mic) provides: Stereo, 48kHz, S32_LE (32-bit)

ML Model requires: Mono, 16kHz, S16_LE (16-bit)

Conversion via ALSA:

  • hw:X,Y - Direct hardware access, no conversion
  • plughw:X,Y - With automatic format conversion (sketched in Python below)
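
If you ever need to do this conversion yourself instead of letting plughw handle it, a rough numpy/scipy sketch (both installed in step 6) could look like the following. The function name and the assumption that the raw capture is S32_LE stereo with the signal on the left channel (SEL tied to GND) are illustrative, not taken from the project:

# convert_format.py - what plughw does implicitly, sketched in numpy/scipy:
# stereo / 48 kHz / S32_LE capture  ->  mono / 16 kHz / S16_LE for the model.
import numpy as np
from scipy.signal import resample_poly

def to_model_format(raw_s32_stereo: bytes) -> np.ndarray:
    frames = np.frombuffer(raw_s32_stereo, dtype="<i4").reshape(-1, 2)
    mono = frames[:, 0].astype(np.float64)    # SEL tied to GND -> signal is on the left channel
    mono = resample_poly(mono, up=1, down=3)  # 48 kHz -> 16 kHz
    mono = mono / 2**31                       # scale the 32-bit range down to roughly [-1, 1)
    return np.clip(mono * 32767, -32768, 32767).astype(np.int16)  # S16_LE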

Edge TPU vs. CPU

Property            Edge TPU           CPU
Inference Time      ~5-10 ms           ~50-100 ms
Hardware Required   Coral USB/PCIe     No extra HW
Model Format        *_edgetpu.tflite   *.tflite
Power Consumption   ~2 W               Variable
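
To reproduce rough numbers like these on your own hardware, a small benchmark sketch can time invoke() on both models from step 9. It feeds zero-valued dummy input, so it measures latency only, not recognition quality, and the helper name is my own:

# benchmark.py - rough per-inference latency for the Edge TPU model vs. the CPU model.
# Zero-valued dummy input; this measures invoke() time only, not recognition accuracy.
import time
import numpy as np
from pycoral.utils.edgetpu import make_interpreter
from tflite_runtime.interpreter import Interpreter

def average_ms(interpreter, runs=50):
    interpreter.allocate_tensors()
    detail = interpreter.get_input_details()[0]
    interpreter.set_tensor(detail["index"], np.zeros(detail["shape"], dtype=detail["dtype"]))
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    return (time.perf_counter() - start) / runs * 1000

print("Edge TPU: %.1f ms" % average_ms(make_interpreter("models/voice_commands_v0.8_edgetpu.tflite")))
print("CPU     : %.1f ms" % average_ms(Interpreter(model_path="models/voice_commands_v0.8.tflite")))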

Recognized Keywords

The voice_commands_v0.8 model recognizes the following words:

  • Directions: up, down, left, right
  • Confirmations: yes, no
  • Controls: on, off, stop, go
  • Other: silence (no word), unknown (unknown word)

Solution Summary

The final working configuration:

  • ✅ I2S MEMS microphone correctly wired (SEL to GND for left channel)
  • ✅ Device tree overlay enabled: dtparam=i2s=on + dtoverlay=i2s-mems
  • ✅ Audio device recognized as Card 2: plughw:2,0
  • ✅ Coral USB Accelerator connected with USB data cable
  • ✅ Current versions installed: libedgetpu1-std + latest PyCoral
  • ✅ Compatible models: voice_commands_v0.8_edgetpu.tflite

Key Learnings:
  • The USB cable for the Coral must be a data cable!
  • All versions must match (libedgetpu + tflite-runtime + models)
  • I2S hardware always delivers stereo 32-bit, even with a mono microphone
  • Use plughw: for automatic audio format conversion