Soothly

An offline baby cry detection system I built to run locally on mobile devices, helping parents decode what their little ones need.

Soothly Logo

As a new dad, I wanted a small weekend project to help with parenting. What started as a simple idea quickly turned into a deep dive down the rabbit hole of analyzing different baby cry datasets and building a complete training pipeline to make everything work on-device. Classic scope creep, but worth every minute!

In the future, I plan to run these same models on embedded devices as part of baby monitoring audio/video systems. Stay tuned!

Demo

Soothly in action, analyzing baby cries and providing insights

Preview

Soothly classification
Soothly history
Soothly settings

Soothly app interface showing cry classification and history tracking

Tech Stack

React Native · TypeScript · Machine Learning · ONNX · Audio Processing

Key Features

How It Works

  1. Audio Capture & Preprocessing: The app records audio using the device's microphone, resamples it to 16 kHz, and converts it to mono for consistent processing (see the preprocessing sketch after this list).
  2. Feature Extraction: I extract rich audio features, including MFCC, Chroma, Mel Spectrogram, Spectral Contrast, and Tonnetz, to capture the unique characteristics of the baby's cry.
  3. Feature Aggregation: Features are calculated over small frames of audio and averaged into a compact 194-element feature vector.
  4. Model Inference: The feature vector is fed into an ONNX model that predicts the cry type along with a confidence score (see the inference sketch below).
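
Here is a minimal sketch of the preprocessing in step 1, assuming raw PCM samples arrive as a `Float32Array`. The helper names and the linear-interpolation resampler are illustrative, not the exact code the app uses.

```typescript
// Simplified sketch of step 1: downmix interleaved PCM to mono and
// resample to 16 kHz. Helper names and the resampling approach are illustrative.

const TARGET_SAMPLE_RATE = 16000;

/** Average interleaved channels into a single mono signal. */
function downmixToMono(samples: Float32Array, channels: number): Float32Array {
  if (channels === 1) return samples;
  const frames = Math.floor(samples.length / channels);
  const mono = new Float32Array(frames);
  for (let i = 0; i < frames; i++) {
    let sum = 0;
    for (let c = 0; c < channels; c++) sum += samples[i * channels + c];
    mono[i] = sum / channels;
  }
  return mono;
}

/** Resample a mono signal to the target rate using linear interpolation. */
function resampleLinear(
  mono: Float32Array,
  fromRate: number,
  toRate: number = TARGET_SAMPLE_RATE,
): Float32Array {
  if (fromRate === toRate) return mono;
  const ratio = fromRate / toRate;
  const outLength = Math.floor(mono.length / ratio);
  const out = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = i * ratio;
    const left = Math.floor(pos);
    const right = Math.min(left + 1, mono.length - 1);
    const frac = pos - left;
    out[i] = mono[left] * (1 - frac) + mono[right] * frac;
  }
  return out;
}
```

A 44.1 kHz stereo recording, for example, would go through `downmixToMono(raw, 2)` followed by `resampleLinear(mono, 44100)` before feature extraction.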

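And a sketch of steps 3 and 4, assuming onnxruntime-react-native as the ONNX runtime. The per-frame feature matrix is whatever step 2 produces, and the input tensor name is a placeholder for whatever name the exported model actually declares.

```typescript
import { InferenceSession, Tensor } from 'onnxruntime-react-native';

const FEATURE_SIZE = 194; // MFCC + Chroma + Mel Spectrogram + Spectral Contrast + Tonnetz

/** Average per-frame feature vectors into one fixed-length 194-element vector. */
function aggregateFeatures(frameFeatures: number[][]): Float32Array {
  const aggregated = new Float32Array(FEATURE_SIZE);
  for (const frame of frameFeatures) {
    for (let i = 0; i < FEATURE_SIZE; i++) aggregated[i] += frame[i];
  }
  for (let i = 0; i < FEATURE_SIZE; i++) aggregated[i] /= frameFeatures.length;
  return aggregated;
}

/** Run the aggregated feature vector through the ONNX classifier. */
async function classifyCry(
  session: InferenceSession,
  features: Float32Array,
): Promise<{ label: number; confidence: number }> {
  const input = new Tensor('float32', features, [1, FEATURE_SIZE]);
  // 'input' is a placeholder feed name; the real one comes from the exported model.
  const results = await session.run({ input });
  const scores = results[session.outputNames[0]].data as Float32Array;
  // Pick the highest-scoring class as the predicted cry type.
  // If the model outputs raw logits, a softmax would be needed before
  // reporting the score as a confidence.
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return { label: best, confidence: scores[best] };
}
```

Creating the session once at startup (`await InferenceSession.create(modelPath)`) and reusing it across recordings keeps model initialization off the hot path.
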
Tech Challenges

Technical Implementation