# Project Echo

**The "Black Box" Flight Recorder for Digital Meetings**

A privacy-first macOS utility that automatically captures audio from teleconferencing apps (Zoom, Teams, Meet) and generates searchable transcripts using local AI. Everything runs on-device with zero cloud uploads.
No virtual audio device needed! Uses ScreenCaptureKit to record both your microphone and meeting audio.
## Features

### Core Recording
- ScreenCaptureKit Integration - High-fidelity system audio capture
- Multi-track Recording - Separate tracks for system audio and microphone
- App-specific Capture - Target Zoom, Teams, Meet, and more
- Menu Bar Controls - Quick access to start/stop recording
- Marker Insertion - Tag important moments during calls
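The capture path above can be sketched in a few lines. This is a minimal illustration, not the app's actual AudioEngine code; the class name is made up, and error handling and file writing are omitted:

```swift
import ScreenCaptureKit
import AVFoundation

// Minimal sketch: system-audio capture via ScreenCaptureKit (macOS 13+).
final class SystemAudioCapture: NSObject, SCStreamOutput {
    private var stream: SCStream?

    func start() async throws {
        // Enumerate shareable content; a display is enough to build an audio filter.
        let content = try await SCShareableContent.excludingDesktopWindows(
            false, onScreenWindowsOnly: true)
        guard let display = content.displays.first else { return }

        let filter = SCContentFilter(display: display, excludingWindows: [])
        let config = SCStreamConfiguration()
        config.capturesAudio = true               // the key switch for audio
        config.excludesCurrentProcessAudio = true // don't record our own output

        let stream = SCStream(filter: filter, configuration: config, delegate: nil)
        try stream.addStreamOutput(self, type: .audio,
                                   sampleHandlerQueue: .global(qos: .userInitiated))
        try await stream.startCapture()
        self.stream = stream
    }

    func stream(_ stream: SCStream,
                didOutputSampleBuffer sampleBuffer: CMSampleBuffer,
                of type: SCStreamOutputType) {
        guard type == .audio else { return }
        // Hand the CMSampleBuffer to an AVAssetWriter / AVAudioFile here.
    }
}
```

The microphone track is captured separately with AVFoundation, which is what keeps the two tracks independent in the recording.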
### Intelligence Layer
- Local AI Transcription - WhisperKit (CoreML) for on-device processing
- Speaker Diarization - Identify who said what
- Smart Summarization - Extract action items and key topics
- Full-text Search - SQLite FTS5 for instant transcript search
- Zero Cloud Uploads - Everything stays on your device
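The transcription step might look roughly like this, assuming the WhisperKit Swift package API; `WhisperKit(model:)` and `transcribe(audioPath:)` reflect recent versions of that package and may change:

```swift
import WhisperKit

// Sketch of on-device transcription. The model is downloaded and compiled
// for CoreML on first use; after that, everything runs locally.
func transcribe(recordingPath: String) async throws -> String {
    let whisper = try await WhisperKit(model: "base")
    let results = try await whisper.transcribe(audioPath: recordingPath)
    return results.map(\.text).joined(separator: " ")
}
```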
### User Interface
- Beautiful Library UI - SwiftUI interface with audio player
- Export Options - Audio and transcript export
- Comprehensive Settings - Customize quality, storage, models
- Privacy Dashboard - Clear data usage transparency
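As a sketch of how the menu bar controls might be wired up with SwiftUI's `MenuBarExtra` (macOS 13+) — the type name and actions here are illustrative, not the app's actual UI module:

```swift
import SwiftUI
import AppKit

// Hypothetical menu bar entry point for the recording controls.
@main
struct EchoMenuBarApp: App {
    @State private var isRecording = false

    var body: some Scene {
        MenuBarExtra("Project Echo",
                     systemImage: isRecording ? "record.circle.fill" : "record.circle") {
            Button(isRecording ? "Stop Recording" : "Start Recording") {
                isRecording.toggle() // hand off to the AudioEngine here
            }
            Button("Insert Marker") { /* tag the current timestamp */ }
                .disabled(!isRecording)
            Divider()
            Button("Open Library") { /* show the SwiftUI library window */ }
            Button("Quit") { NSApplication.shared.terminate(nil) }
        }
    }
}
```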
## Architecture
```
+-----------------------------------------------------------+
|                       PROJECT ECHO                        |
+-----------------------------------------------------------+
|                                                           |
| +-------------+   +---------------+   +-----------------+ |
| | AudioEngine |   | Intelligence  |   |    Database     | |
| |             |   |               |   |                 | |
| | ScreenKit   |   | WhisperKit    |   | SQLite + FTS5   | |
| | AVCapture   |-->| CoreML        |-->| Recordings      | |
| |             |   | Diarization   |   | Transcripts     | |
| +-------------+   +---------------+   +-----------------+ |
|        |                  |                    |          |
|        +------------------+--------------------+          |
|                           |                               |
|                      +----------+                         |
|                      |    UI    |                         |
|                      |          |                         |
|                      | Menu Bar |                         |
|                      | Library  |                         |
|                      | Settings |                         |
|                      +----------+                         |
+-----------------------------------------------------------+
```
## Quick Start

### Prerequisites
- macOS Sonoma (14.0) or later
- Xcode Command Line Tools
- ~2GB disk space for Whisper models
### Build & Run

```bash
# Build the app
./scripts/build.sh

# Launch it
./scripts/run_app.sh
```
### First Launch

1. **Grant Permissions** (one-time setup)
   - Open System Settings > Privacy & Security
   - Microphone: Add ProjectEcho.app, enable checkbox
   - Screen Recording: Add ProjectEcho.app, enable checkbox
2. **Start Recording**
   - Click the menu bar icon
   - Select "Start Recording"
   - Join your meeting
   - Click "Stop Recording" when done
3. **View Transcript**
   - Open Library from the menu bar
   - Click on your recording
   - The AI transcript generates automatically
## What You Get
| Feature | Status |
|---|---|
| Records your microphone | Included |
| Records meeting audio | Included |
| Auto-detects Zoom/Teams/Meet | Included |
| AI transcription (WhisperKit) | Included |
| Searchable library | Included |
| No cloud uploads | Included |
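App detection can be as simple as polling NSWorkspace for known bundle identifiers. A sketch, with the caveat that the identifiers below are assumptions (Teams in particular has shipped under more than one bundle ID, and Meet runs inside a browser rather than its own app):

```swift
import AppKit

// Bundle identifiers to watch for. These are assumptions, not a tested list.
let meetingBundleIDs: Set<String> = [
    "us.zoom.xos",           // Zoom (assumed)
    "com.microsoft.teams",   // Teams classic (assumed)
    "com.microsoft.teams2",  // new Teams (assumed)
]

// Poll the workspace for any running meeting app.
func runningMeetingApps() -> [NSRunningApplication] {
    NSWorkspace.shared.runningApplications.filter { app in
        guard let id = app.bundleIdentifier else { return false }
        return meetingBundleIDs.contains(id)
    }
}
```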
## Project Structure

```
project-echo/
├── Sources/
│   ├── AudioEngine/      # ScreenCaptureKit + AVCapture
│   ├── Intelligence/     # WhisperKit transcription
│   ├── Database/         # SQLite management
│   ├── UI/               # SwiftUI views
│   └── App/              # Main entry point
├── Package.swift         # Swift Package Manager
├── Info.plist            # App metadata
└── scripts/              # Build and utility scripts
```
## Privacy & Security

### Local-First AI

- Whisper models run on-device via CoreML, using the Apple Neural Engine where available
- No audio or transcripts leave your device
- No analytics or tracking
### Data Storage

- Recordings: `~/Documents/ProjectEcho/Recordings/`
- Database: `~/Library/Application Support/ProjectEcho/echo.db`
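For illustration, the FTS5 index inside echo.db might look like the following. The table and column names here are hypothetical, not the app's actual schema:

```swift
import Foundation
import SQLite3

// Illustrative FTS5 schema -- not necessarily what echo.db actually contains.
var db: OpaquePointer?
sqlite3_open(":memory:", &db)  // the real app would open echo.db instead

sqlite3_exec(db, """
    CREATE VIRTUAL TABLE transcripts USING fts5(recording_id, speaker, text);
    INSERT INTO transcripts VALUES ('rec-1', 'Alice', 'Ship the beta on Friday');
    """, nil, nil, nil)

// MATCH hits the FTS5 index, so search stays fast as the library grows.
var stmt: OpaquePointer?
sqlite3_prepare_v2(
    db,
    "SELECT recording_id FROM transcripts WHERE transcripts MATCH 'beta'",
    -1, &stmt, nil)

var matchedID = ""
if sqlite3_step(stmt) == SQLITE_ROW {
    matchedID = String(cString: sqlite3_column_text(stmt, 0))
}
print(matchedID)  // rec-1
sqlite3_finalize(stmt)
sqlite3_close(db)
```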
### Required Permissions
- Screen Recording: For system audio capture
- Microphone: For your audio track
- File System: Read/write to save recordings
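The first two permissions can be checked programmatically before a capture starts. A sketch using `CGPreflightScreenCaptureAccess` and `AVCaptureDevice`, which gate Screen Recording and Microphone access respectively:

```swift
import AVFoundation
import CoreGraphics

// Check both capture permissions before starting a recording.
func hasRequiredPermissions() -> Bool {
    // Screen Recording consent also gates ScreenCaptureKit audio capture.
    let screenOK = CGPreflightScreenCaptureAccess()
    // Microphone consent for the user's own track.
    let micOK = AVCaptureDevice.authorizationStatus(for: .audio) == .authorized
    return screenOK && micOK
}

// Trigger the system prompts (or send the user to System Settings).
func requestPermissionsIfNeeded() {
    if !CGPreflightScreenCaptureAccess() {
        CGRequestScreenCaptureAccess()
    }
    AVCaptureDevice.requestAccess(for: .audio) { granted in
        // Reflect `granted` in the UI here.
        _ = granted
    }
}
```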
## Configuration

### General Settings
- Auto-transcribe - Generate transcripts automatically
- Whisper Model - tiny/base/small/medium (speed vs accuracy)
- Storage Location - Where recordings are saved
### Advanced Settings
- Sample Rate - 44.1kHz or 48kHz
- Audio Quality - Standard/High/Maximum
- CPU Usage - Optimize for performance
## Tech Stack
- Language: Swift
- Platform: macOS Sonoma+
- Audio: ScreenCaptureKit, AVFoundation
- AI: WhisperKit (CoreML)
- Database: SQLite with FTS5
- UI: SwiftUI
## Development

```bash
# Run tests
swift test

# Build a universal release binary
swift build -c release --arch arm64 --arch x86_64

# Debug audio issues
system_profiler SPAudioDataType
```
## Roadmap
- [x] Core Recording Engine
- [x] AI Transcription
- [x] UI Polish & Database
- [ ] Cloud Sync (Optional, user-controlled)
- [ ] Advanced Diarization
- [ ] Real-time Transcription
- [ ] Mac App Store Release
## License
Proprietary - All Rights Reserved
## Contributing

This is a prototype. For production deployment, additional work is needed:
- Code signing for App Store
- Comprehensive error handling
- Unit test coverage
- Performance profiling