Development Progress
Lyra AI Hip-Hop Platform — Backend Pipeline
Chenye Wang · Peijun Wu · Peabody Institute, Johns Hopkins University
4 versions shipped · 36+ tests passing · Status: Deploying
v0.1
Foundation
Architecture
- DAG-based pipeline: Orchestrator dispatches tasks to modular Executors
- Versioned artifact store for reproducible project state
- State machine managing pipeline transitions (IDLE → EXPORT_DONE)
- Policy config (YAML) for system-level constraints
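A minimal sketch of what the transition guard could look like. The intermediate state names and the transition table are assumptions for illustration; only IDLE and EXPORT_DONE are named in the notes above, and the real table lives in the Orchestrator.

```python
from enum import Enum, auto

class PipelineState(Enum):
    IDLE = auto()
    BRIEF_LOCKED = auto()   # hypothetical intermediate states
    RENDERED = auto()
    EXPORT_DONE = auto()

# Allowed forward transitions (illustrative, not the shipped table)
TRANSITIONS = {
    PipelineState.IDLE: {PipelineState.BRIEF_LOCKED},
    PipelineState.BRIEF_LOCKED: {PipelineState.RENDERED},
    PipelineState.RENDERED: {PipelineState.EXPORT_DONE},
}

def advance(current: PipelineState, target: PipelineState) -> PipelineState:
    """Refuse any transition not in the table, keeping project state consistent."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current.name} -> {target.name}")
    return target
```

Centralizing transitions in one table is what lets the versioned artifact store trust that every project is in a known state.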
Music engine
- Harmony module — key parsing, scale/chord generation, progressions
- MIDI patterns — boom_bap + trap drum patterns with energy variation
- Audio utilities — pad/truncate, peak/RMS, limiter, mix, WAV I/O
- FluidSynth rendering with graceful sine/noise fallback
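The audio utilities listed above can be sketched roughly as follows. Function names and the 0.98 ceiling are assumptions; the shipped code likely operates on NumPy buffers rather than plain lists.

```python
import math

def pad_or_truncate(samples, length):
    """Force a sample buffer to an exact length: zero-pad short, cut long."""
    if len(samples) >= length:
        return samples[:length]
    return samples + [0.0] * (length - len(samples))

def peak(samples):
    """Largest absolute sample value (0.0 for an empty buffer)."""
    return max((abs(s) for s in samples), default=0.0)

def rms(samples):
    """Root-mean-square level of the buffer."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def limit(samples, ceiling=0.98):
    """Hard limiter: scale the whole buffer down if the peak exceeds the ceiling."""
    p = peak(samples)
    if p <= ceiling:
        return samples
    gain = ceiling / p
    return [s * gain for s in samples]
```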
Milestone
- 21 tests passing (schema, timeline, QC, E2E smoke)
- First full E2E run (Feb 10): MIDI → render → QC → mix.wav (15.8 MB)
v0.2
Conversational Interface
ChatExecutor (new)
- Gemini-powered chatbot for brief creation and patching
- Two modes: create from scratch or patch existing brief
- Patch whitelist — only approved fields (BPM, key, mood, style) editable via chat
- Brief sufficiency check — generates only when enough intent is present
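The patch whitelist can be applied with a simple filter like the one below. The field names come from the notes above; the function name and brief shape are illustrative assumptions.

```python
# Only these brief fields may be edited via chat (per the v0.2 notes)
PATCH_WHITELIST = {"bpm", "key", "mood", "style"}

def apply_brief_patch(brief: dict, patch: dict) -> dict:
    """Apply an LLM-proposed patch, silently dropping non-whitelisted fields."""
    updated = dict(brief)
    for field, value in patch.items():
        if field in PATCH_WHITELIST:
            updated[field] = value
    return updated
```

Filtering on the application side, rather than trusting the model's output, means a hallucinated field can never corrupt the brief.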
UI overhaul
- Replaced 4-page Streamlit app with ChatGPT-style project chat
- Sidebar project list, bottom chat input, floating mini audio player
- Post-generation summary: style, mood, BPM, key, instruments with AI reasoning
- 3 clickable example prompt chips for onboarding
Pipeline
- Instrument selection now happens after the brief is locked
- instrument_map.json split out of brief.json as a dedicated artifact
- metadata.json is the single source of truth for all project state
v0.2.1
Instrument Control
Instrument patching (new)
- detect_instrument_request() — distinguishes instrument changes from mood/BPM changes
- apply_instrument_patch() — surgically updates individual stems without touching others
- compute_instrument_diff() — human-readable diff shown to user after patch
- InstrumentPreset schema: bank, program, name, reason for full GM soundfont control
Detection examples
- "I want to try another bass sound" → instrument request ✓
- "switch drums to Standard kit" → instrument request ✓
- "make it darker" → NOT instrument — handled by brief patch
- "slow down to 120 BPM" → NOT instrument — handled by brief patch
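A keyword-based sketch of the routing decision, covering the four examples above. This is an assumption about the mechanism: the shipped detect_instrument_request() may well delegate to Gemini rather than match keywords, and the term list here is invented for illustration.

```python
# Hypothetical vocabulary of stem/instrument terms
INSTRUMENT_TERMS = {"bass", "drum", "drums", "pads", "kit", "instrument", "soundfont"}

def detect_instrument_request(message: str) -> bool:
    """Route a chat message: True -> instrument patch, False -> brief patch."""
    words = set(message.lower().replace(",", " ").split())
    return bool(words & INSTRUMENT_TERMS)
```

Keeping the router a pure function also makes the false-positive tests mentioned below cheap to write: each example message is one assertion.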
Tests
- 15 new tests covering detection, apply, and diff
- False-positive prevention: mood/BPM/generic praise does not trigger instrument detection
v0.3 (Current)
Selective Regeneration
Stem selection system (new)
- parse_stem_selection() — Gemini decides which stems to regenerate vs. keep
- StemMixMap schema — tracks which version of each stem is in the active mix
- Version resolution: null (exclude), NEW (regenerate), int (specific version), KEEP (current)
Selective regeneration examples
- "Only change the drums" → generate: {drums}, bass + pads kept at current versions
- "Redo everything" → generate: {drums, bass, pads}, full regeneration
- Error fallback: defaults to full regeneration if parse fails
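The four-way version-resolution rule above can be sketched as a single dispatch function. The function name, the returned action tuples, and the "new version = current + 1" convention are assumptions for illustration.

```python
def resolve_stem_version(spec, current_version: int):
    """Map a per-stem spec to an action, per the v0.3 resolution rules:
      None   -> exclude the stem from the mix
      "NEW"  -> regenerate the stem (assumed to bump the version)
      "KEEP" -> keep the currently active version
      int    -> pin a specific existing version
    """
    if spec is None:
        return ("exclude", None)
    if spec == "NEW":
        return ("regenerate", current_version + 1)
    if spec == "KEEP":
        return ("keep", current_version)
    if isinstance(spec, int):
        return ("pin", spec)
    raise ValueError(f"unrecognised stem spec: {spec!r}")
```

Raising on anything unrecognised fits the error fallback above: the caller can catch the exception and fall back to full regeneration.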
Growth
- ChatExecutor: 379 lines (v0.2) → 488 (v0.2.1) → 735 (v0.3)
- midi_patterns.py: 257 lines (v0.1) → 336 (v0.3)
- models.py: 273 lines (v0.1) → 353 (v0.3)
lyra_web · March 27, 2026
Public Landing Page
Separate frontend codebase powering lyrapro.ai — not a backend version update.
- Next.js 16 + React 19 + TypeScript + Tailwind CSS 4
- Email waitlist collection via Formspree
- Demo audio embedded for product preview
- Dark-mode design with purple/fuchsia atmospheric gradients
Next Steps
01 Server deployment — Railway/Render, shareable link for real users
02 Fix Google Gemini API version issue (current pipeline blocker)
03 Chatbot output testing with MiniMax / GPT APIs
04 First 10 real users — collect expression feedback
05 Plugin system — EQ, compressor, reverb for audio processing layer
06 Company incorporation — Delaware C-Corp (target: May 2026)
lyrapro.ai · April 2026