Dual pipeline: face crop + wellbeing survey
Run an Emotion Check-in
AffectNet/CK+ models with contrast-aware preprocessing for more robust predictions.
Local processing
Live capture while you answer
Stream 1 frame/sec to track emotions during the survey.
We keep only predictions (never images) and cap capture at roughly 60 frames.
Idle
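The predictions-only policy above can be sketched as a small capped buffer: at about one frame per second, each frame is scored and discarded, and only the emotion scores are retained, with old entries dropping off past the cap. This is an illustrative sketch, not the app's actual code; `PredictionBuffer` and its fields are hypothetical names.

```python
from collections import deque

# Hypothetical sketch of the predictions-only buffer: we store emotion
# scores, never pixels, and cap retention at ~60 entries.
MAX_FRAMES = 60

class PredictionBuffer:
    """Keeps only emotion predictions; frames are discarded once scored."""

    def __init__(self, max_frames=MAX_FRAMES):
        # deque with maxlen drops the oldest entry automatically at the cap
        self._preds = deque(maxlen=max_frames)

    def add(self, label, confidence):
        self._preds.append({"label": label, "confidence": confidence})

    def summary(self):
        """Tally how often each emotion label was predicted."""
        counts = {}
        for p in self._preds:
            counts[p["label"]] = counts.get(p["label"], 0) + 1
        return counts

buf = PredictionBuffer()
for i in range(75):  # more ticks than the cap; only the last 60 survive
    buf.add("happy" if i % 3 else "neutral", 0.9)
print(len(buf._preds))  # 60
```

Because the buffer holds dictionaries of scores rather than image arrays, nothing resembling a frame ever persists past the scoring step.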
Frames: 0
Click Start to enable your camera while you read and respond.
Waiting to start...
Consent: live video is processed locally for emotion scores; no frames are stored.
Live status
Model: Legacy
Detector: -
Confidence gate: 0.3
Saved: No
We apply CLAHE to stabilize lighting, use MTCNN for face detection when available, then run CK+/AffectNet classification heads for richer labels.
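CLAHE refines plain histogram equalization by equalizing per tile with a clip limit, which stabilizes lighting without over-amplifying noise; in practice OpenCV's `cv2.createCLAHE` would be used. A minimal NumPy sketch of the global (non-tiled) version conveys the core idea:

```python
import numpy as np

def equalize_hist(gray):
    """Global histogram equalization on an 8-bit grayscale image.

    CLAHE extends this by working on local tiles and clipping the
    histogram before building the mapping.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero CDF value
    # Map each intensity so the output histogram is approximately flat.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

# A low-contrast ramp confined to [100, 150] spreads to the full [0, 255] range.
img = np.tile(np.linspace(100, 150, 64, dtype=np.uint8), (64, 1))
out = equalize_hist(img)
print(out.min(), out.max())  # 0 255
```

Stretching low-contrast face crops this way is what makes downstream detection and classification less sensitive to poor lighting.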
Live capture summary
Frames: 0
Start live capture to collect emotions while you read and answer.
Chat with the copilot
Remaining replies: 10
How we keep accuracy higher
- AffectNet + CK+ checkpoints preferred, with automatic fallback to the legacy model if missing.
- Contrast Limited Adaptive Histogram Equalization (CLAHE) before resize.
- MTCNN preferred for tighter boxes; Haar cascade as CPU-safe fallback.
- Test-time flip averaging to smooth jittery predictions.
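The last bullet, test-time flip averaging, can be sketched with a stand-in model: the crop and its horizontal mirror are both scored, and their class probabilities are averaged, damping frame-to-frame jitter from slight pose changes. `toy_model` below is an illustrative stand-in, not a real AffectNet/CK+ checkpoint.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict_with_flip_averaging(model, face):
    """Average class probabilities over a face crop and its mirror.

    `model` is any callable mapping an HxW crop to class logits.
    """
    p = softmax(model(face))
    p_flipped = softmax(model(face[:, ::-1]))  # horizontal flip
    return (p + p_flipped) / 2.0

# Toy stand-in for an emotion head: its logits depend on left/right
# asymmetry, so flipping the input changes the raw prediction.
def toy_model(crop):
    half = crop.shape[1] // 2
    left, right = crop[:, :half], crop[:, half:]
    return np.array([left.mean(), right.mean(), crop.mean()])

face = np.arange(16.0).reshape(4, 4)
print(predict_with_flip_averaging(toy_model, face))
```

For a model whose asymmetry-sensitive classes swap under mirroring, the averaged output is identical for a crop and its mirror, which is exactly the smoothing effect the bullet describes.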