Dual pipeline: face crop + wellbeing survey

Run an Emotion Check-in

AffectNet/CK+ models with contrast-aware preprocessing for more robust predictions.

Local processing
Live capture while you answer
Stream 1 frame per second to track emotions during the survey.
We keep only the predictions, never the images, and cap capture at ~60 frames (see the sketch below).
Idle
Frames: 0
Click Start to enable your camera while you read and respond.
Waiting to start...
Consent: live video is processed locally for emotion scores; no frames are stored.
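
A minimal sketch of that capture loop, assuming a hypothetical `predict_emotion(frame)` callable that returns a label/score dict; each frame is discarded as soon as it is scored, so only predictions persist:

```python
import time
import cv2  # pip install opencv-python

MAX_FRAMES = 60           # hard cap described above
CAPTURE_INTERVAL_S = 1.0  # ~1 frame per second

def run_checkin_capture(predict_emotion):
    """Sample the webcam at ~1 fps, keeping predictions but never frames."""
    cam = cv2.VideoCapture(0)
    predictions = []  # e.g. [{"label": "neutral", "score": 0.82}, ...]
    try:
        while len(predictions) < MAX_FRAMES:
            ok, frame = cam.read()
            if not ok:
                break
            predictions.append(predict_emotion(frame))  # frame discarded after scoring
            time.sleep(CAPTURE_INTERVAL_S)
    finally:
        cam.release()
    return predictions
```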
Step 1 / 12
One question at a time
Upload a face photo (JPEG/PNG, good lighting)
Or capture live
Quick wellbeing pulse. Some questions are reverse-scored to catch stress drift; a scoring sketch follows the questions.

I’m comfortable asking for help when I need it.

I still enjoy hobbies or time with others.

I’m satisfied with how I’m spending my time.

I can express my feelings to someone I trust.

I’m drinking enough water and eating regular meals.

I feel motivated to start tasks.

I’m keeping a steady daily routine.

I’ve been replaying negative moments over and over.

I often feel on autopilot or numb.

I’m experiencing frequent headaches or stomachaches.
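
For illustration, a minimal reverse-scoring sketch, assuming a 1–5 Likert scale and hypothetical item keys for the three negatively worded questions above; flipping those items means a higher mean always reads as better wellbeing:

```python
# Hypothetical keys for the negatively worded items above.
REVERSE_ITEMS = {"rumination", "numbness", "somatic_symptoms"}
SCALE_MAX = 5  # assumed 1-5 Likert scale

def score_survey(responses: dict[str, int]) -> float:
    """Mean item score, with reverse-scored items flipped (6 - value on a 1-5 scale)."""
    total = 0
    for item, value in responses.items():
        total += (SCALE_MAX + 1 - value) if item in REVERSE_ITEMS else value
    return total / len(responses)
```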

Submit to stop capture and generate the combined wellbeing + emotion summary.
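
A hypothetical sketch of how that combined summary could be assembled from the survey mean and the retained predictions; the field names are illustrative, not the app's actual schema:

```python
from collections import Counter

def combined_summary(wellbeing_mean: float, predictions: list[dict]) -> dict:
    """Merge the survey result with the most frequent live emotion label."""
    labels = [p["label"] for p in predictions]
    dominant = Counter(labels).most_common(1)[0][0] if labels else None
    return {
        "wellbeing_mean": round(wellbeing_mean, 2),
        "dominant_emotion": dominant,
        "frames_used": len(labels),
    }
```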
Live status
Model: Legacy
Detector: -
Confidence gate: 0.3
Saved: No

We apply CLAHE to stabilize lighting, use MTCNN for face detection when available, then run the CK+/AffectNet heads for richer labels.
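
A minimal sketch of that preprocessing-and-detection path, assuming OpenCV plus the optional `mtcnn` package; the 0.3 gate mirrors the Confidence gate shown in Live status, and the function name and crop size are illustrative:

```python
import cv2
import numpy as np

CONFIDENCE_GATE = 0.3  # matches the Confidence gate in the Live status panel

def preprocess_and_crop(bgr: np.ndarray, size: int = 224):
    """CLAHE on the L channel, then MTCNN if installed, else a Haar cascade."""
    # Contrast Limited Adaptive Histogram Equalization before any resize.
    l, a, b = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB))
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    bgr = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

    try:  # MTCNN gives tighter boxes when the package is available
        from mtcnn import MTCNN
        faces = MTCNN().detect_faces(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
        faces = [f for f in faces if f["confidence"] >= CONFIDENCE_GATE]
        if not faces:
            return None
        x, y, w, h = faces[0]["box"]
    except ImportError:  # CPU-safe Haar fallback
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        boxes = cascade.detectMultiScale(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 1.1, 5)
        if len(boxes) == 0:
            return None
        x, y, w, h = boxes[0]
    x, y = max(x, 0), max(y, 0)  # MTCNN boxes can start slightly off-frame
    return cv2.resize(bgr[y:y + h, x:x + w], (size, size))
```

The crop then feeds whichever CK+/AffectNet head is loaded, falling back to the legacy model when those checkpoints are missing.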

Live capture summary
Frames: 0

Start live capture to collect emotion predictions while you read and answer.

Chat with the copilot
Remaining replies: 10
How we keep accuracy high
  • AffectNet + CK+ checkpoints prioritized, auto-fallback to legacy if missing.
  • Contrast Limited Adaptive Histogram Equalization (CLAHE) before resize.
  • MTCNN preferred for tighter boxes; Haar cascade as CPU-safe fallback.
  • Test-time flip averaging to smooth jittery predictions (sketched below).
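
A minimal sketch of the flip-averaging step, assuming a hypothetical `model` callable that maps a face crop to a vector of class probabilities:

```python
import numpy as np

def predict_with_flip(model, face: np.ndarray) -> np.ndarray:
    """Average class probabilities over the crop and its horizontal mirror."""
    p = model(face)                     # probabilities for the original crop
    p_flipped = model(np.fliplr(face))  # probabilities for the mirrored crop
    return (p + p_flipped) / 2.0        # averaged, less jittery prediction
```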
See model card