Model Training
UC-AIML-001: Retraining on New Data
Purpose: Improve model accuracy using recent tenant data.
Capabilities Breakdown:
1. Automated Retraining (UC 80.1)
- Trigger: "Model Drift" detected (Accuracy < 85%) OR "New Data Volume" > 1000 records.
- Pipeline: Extract Data -> Cleanse -> Train XGBoost -> Evaluate.
- Auto-Promote: If New Model Accuracy > Old Model + 2%, promote to Staging.
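The trigger and auto-promote rules above can be sketched as two predicates. A minimal sketch: the thresholds come from the spec, but the function and constant names are illustrative, not an existing API.

```python
ACCURACY_FLOOR = 0.85      # "Model Drift" threshold from the spec
NEW_DATA_THRESHOLD = 1000  # "New Data Volume" threshold from the spec
PROMOTE_MARGIN = 0.02      # new model must beat the old one by 2 points

def should_retrain(current_accuracy: float, new_record_count: int) -> bool:
    """Trigger retraining on drift OR a large batch of new tenant data."""
    drift = current_accuracy < ACCURACY_FLOOR
    volume = new_record_count > NEW_DATA_THRESHOLD
    return drift or volume

def should_promote(new_accuracy: float, old_accuracy: float) -> bool:
    """Auto-promote to Staging only if the new model clearly wins."""
    return new_accuracy > old_accuracy + PROMOTE_MARGIN
```

With the numbers from the success scenario below, `should_retrain(0.82, 0)` fires on drift alone, and `should_promote(0.88, 0.82)` passes the 2% margin.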
2. Model Registry (UC 80.3)
- Versioning: Track v1.0, v1.1 with metadata (Training Date, Dataset Hash).
- Rollback: One-click revert to v1.0 if v1.1 shows bias in production.
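A minimal in-memory sketch of the registry behavior described above, assuming the 5-version retention limit from the acceptance criteria. The `ModelVersion` and `ModelRegistry` names are hypothetical; a production registry would persist to object storage or a service such as MLflow.

```python
import hashlib
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelVersion:
    version: str       # e.g. "v1.0"
    trained_on: date   # Training Date metadata
    dataset_hash: str  # Dataset Hash metadata, for reproducibility

def dataset_hash(raw: bytes) -> str:
    """Fingerprint the training dataset so a version can be reproduced."""
    return hashlib.sha256(raw).hexdigest()

class ModelRegistry:
    """Keeps the most recent versions so any of them can be rolled back to."""

    def __init__(self, retain: int = 5):
        self.retain = retain
        self.versions: list[ModelVersion] = []

    def register(self, version: ModelVersion) -> None:
        self.versions.append(version)
        self.versions = self.versions[-self.retain:]  # drop oldest beyond limit

    def rollback(self, target: str) -> ModelVersion:
        """One-click revert: return the retained version to redeploy."""
        for v in reversed(self.versions):
            if v.version == target:
                return v
        raise KeyError(f"{target} is no longer retained")
```

Rolling back from a biased v1.1 is then `registry.rollback("v1.0")`, which fails loudly if v1.0 has aged out of the retention window.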
3. Data Annotation (UC 80.4)
- Feedback Loop: Front desk corrections ("This wasn't a Cancel, it was a Reschedule") feed back into the training set.
- Human-in-the-Loop: Flag low-confidence predictions (< 60%) for human review.
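The two rules above amount to a routing decision plus an append to the training set. A sketch under stated assumptions: the 60% cutoff is from the spec, while the function names and the `(features, label)` tuple shape are illustrative.

```python
CONFIDENCE_FLOOR = 0.60  # spec: flag predictions below 60% confidence

def route(confidence: float) -> str:
    """Auto-accept confident predictions; queue the rest for human review."""
    return "auto" if confidence >= CONFIDENCE_FLOOR else "human_review"

def apply_correction(training_set: list[tuple[str, str]],
                     features: str, corrected_label: str) -> None:
    """Feedback loop: a front-desk correction becomes a new labeled example."""
    training_set.append((features, corrected_label))
```

A prediction at 55% confidence is routed to `"human_review"`; once the front desk relabels it (e.g. Cancel -> Reschedule), `apply_correction` adds it to the next retraining batch.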
Main Success Scenario:
- ML Pipeline detects "Churn Prediction" accuracy dropped to 82%.
- Pipeline triggers retraining on the last 90 days of data.
- New model achieves 88% accuracy.
- System deploys the new model to the staging endpoint.
- Engineer approves promotion to production.
Acceptance Criteria:
- [ ] Retraining triggers automatically upon drift detection.
- [ ] Previous 5 model versions are retained for rollback.
- [ ] Training data is anonymized before processing.
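The anonymization criterion above could be checked with a salted-hash pass over known PII fields before any record reaches the pipeline. A minimal sketch, assuming a fixed PII field list and a per-tenant salt (both assumptions, not from the spec):

```python
import hashlib

PII_FIELDS = {"name", "email", "phone"}  # assumed PII columns

def anonymize(record: dict, salt: str = "per-tenant-salt") -> dict:
    """Replace PII values with salted SHA-256 tokens before training.

    The same input always maps to the same token, so grouping and joining
    still work, but the original value is not recoverable without the salt.
    """
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:12]
        else:
            out[key] = value
    return out
```

Deterministic hashing (rather than dropping the columns) keeps per-customer history intact, which churn features typically need.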
Related Use Cases:
- AI Governance: Budget limits for training compute.
- Ops Monitoring: Tracking accuracy in prod.