Description
To make the ml-event-simulator more versatile for training machine learning models in educational contexts, I propose adding a feature to simulate learning events with adaptive difficulty levels. This would allow the simulator to generate events that reflect how learners progress through content of varying complexity (e.g., adjusting question difficulty based on user performance).
Proposed Feature:
- Introduce an `AdaptiveLearningEvent` class or function to simulate events where difficulty scales dynamically (e.g., based on simulated user accuracy or response time).
- Include configurable parameters, such as:
  - Initial difficulty level (e.g., easy, medium, hard).
  - Rules for difficulty adjustment (e.g., increase after N correct answers, decrease after M incorrect).
  - Metrics like accuracy, response time, or completion rate to influence difficulty.
- Output event data in a format compatible with existing pipelines (e.g., JSON with fields for event type, difficulty, timestamp, and user performance).
- Add unit tests to validate the adaptive logic, and documentation with usage examples.
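As a rough sketch of what this could look like (all names here, such as `AdaptiveDifficulty`, `promote_after`, and `demote_after`, are hypothetical and not part of the current ml-event-simulator API), a rule-based adjuster with the configurable parameters above might be:

```python
# Hypothetical sketch, not existing ml-event-simulator code: a small
# rule-based difficulty controller with configurable thresholds.
from dataclasses import dataclass

DIFFICULTY_LEVELS = ["easy", "medium", "hard"]

@dataclass
class AdaptiveDifficulty:
    """Promote after N consecutive correct answers, demote after M incorrect."""
    level: str = "easy"
    promote_after: int = 3  # N consecutive correct answers
    demote_after: int = 2   # M consecutive incorrect answers
    _correct_streak: int = 0
    _incorrect_streak: int = 0

    def record(self, correct: bool) -> str:
        """Record one simulated answer and return the (possibly updated) level."""
        if correct:
            self._correct_streak += 1
            self._incorrect_streak = 0
            if self._correct_streak >= self.promote_after:
                self._step(+1)
        else:
            self._incorrect_streak += 1
            self._correct_streak = 0
            if self._incorrect_streak >= self.demote_after:
                self._step(-1)
        return self.level

    def _step(self, delta: int) -> None:
        # Clamp to the valid range and reset both streaks after a change.
        i = DIFFICULTY_LEVELS.index(self.level)
        self.level = DIFFICULTY_LEVELS[max(0, min(len(DIFFICULTY_LEVELS) - 1, i + delta))]
        self._correct_streak = 0
        self._incorrect_streak = 0
```

The thresholds and level names are just defaults; they would be exposed as constructor arguments so datasets can be generated under different adaptation policies.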
Use Case:
This feature would enable the simulator to generate realistic datasets for adaptive learning systems, such as those used in personalized education platforms. It could also support analytics for studying how difficulty impacts engagement or learning outcomes.
Implementation Ideas:
- Extend existing event models to include a difficulty attribute.
- Create a logic module to adjust difficulty based on simulated user performance (e.g., using a simple rule-based system or probabilistic model).
- Integrate with existing visualization tools to plot difficulty progression over time.
Example output:

```json
{
  "event_type": "learning",
  "user_id": 123,
  "difficulty": "medium",
  "correct": true,
  "response_time_ms": 1500,
  "timestamp": "2025-10-07T15:34:00Z"
}
```
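To show how the pieces could fit together, here is a hypothetical end-to-end generator that simulates a user whose accuracy depends on the current level and emits events in the schema above. The function name, the per-level accuracy probabilities, and the adjustment rule are all illustrative assumptions, not existing project code:

```python
# Hypothetical sketch: generate adaptive learning events as JSON-ready dicts.
# All names and probabilities below are assumptions for illustration.
import json
import random
from datetime import datetime, timedelta, timezone

LEVELS = ["easy", "medium", "hard"]
P_CORRECT = {"easy": 0.9, "medium": 0.7, "hard": 0.5}  # assumed per-level accuracy

def simulate_events(user_id: int, n: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    level_idx, streak = 0, 0
    t = datetime(2025, 10, 7, 15, 34, tzinfo=timezone.utc)
    events = []
    for _ in range(n):
        level = LEVELS[level_idx]
        correct = rng.random() < P_CORRECT[level]
        # Simple rule: 3 correct in a row -> harder, any miss -> easier.
        streak = streak + 1 if correct else 0
        if correct and streak >= 3:
            level_idx, streak = min(level_idx + 1, len(LEVELS) - 1), 0
        elif not correct:
            level_idx = max(level_idx - 1, 0)
        events.append({
            "event_type": "learning",
            "user_id": user_id,
            "difficulty": level,
            "correct": correct,
            "response_time_ms": rng.randint(500, 5000),
            "timestamp": t.isoformat().replace("+00:00", "Z"),
        })
        t += timedelta(seconds=30)
    return events

print(json.dumps(simulate_events(user_id=123, n=5)[0], indent=2))
```

Because the output is a flat dict per event, it should serialize directly into the JSON pipeline format described above, and the difficulty field can be plotted over the timestamp sequence for the visualization idea.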