Part 5 - Strategic Selection: A Deep Dive into Dwij's Multi-Objective Optimizer

CodeClowns Editorial Team · July 10, 2025 · 11 min read

A system design deep-dive into Dwij's Multi-Objective Optimizer (MOO), revealing how it moves beyond simple ranking to select a strategically balanced, diverse, and personalized daily test plan for every student.

Imagine a portfolio manager advising a client. They don't simply recommend the single stock with the highest projected return. Instead, they construct a balanced portfolio—a carefully selected mix of assets that balances aggressive growth with stable performers and manages risk. This act of selection and diversification is far more sophisticated than simple ranking. This is a crucial step where raw data analysis is transformed into an actionable, intelligent strategy.

This is the role of our **Multi-Objective Optimizer (MOO)**. In our previous posts, we detailed how we model the user, generate strategic candidates, and score each one across seven dimensions. But a list of scores is not a plan. The MOO is the "portfolio manager" of our system. It takes the rich, multi-dimensional score vectors for 20-30 viable tests and answers the ultimate question: "Which specific mix of 3-5 tests should this student see on their dashboard *today*?" This is where scoring gives way to selection, and recommendation becomes a truly strategic exercise.

The Post-Scoring Dilemma: Why the Highest Score Isn't Always the Best Choice

After our Scoring Engine runs, we have a list of well-evaluated test candidates. A naive approach would be to simply rank them by a composite score and show the top 3-5. However, this often leads to suboptimal, monotonous, and strategically unsound study plans.

The Monotony Trap and Strategic Imbalance

A simple ranking could easily result in a dashboard showing three difficult tests all on the same weak topic. While each test individually has a high "Weakness Score," presenting them together creates a frustrating and intimidating experience. Conversely, a user might be shown only easy, confidence-boosting quizzes, neglecting the challenging work required for real growth. We don't just want the highest-scoring tests; we want the *right combination* of tests that creates a balanced and productive learning experience.

[The MOO makes decisions based on the output from our previous layer. Read about it here: "The Scoring Engine: How Dwij's AI Brain Chooses the Perfect Test"]

The Three Core Functions of the Optimizer

The MOO is not a single algorithm but a sequential pipeline that applies three distinct functions to transform the scored candidate list into a final, curated daily plan.

Function 1: Weighted Score Aggregation

The first step is to calculate a single `compositeScore` for each test, but this calculation is dynamic and context-aware. It's a weighted sum of the seven score layers, where the `weights` themselves are loaded based on the student’s current `goal`. The core formula is:

compositeScore = Σ (score[i] × weight[i])

For example, a student whose goal is **"Mastery"** will have a high weight applied to their `weakness` and `retention` scores. A student whose goal is **"Simulation Ready"** (e.g., in the week before their exam) will have the `urgency` and `confidence` scores weighted most heavily. This ensures that the ranking of tests is always aligned with the student's overarching strategy.
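As a minimal sketch, the aggregation step might look like the following. The seven layer names and all weight values here are illustrative assumptions, not Dwij's actual configuration:

```python
# Sketch of Function 1: goal-aware weighted score aggregation.
# Layer names ("freshness", etc.) and weights are hypothetical examples.

WEIGHT_PROFILES = {
    "mastery": {"weakness": 0.30, "retention": 0.25, "coverage": 0.15,
                "urgency": 0.05, "confidence": 0.10, "freshness": 0.10,
                "fatiguePenalty": 0.05},
    "simulation_ready": {"weakness": 0.10, "retention": 0.10, "coverage": 0.10,
                         "urgency": 0.30, "confidence": 0.25, "freshness": 0.10,
                         "fatiguePenalty": 0.05},
}

def composite_score(scores: dict, goal: str) -> float:
    """compositeScore = sum(score[i] * weight[i]), with weights chosen by goal."""
    weights = WEIGHT_PROFILES[goal]
    return sum(scores.get(name, 0.0) * w for name, w in weights.items())

candidate = {"weakness": 0.9, "retention": 0.8, "coverage": 0.4,
             "urgency": 0.2, "confidence": 0.3, "freshness": 0.5,
             "fatiguePenalty": 0.0}
mastery_score = composite_score(candidate, "mastery")
```

Because only the weight vector changes between goals, the same candidate can rank very differently under "Mastery" versus "Simulation Ready" without any change to the scoring code itself.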

Function 2: Constraint-Based Post-Filtering

After calculating a composite score and ranking the tests, the MOO applies a set of hard, business-logic constraints. This is a critical safety net that ensures the final selection is balanced and sane, regardless of the raw scores. These are not suggestions; they are rules. Examples include:

  • Topic Diversity Rule: Reject any selection that would result in showing more than two tests from the same macro-subject (e.g., Chemistry).
  • Fatigue Suppression Rule: Immediately reject any test with a `fatiguePenalty` score below a hard-coded threshold (e.g., -0.5), even if its composite score is high.
  • Avoidance Rule: Temporarily suppress tests covering topics the user has skipped more than twice in the last week to avoid frustration.
  • Confidence Guarantee Rule: Ensure that the final selection of 3-5 tests includes at least one test with a high `confidence` score, guaranteeing a chance for a positive experience.
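The first three rules above can be sketched as a single filtering pass. Field names (`subject`, `topic`, `fatiguePenalty`) and the thresholds are illustrative assumptions; the confidence-guarantee rule would run as a final check on the surviving set and is omitted here for brevity:

```python
# Sketch of Function 2: hard, business-logic post-filters applied in rank order.
from collections import Counter

FATIGUE_FLOOR = -0.5   # fatigue suppression threshold (illustrative)
MAX_PER_SUBJECT = 2    # topic diversity rule
SKIP_LIMIT = 2         # avoidance rule: suppress if skipped more than twice

def apply_constraints(ranked, skips_last_week):
    """Walk the ranked list and keep only tests that pass every hard rule."""
    selected, per_subject = [], Counter()
    for test in ranked:
        if test["fatiguePenalty"] < FATIGUE_FLOOR:
            continue                                    # fatigue suppression
        if skips_last_week.get(test["topic"], 0) > SKIP_LIMIT:
            continue                                    # avoidance
        if per_subject[test["subject"]] >= MAX_PER_SUBJECT:
            continue                                    # topic diversity
        selected.append(test)
        per_subject[test["subject"]] += 1
    return selected

ranked = [
    {"id": "T1", "subject": "Chemistry", "topic": "Organic",  "fatiguePenalty": 0.0},
    {"id": "T2", "subject": "Chemistry", "topic": "Kinetics", "fatiguePenalty": -0.7},
    {"id": "T3", "subject": "Chemistry", "topic": "Bonding",  "fatiguePenalty": 0.0},
    {"id": "T4", "subject": "Chemistry", "topic": "Acids",    "fatiguePenalty": 0.0},
    {"id": "T5", "subject": "History",   "topic": "Mughals",  "fatiguePenalty": 0.0},
]
# T2 falls to fatigue, T4 to subject diversity, T5 to avoidance.
final = apply_constraints(ranked, {"Mughals": 3})
```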

Function 3: The Diversity Promotion Engine

The final function's job is to ensure the selected tests are varied and engaging. After filtering, the engine looks at the top-ranked candidates and actively promotes diversity across several axes. If the top three tests are all "revision" quizzes, it might swap one out for a similarly-scored "retry" test from a past mock. It ensures a healthy mix of test types (mock, revision, retry), difficulty levels (easy, medium, hard), and subjects. This prevents the student's dashboard from feeling monotonous and keeps the learning process dynamic.
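One way to picture the swap described above, as a toy sketch: if every test in the provisional slate shares one `type`, trade the lowest-ranked slot for the next candidate of a different type. The `type` field and the "next candidate is close enough in score" simplification are assumptions for illustration:

```python
# Sketch of Function 3: promote test-type diversity in the final slate.
def promote_diversity(ranked, slate_size=3):
    """Fill the slate from the top, swapping in a different test type if uniform."""
    slate = ranked[:slate_size]
    pool = ranked[slate_size:]
    types_in_slate = {t["type"] for t in slate}
    if len(types_in_slate) == 1:            # e.g. all "revision" quizzes
        for cand in pool:
            if cand["type"] not in types_in_slate:
                slate[-1] = cand            # swap the lowest-ranked slot
                break
    return slate

ranked = [
    {"id": "R1", "type": "revision", "score": 0.82},
    {"id": "R2", "type": "revision", "score": 0.80},
    {"id": "R3", "type": "revision", "score": 0.79},
    {"id": "M1", "type": "retry",    "score": 0.78},
]
slate = promote_diversity(ranked)   # R3 is swapped for the "retry" quiz M1
```

A production version would apply the same idea across difficulty and subject axes, and only swap when the score gap stays within a tolerance.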

Strategy Profiles: Configuring the MOO for Different Goals

The power of the MOO lies in its configurability. We don't have one static optimization strategy; we have several, each defined in a hot-swappable JSON config file. This allows us to tailor the entire decision-making logic based on the student’s stated goal, primarily by adjusting the weight vector used in score aggregation.

| Goal Type | Optimization Strategy (Weight Prioritization) |
| --- | --- |
| Syllabus Coverage | Maximize `Coverage` score, deprioritize long mocks, low `Urgency` weight. |
| Mastery | Highest weights on `Weakness` and `Retention` scores. |
| Simulation Ready | Highest weights on `Urgency` score and full-length mock `type`. |
| Score Boost | Balance `Weakness` score with a high weight on `Confidence` prediction. |
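A hot-swappable profile of this kind could be as simple as a JSON blob loaded at request time. The schema below is a hypothetical illustration, not Dwij's real config format:

```python
import json

# Hypothetical "Simulation Ready" strategy profile; keys and values are
# illustrative assumptions, not the actual production schema.
PROFILE_JSON = """
{
  "goal": "simulation_ready",
  "weights": {
    "urgency": 0.30, "confidence": 0.25, "weakness": 0.10,
    "retention": 0.10, "coverage": 0.10, "freshness": 0.10,
    "fatiguePenalty": 0.05
  },
  "boostTypes": ["mock"]
}
"""

profile = json.loads(PROFILE_JSON)
weights = profile["weights"]
# Swapping strategies means loading a different JSON blob; no code changes.
```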

MOO in Action: Finalizing "Riya’s" Daily Plan

Let’s revisit our student Riya. The Scoring Engine has just provided a ranked list of 25 candidate tests. Now, the MOO steps in to make the final cut.

  • Input: 25 scored test candidates. Riya's goal is `Mastery`.
  • Step 1 (Aggregation): The MOO loads the 'Mastery' weight profile. It applies high weights to the `weakness` and `retention` scores of all 25 tests and recalculates their `compositeScore`. A difficult Chemistry test now ranks higher than an easy History test, even if their initial scores were similar.
  • Step 2 (Filtering): The MOO scans the newly ranked list. It finds three Chemistry tests in the top five and applies the diversity rule, filtering two of them out. It also sees a full mock with a high `fatiguePenalty` and suppresses it.
  • Step 3 (Diversity): The engine reviews the remaining top candidates. It notices they are all "revision" quizzes. It looks further down the ranked list and finds a "retry" quiz from a past failed test with a similar composite score, and swaps it in to ensure test type diversity.

Final Selection: The output is a smart, balanced stack of 5 tests: a confidence-booster in History, a low-pressure quiz for her sensitive Chemistry topic, a retry quiz to fix past mistakes, a short revision test for a topic with high retention decay, and one manageable medium-difficulty test. This is a plan designed to move her forward, not just keep her busy.



What’s Next: Closing the Loop

A truly intelligent system doesn't just make decisions; it learns from them. Now that we’ve walked through the entire recommendation pipeline—from context to candidates to scoring to selection—the final piece of the puzzle is adaptation. How does the system learn from every click, skip, and test result to make its future decisions even smarter? In our next article, we will explore the **Feedback Adaptation Loop**, the mechanism that allows our AI to evolve and improve over time.

[In the next blog, we cover the Feedback Adaptation Loop: how Dwij learns from every click, skip, and test result.]

Preparing for CAT, SSC, CUET or IELTS? Dwij gives you mock tests, AI-generated questions, and personalized planners — everything you need to practice smarter and get exam-ready.

#engineering blog · #dwij · #ai · #system design · #recommendation systems · #optimization · #multi-objective-optimization · #edtech