At a Glance
When everyone in a ranked selection system can follow the same actionable improvement steps, the bar for success itself rises, turning individual gains into a collective arms race that makes upward mobility harder and can lock in existing inequalities.
What They Found
Providing actionable directions to rejected applicants causes many people to change the same features at once, which shifts the cutoff used to pick the top fraction of candidates. Because the selection rule depends on the current population, improvements become relative: what helps you today may be insufficient tomorrow. Over repeated rounds the system moves toward an equilibrium where further feasible improvements are too costly or ineffective, and early winners shape both the direction and difficulty of future competition.
Data Highlights
1. Selection chooses exactly ρ·n candidates; the acceptance cutoff equals the (ρ·n)-th largest score in the current population (see the sketch after this list).
2. The numerical example uses an actionable feature capped at 340 (modeled after the GRE maximum), creating sharply rising marginal cost as candidates approach that bound.
3. Under the dynamic update rule, the system converges to an equilibrium where additional feasible effort no longer yields profitable gains for candidates.
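As a concrete illustration of item 1, here is a minimal Python sketch (not from the paper) of a population-dependent acceptance cutoff: the threshold is simply the (ρ·n)-th largest score, so any change in the population moves the threshold with it. The function name and the example scores are hypothetical.

```python
import numpy as np

def acceptance_cutoff(scores: np.ndarray, rho: float) -> float:
    """Threshold of a rule that accepts exactly the top rho fraction:
    the (rho*n)-th largest score in the current population."""
    k = int(np.ceil(rho * len(scores)))   # number of accepted candidates
    return np.sort(scores)[::-1][k - 1]   # k-th largest score

scores = np.array([310.0, 295.0, 330.0, 305.0, 320.0])
theta = acceptance_cutoff(scores, rho=0.4)   # top 40% of 5 candidates -> 2 accepted
print(theta)  # 320.0: only the 330 and 320 candidates clear the bar
```

Because the threshold is an order statistic of the whole population, any coordinated improvement by rejected candidates feeds straight back into the bar they are trying to clear.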
What This Means
Engineers and product leaders building decision systems for admissions, hiring, or grants should care because providing actionable recourse can unintentionally create an arms race that raises costs for everyone. Policymakers and fairness teams should evaluate whether recourse guidance amplifies initial advantages and creates persistent gaps rather than improving access.
Key Figures

Fig 1: Evolution of the classifier and population over time.

Fig 2: Aggregate evolution of GRE variance, mean GRE, and actionable signal norm.
Yes, But...
The analysis assumes linear scoring rules and that actionable features lie in a known, bounded subspace; real systems with nonlinear models or hidden actions may behave differently. The model abstracts away heterogeneity in access to effort and long-term investments (for example, different costs or time horizons across candidates). Empirical outcomes depend on the choice of selection fraction, the shape of effort costs, and which features are truly actionable versus immutable.
Methodology & More
A ranked selection mechanism that picks a fixed top fraction of applicants can be represented as optimizing the average score within that top tail. When rejected candidates receive explicit, feasible directions for improvement (actionable recourse), many of them make similar feature changes at once. Because the cutoff is computed from the current population, these simultaneous updates push the acceptance threshold higher; improvements are therefore relative rather than absolute. The work models this interaction as a discrete-time dynamical system: at each round the designer recomputes the linear scoring direction that maximizes the top-tail average, rejects those below the cutoff, and communicates the minimal actionable change aligned with the scoring direction. Candidates respond optimally under increasing marginal cost (the capped example uses a 340 upper bound). Over repeated rounds the system tends toward an equilibrium where further feasible improvements become unprofitable, and early accepted candidates effectively set the standard and direction of future competition.

Practical implication: giving everyone the same practical advice can raise the bar, amplify initial disparities, and produce persistent stratification unless designers account for these systemic feedbacks.
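To see how these feedbacks can play out, the Python sketch below simulates a simplified version of the round-by-round loop described above. It makes several assumptions not taken from the paper: the scoring direction is held fixed instead of being re-optimized each round, rejected candidates aim at last round's cutoff with a small execution error, and effort is exerted only when its convex cost (which blows up near the 340 cap) stays below an assumed acceptance value V. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n, rho, ROUNDS = 2000, 0.2, 40
CAP = 340.0                    # upper bound on the actionable feature (GRE-style cap)
V = 1.0                        # assumed utility of being accepted

# One actionable feature (GRE-like) and one immutable feature per candidate.
gre = rng.normal(300.0, 15.0, n).clip(200.0, CAP)
imm = rng.normal(0.0, 5.0, n)
X = np.column_stack([gre, imm])
w = np.array([1.0, 1.0])       # fixed linear scoring weights (simplification: the
                               # paper re-optimizes this direction every round)

def cutoff(scores, rho):
    """Acceptance threshold: the (rho*n)-th largest score in the current population."""
    k = int(np.ceil(rho * len(scores)))
    return np.sort(scores)[::-1][k - 1]

def cost(x, delta):
    """Convex effort cost; marginal cost explodes as x approaches the cap."""
    return delta / max(CAP - x, 1e-9)

for t in range(ROUNDS):
    scores = X @ w
    theta = cutoff(scores, rho)
    movers = 0
    for i in np.flatnonzero(scores < theta):
        need = (theta - scores[i]) / w[0]           # minimal recourse along the actionable axis
        step = min(need, CAP - X[i, 0])
        if step > 0 and cost(X[i, 0], step) < V:    # improve only if the effort is worth the prize
            X[i, 0] += step + rng.normal(0.0, 0.5)  # small execution noise around the target
            X[i, 0] = min(X[i, 0], CAP)
            movers += 1
    print(f"round {t:2d}  cutoff = {theta:7.2f}  movers = {movers}")
    if movers == 0:   # no profitable improvement left: an equilibrium has been reached
        break
```

Because every rejected candidate aims at the same mark, the printed cutoff should creep upward round after round even though each individual only tries to clear last round's bar; movement stops once the remaining candidates are pinned against the cap or find further effort unprofitable, matching the qualitative story of relative improvement and eventual stratification.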
Credibility Assessment:
The authors have a low h-index (5 for one author), with no affiliations or top venue listed; the work is an arXiv preprint with no citations, so credibility signals are limited.