How AI Is Making The Motherhood Penalty Worse
- AI performance systems don’t remove bias. They learn from past promotion patterns and turn those patterns into rules for future decisions.
- Because those patterns reward constant availability, uninterrupted work and high responsiveness, they systematically disadvantage working mothers whose careers don’t always follow that path.
- The result is that the motherhood penalty is no longer just cultural or individual. It becomes embedded in the systems companies use to decide who gets ahead.
Nearly a decade ago, researchers at Cornell University confirmed the existence of a motherhood penalty. Their study found that mothers are rated as less competent, less committed, and less promotable than equally qualified non-mothers, while fathers experience no such penalty and may even benefit from being a working parent.
The motherhood penalty is driven by a pattern of assumptions about availability, reliability and long-term commitment. These assumptions shape how performance is interpreted and how potential is assessed. When companies use AI to evaluate how well employees are doing their jobs and to inform decisions about promotions, raises and development, they risk baking those same assumptions into how decisions get made.
It is also a form of surveillance capitalism. Companies capture how employees work, including their availability, responsiveness and career patterns, and turn that behavior into data used to predict performance. Research on large language models shows how easily this happens. When identical résumés were evaluated, gender did not affect rankings. But when a parental leave gap was introduced, candidates were consistently scored lower across roles.
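The methodology behind findings like these is straightforward to sketch. The harness below runs a paired audit: two résumés identical except for a parental leave gap, scored by the same model. The `score_resume` stub stands in for whatever system is under test (a call to an LLM would go there); its toy logic and the résumé text are invented for illustration, not taken from the research.

```python
# A minimal paired-audit harness in the spirit of the study described
# above. score_resume is a stand-in for the model being audited; this
# toy version penalizes gaps simply to show what the audit detects.
def score_resume(text: str) -> float:
    # Stub: replace with a call to the model under test.
    return 80.0 - (15.0 if "career break" in text.lower() else 0.0)

base = "Senior analyst, 8 years experience, led pricing team."
with_gap = base + " Career break for parental leave, 2021-2022."

# Identical qualifications; the only difference is the leave gap.
delta = score_resume(base) - score_resume(with_gap)
print(f"score penalty attributable to the gap: {delta:.1f} points")
```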
AI systems are not trained on gender or parental status directly. Instead, they rely on variables that correlate with those characteristics, such as past promotion decisions, performance reviews and behavioral patterns. If employees who advanced quickly shared traits like continuous employment, high responsiveness and consistent output, those traits become encoded as signals of success.
This is where the issue compounds. Women are promoted at lower rates than men, and the "broken rung" at the first step up to manager continues to limit advancement. When those outcomes become training data, they shape how AI defines potential.
Consider how this plays out in practice. A company might train an AI system using data from employees promoted over the past five years. If those employees share patterns such as uninterrupted tenure, fast response times and consistent activity across the workday, the system will begin to flag those traits as indicators of potential.
Caregiving responsibilities, particularly among mothers, are associated with a higher likelihood of career interruptions, reduced schedule flexibility and less consistent availability. An employee who has taken parental leave or works more fixed hours may be rated lower, not because of performance, but because they do not match the pattern the system has learned to reward.
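A rough simulation shows how that pattern gets learned. The sketch below trains a toy model on synthetic promotion history in which availability, not output quality, drove past decisions. The feature names and numbers are invented and do not describe any vendor's product.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic five-year history. A quarter of employees have a leave
# gap; true output quality is independent of that gap.
leave_gap = rng.choice([0, 1], size=n, p=[0.75, 0.25])
response_hours = rng.normal(4, 1, size=n) + leave_gap  # slower on average
output_quality = rng.normal(0, 1, size=n)

# Past promotion decisions rewarded availability more than quality.
promoted = (0.3 * output_quality - 1.0 * leave_gap
            - 0.4 * (response_hours - 4)
            + rng.normal(0, 0.5, size=n)) > 0

X = np.column_stack([leave_gap, response_hours, output_quality])
model = LogisticRegression().fit(X, promoted)

# Two employees with identical output quality; only the gap differs.
pair = [[0, 4.0, 1.0], [1, 4.0, 1.0]]
print(model.predict_proba(pair)[:, 1])  # the second scores lower
```

The model never sees gender or parental status, yet the candidate with the leave gap receives a lower "potential" score for identical output.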
Human bias is inconsistent. It can vary by manager, team or department. AI systems, however, apply the same logic across an entire organization. Historical patterns inform model outputs, and those outputs shape future outcomes.
A central issue is that AI evaluates patterns, not causes. If a behavior frequently appears among employees who are promoted, the system treats it as a signal of performance, even when it may not be directly related to job capability.
The “Always On” Performance Model
High responsiveness may look like engagement. But it can also reflect fewer competing demands outside of work. AI systems cannot distinguish between the two. Over time, these patterns become the standard for what “good performance” looks like. Employees who are consistently available are more likely to be rewarded, even if availability is not what makes them effective.
Regulators are beginning to address these risks. The U.S. Equal Employment Opportunity Commission has examined how automated systems may contribute to employment discrimination. One commonly used benchmark is the four-fifths rule, which flags potential adverse impact when the selection rate for one group falls below 80% of the rate for the highest-selected group.
However, experts note that no single measure is sufficient. They recommend using multiple methods to assess fairness, since different approaches can reveal different types of bias.
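Both points are easy to see with a few lines of arithmetic. The sketch below applies the four-fifths rule to hypothetical selection counts, then shows a case where the ratio passes even though the gap in selection rates is large. All numbers are invented.

```python
def selection_rate(selected: int, total: int) -> float:
    return selected / total

def impact_ratio(rate_a: float, rate_b: float) -> float:
    # Four-fifths rule: flag when the disadvantaged group's rate is
    # below 80% of the advantaged group's rate.
    return rate_a / rate_b

# Hypothetical "high potential" flags: 30 of 80 mothers vs. 60 of 100 others.
mothers, others = selection_rate(30, 80), selection_rate(60, 100)
print(f"impact ratio: {impact_ratio(mothers, others):.2f}")  # 0.62, fails 0.80

# Why one metric is not enough: at 90% vs. 75%, the ratio passes (0.83)
# even though one group trails by 15 percentage points.
print(f"ratio: {0.75 / 0.90:.2f}, rate difference: {0.90 - 0.75:.2f}")
```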
At the same time, transparency remains limited. Many employees are unaware when AI systems are used in performance evaluations, what data is being collected or how that data influences decisions. In some cases, even managers and HR teams may not fully understand how these systems operate.
Data-driven decision-making is often positioned as more objective. But for working mothers, it is not neutral. AI does not create the motherhood penalty, but it can scale it. What was once an inconsistent pattern of bias can become a consistent system of decision-making.
What Companies Risk Getting Wrong
Companies using AI-based evaluation tools must examine which signals are being prioritized, what those signals represent and who may be advantaged or excluded as a result. Without that scrutiny, AI shifts the motherhood penalty from something that happens informally to something embedded in how decisions are made.
For example, companies using tools like Microsoft’s Workplace Analytics can track how employees work throughout the day, including response times, meeting load and after-hours activity. These signals can provide insight into productivity and engagement. But they can also favor employees who are able to stay consistently connected.
An employee who logs off at a set time, responds more slowly or has gaps in activity may appear less engaged in the data, even if their output is comparable. The system is not intentionally excluding anyone, but it is prioritizing a pattern that is easier for some employees to meet than others.
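To illustrate the mechanics, here is a hypothetical engagement score built from signals like those described above. The weights, field names and scoring logic are invented for this sketch; they are not Microsoft's actual methodology, which is not public in this form.

```python
from dataclasses import dataclass

@dataclass
class ActivityProfile:
    avg_response_hours: float   # lower looks "more engaged"
    after_hours_messages: int   # higher looks "more engaged"
    active_hours_per_day: float

def engagement_score(p: ActivityProfile) -> float:
    # Weights are arbitrary; the point is what the signals reward.
    return (-2.0 * p.avg_response_hours
            + 0.5 * p.after_hours_messages
            + 1.0 * p.active_hours_per_day)

# Two employees with identical output, different schedules.
always_on = ActivityProfile(1.0, 10, 11.0)
fixed_hours = ActivityProfile(3.0, 0, 8.0)

print(engagement_score(always_on))    # 14.0
print(engagement_score(fixed_hours))  # 2.0
```

Two employees with identical output land far apart, purely because one can stay connected longer.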