
From Good to Great: Navigating AI’s Precision While Tackling Hidden Bias



Artificial Intelligence (AI) is rapidly transforming the way we approach tasks across work and personal life, especially with the rise of generative AI (GenAI). While AI’s precision and speed have revolutionized how we complete tasks, it’s essential to recognize that AI is not perfect. When used correctly, AI can get us about 80% of the way there—leaving the last 20% up to human effort. However, the risks of bias within AI systems pose a unique challenge, especially when precision can create an illusion of complete accuracy. Understanding how to leverage AI’s power while mitigating its shortcomings is crucial to maximizing its potential.


About the author:  Jeff Hulett leads Personal Finance Reimagined, a decision-making and financial education platform. He teaches personal finance at James Madison University and provides personal finance seminars. Check out his book -- Making Choices, Making Money: Your Guide to Making Confident Financial Decisions.


Jeff is a career banker, data scientist, behavioral economist, and choice architect. Jeff has held banking and consulting leadership roles at Wells Fargo, Citibank, KPMG, and IBM.


AI Precision and the 80/20 Rule


Generative AI allows individuals and businesses to perform tasks with remarkable precision and in much less time than ever before. Whether drafting reports, analyzing financial data, or writing code, AI accelerates the process of getting to an outcome that might otherwise take hours, or even days, to accomplish. However, while AI can deliver results quickly, it's often not perfect. This is where the 80/20 rule—known as the Pareto Principle—comes into play.


The Pareto Principle suggests that 80% of outcomes come from 20% of the effort. Applying this to AI, we can think of AI as delivering 80% of the precision we need. It quickly moves us from "bad" (0% progress) to "good" (80% progress), leaving it up to us to decide when “good” is “good enough” and whether pushing for "great" (100%) is necessary or whether 80% suffices. As Our Trade-off Life: How the 80/20 Rule Leads to a Healthier, Wealthier Life explains, “[T]here are many business alternatives that are necessary but less strategic. These are important to stop at ‘good’ and divert resources to other priority strategic projects." AI can get us to that "good" threshold faster, but the final push to perfect the outcome still requires human intervention.


In practice, this means AI can rapidly provide high-quality drafts of legal documents, medical diagnoses, or marketing strategies, but final editing, ethical considerations, and tailoring for context are areas where human expertise comes into play. The question then becomes, how much further do we need to push toward perfection?


Precision vs. Bias: AI’s Potential to Amplify Inaccuracies


While AI excels in precision, bias is a different challenge altogether. The siren's song of precision and speed can lure us into believing that AI’s results are entirely accurate, but biases—often embedded within the data that these models are trained on—can distort those outcomes. This risk is particularly critical in high-stakes fields like healthcare, lending, and hiring, where biased decisions can lead to unfair consequences, even when the AI provides precise results. Essentially, being highly precise but relatively inaccurate is akin to being exceptionally skilled at executing someone else's strategy.


As explored in Unequal Roots, Unequal Outcomes: How Modern AI Grows Past Bias, "Data is a mirror that reflects both the successes and the failings of the past." If AI systems rely on data sets that carry historical biases, the algorithms may inadvertently reinforce these inequalities. For instance, AI used in credit scoring might perpetuate racial disparities by relying on biased data, leading to unfair lending decisions. This happens despite AI’s precision in calculating credit scores based on available historical data. Although AI can make precise decisions from the data it has access to, valid data that is unavailable to the scoring algorithms can reduce accuracy. As noted in Good Decision-Making and Financial Services: The Surprising Impact of Bias and Noise, "Algorithms are only as good as the data they are trained on. If the data is flawed, incomplete, or biased, the algorithm’s output will be equally flawed."
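The precision-versus-accuracy distinction can be made concrete with a toy sketch. The numbers and group labels below are entirely hypothetical: imagine historical records in which one group's creditworthiness was systematically understated. A simple "model" that learns averages from those records will produce consistent, repeatable (precise) predictions, yet remain systematically wrong (inaccurate) for the affected group, because the bias lives in the data, not in the arithmetic.

```python
import statistics

# Hypothetical historical records. Assume group_b's true creditworthiness
# averages ~700, but past practices recorded systematically lower scores.
history = {
    "group_a": [700, 710, 690, 705],  # records roughly match reality
    "group_b": [640, 650, 630, 645],  # records understate reality
}

# A "model" that learns the historical average per group. Its predictions
# are precise: low variance, perfectly repeatable.
model = {group: statistics.mean(scores) for group, scores in history.items()}

# Yet for group_b the model is inaccurate: it faithfully reproduces the
# systematic error it inherited from the training data.
assumed_true_mean_b = 700  # hypothetical ground truth, unavailable to the model
inherited_bias = assumed_true_mean_b - model["group_b"]

print(model)           # consistent predictions for each group
print(inherited_bias)  # systematic error carried forward from the data
```

The point of the sketch is that no amount of computational precision corrects the gap; only additional, valid data (or human review of the data's provenance) can.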


This paradox—where AI can be highly precise yet deeply biased—creates a challenge for individuals and organizations relying on these systems. We must understand that while AI can quickly achieve good results, unchecked biases can lead to harmful decisions, creating false confidence in the AI’s accuracy.


AI’s Speed: Beneficial, but at What Cost?


Generative AI’s speed is undoubtedly one of its most attractive features. It can complete tasks that would take humans hours in mere seconds, streamlining processes and enhancing productivity. For example, businesses can use AI to analyze customer feedback or monitor real-time market trends, making decisions based on a wealth of data that would be impossible to process manually. But this rapid pace of analysis and decision-making is where potential pitfalls lie.


The speed at which largely precise answers are rendered can give users a false sense of security in the decisions made using AI outputs. As AI quickly delivers answers, individuals may assume those answers are always correct, and therefore may not scrutinize the results closely enough. This could be particularly dangerous when biases are hidden within the data.


The 80/20 rule again comes into play here: AI can give us that quick 80%, but it’s critical to take the time to review and refine the final 20%. In some cases, striving for "great" might mean revisiting the data to identify and correct biases that the AI overlooked. In others, it may be as simple as applying human intuition to ensure decisions are ethical and fair. As Our Trade-off Life notes, "Great provides an opportunity to achieve 100% of the value, but you have a low(er) chance of achieving that value"—meaning that 100% perfection is often not necessary, but attention to key details is still critical.


The Role of Human Judgment


AI’s power is remarkable, but it still relies on human judgment to reach its full potential. In many ways, AI serves as a powerful tool that enables people to focus more on the complex, nuanced aspects of tasks rather than spending time on the basics. However, the final outcomes—especially in areas where fairness, ethics, and equity are at stake—must be carefully reviewed by people who understand the broader implications.


Dr. Marty Makary is a prominent surgeon and public health advocate known for emphasizing transparency and reform in healthcare. He argues that “medical blind spots,” such as entrenched practices and financial incentives, can lead to biased decisions that do not always serve patients’ best interests. Makary highlights that “systemic issues in healthcare often stem from an overreliance on outdated practices, rather than focusing on patient-centered, evidence-based approaches.” He calls for a shift toward greater humility within the medical community, encouraging openness to new evidence to address these persistent biases effectively.


A key part of leveraging AI successfully is recognizing when precision is good enough and when we need to invest more effort to eliminate bias. In highly sensitive sectors such as criminal justice or healthcare, failing to address bias could have severe consequences. For instance, biased data in criminal justice could lead to AI models that disproportionately assign higher risk scores to individuals from specific racial or socioeconomic backgrounds, reinforcing disparities in sentencing and parole decisions. In such cases, relying on unchecked AI recommendations might result in unfair outcomes, perpetuating systemic inequalities within the justice system.


As noted in Unequal Roots, Unequal Outcomes, "Bias in AI models is often a reflection of bias in society," and addressing it requires more than technological fixes—it requires societal change and a commitment to fairness. The human touch is essential in understanding the complex social dynamics that algorithms cannot fully grasp, and in ensuring that AI serves all people equitably.


Conclusion: Embrace the 80%, but Don’t Ignore the Last 20%


Generative AI offers a massive leap forward in precision and productivity, dramatically reducing the time it takes to complete complex tasks. However, while AI can get us 80% of the way toward a solution, it’s up to humans to ensure that the final 20% is both accurate and free of harmful bias. As we continue to integrate AI into more facets of life and work, we must remain vigilant in applying human judgment where AI falls short.


The Pareto Principle serves as a useful framework for understanding how to approach AI in practical settings—AI may quickly get you from "bad" to "good," but it’s up to you to determine when “good” is “good enough” and decide whether pursuing "great" is worth the effort. Most importantly, we must be aware of the biases AI can introduce and ensure that we are not lulled into complacency by the illusion of precision. The future of AI holds great promise, but only if we carefully balance its speed and power with human oversight.


Citations from Good Decision-Making and Financial Services: The Surprising Impact of Bias and Noise:


  1. Hulett, Good Decision-Making and Financial Services: The Surprising Impact of Bias and Noise, The Curiosity Vine, 2021 https://www.thecuriosityvine.com/post/good-decision-making-and-financial-services

  2. Mullainathan, S., Shafir, E. Scarcity: Why Having Too Little Means So Much, Times Books, 2013

  3. Makary, Marty. The Price We Pay: What Broke American Health Care—and How to Fix It. Bloomsbury Publishing, 2019.

  4. Makary, M., “How Biases and Blind Spots Undermine Modern Medicine,” New York Times, 2019.

  5. Makary, M., “On Transparency in Healthcare,” Johns Hopkins Medicine Blog, 2018.


Citations from Our Trade-off Life: How the 80/20 Rule Leads to a Healthier, Wealthier Life:


  1. Hulett, Our Trade-off Life: How the 80/20 Rule Leads to a Healthier, Wealthier Life, The Curiosity Vine, 2023 https://www.thecuriosityvine.com/post/life-is-a-series-of-tradeoffs

  2. Frank, A. Who Were Your Millionth-Great-Grandparents?, National Public Radio, 2017

  3. Thaler, R., Sunstein, C. Nudge: The Final Edition, 2021

  4. Ramsey, D. The Total Money Makeover: A Proven Plan for Financial Fitness, 1994

  5. Roberts, R. Wild Problems: A Guide to the Decisions That Define Us, 2022

  6. Dawkins, R. The Selfish Gene, 1976

  7. Levitt, S. D., Donohue, J. J. The Impact Of Legalized Abortion On Crime, The Quarterly Journal Of Economics, 2001

  8. Levitt, S. D., Donohue, J. J. The Impact of Legalized Abortion on Crime over the Last Two Decades, National Bureau of Economic Research, 2019

  9. Grant, A. Think Again: The Power of Knowing What You Don't Know, 2021


Citations from Unequal Roots, Unequal Outcomes: How Modern AI Grows Past Bias:


  1. Hulett, Unequal Roots, Unequal Outcomes: How Modern AI Grows Past Bias, The Curiosity Vine, 2024 https://www.thecuriosityvine.com/post/unequal-roots-unequal-outcomes-how-past-bias-grows-into-modern-ai

  2. Bender, E. M., Gebru, T., McMillan-Major, A., Mitchell, M. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, FAccT ’21, March 3–10, 2021 https://dl.acm.org/doi/10.1145/3442188.3445922

  3. Ludwig, S. “Credit Scores in America Perpetuate Racial Injustice. Here’s How.” Quartz.

  4. IBM. “What is Backpropagation?” IBM https://www.ibm.com/cloud/learn/backpropagation

  5. Blattner, L., Nelson, S. How Costly is Noise? Data and Disparities in Consumer Credit, The Quarterly Journal of Economics.


