INTRODUCTION
On July 23, 2025, the Trump Administration published Winning the Race: AMERICA’S AI ACTION PLAN (“AI Action Plan”).[1] Among the goals of the AI Action Plan is the elimination of inappropriate bias[2] and false information in the government’s AI systems. The distortions caused by inappropriate bias, which may be introduced at any number of stages in what has been called “the AI pipeline,” lead to systems and outputs that are unreliable and, in some instances, injurious.[3] A core concern is that the outputs of such flawed systems can contain AI hallucinations, a phenomenon in which “a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”[4]

The roots of this concern are diverse and run deep, but a conspicuous and widely publicized example of a hallucinating AI is Google’s attempt to avoid bias and promote diversity in the outputs of its generative AI Gemini model. Google’s tampering with historical truths was ostensibly well-intentioned and, in itself, harmless, but it exposed underlying AI-related concerns that carry more serious and potentially injurious consequences. Those consequences arise when a flawed AI system’s output is relied on to make decisions that affect people’s lives. The following text briefly recaps the Google Gemini matter and then discusses the aspects of the AI Action Plan that are intended to identify and eliminate inappropriate bias and falsity in government AI systems.
*This article is the first in a series of discussions of Winning the Race: AMERICA’S AI ACTION PLAN, issued by the Trump Administration on July 23, 2025.
AUTHOR
Gary Rinkerman is a Founding Partner at the law firm of Pierson Ferdinand, LLP, an Honorary Professor of Intellectual Property Law at Queen Mary University of London School of Law, a member of George Mason University’s Center for Assurance Research and Engineering, and a Senior Fellow at George Mason University’s Center for Excellence in Government Cybersecurity Risk Management and Resilience. The views and information provided in this article are solely the work of the author and do not constitute legal advice. They are not for attribution to any entity that the author represents, with which he is affiliated, or of which he is a member. All Internet citations and links in this article were visited and validated on July 27, 2025.