As a cybersecurity reporter at ProPublica, much of my work over the past two years has focused on how the federal government and its IT contractors, like Microsoft, have navigated major technology transitions. The one making the news every day is artificial intelligence.
This emerging technology has a grip on everyone: individuals, businesses, and the federal government are all rushing to use it. President Donald Trump and his Cabinet say AI will transform the nation, making it more prosperous, more efficient and safer — if only we can adopt it quickly enough.
But this message is not new. President Barack Obama’s administration used nearly identical language fifteen years ago, as the United States embarked on the cloud computing technology revolution.
I’ve studied how the federal government has managed — and mismanaged — this transition over the past two decades, and my reporting offers valuable cautions and lessons as policymakers encourage the use of AI and federal agencies adopt the technology.
Lesson 1: There is no such thing as a free lunch
THEN: In the early 2020s, a series of cyberattacks linked to Russia, China and Iran shook the federal government. The Biden administration called on big tech companies to help the United States strengthen its defenses. In response, Microsoft CEO Satya Nadella committed $150 million in technical services to help the government improve its digital security. The company also offered a “free” security upgrade to government customers.
NOW: Last year, the Trump administration announced a series of deals with technology companies intended to help federal agencies “purchase enterprise AI tools at prices favorable to the government.” Agencies could use OpenAI’s ChatGPT for $1, Google’s Gemini for 47 cents and xAI’s Grok for 42 cents. The administration hoped the low prices would make it “easier for federal teams to acquire powerful AI capabilities…to improve mission execution and operational effectiveness.”
Takeaways: Beware of gifts. Our investigation into Microsoft’s seemingly simple commitment revealed a more complex, profit-driven agenda. After installing the upgrades, federal customers would be effectively locked in: switching to a competitor once the free trial ended would be cumbersome and expensive, leaving them little choice but to pay higher subscription fees. The plan worked. A former Microsoft salesman told me that “the success was beyond what any of us could have imagined.” In response to questions about that commitment, Microsoft said its “sole focus during this period was to support an urgent request from the Administration to improve the security posture of federal agencies that were continually targeted by sophisticated state-level threat actors.”
Agencies looking to purchase AI tools at discounted rates today should consider how costs might increase in the future. The General Services Administration warns that “the costs of using AI can increase rapidly without appropriate oversight and management controls” and advises agencies to “set limits on use and regularly review consumption reports.”
Lesson 2: Oversight programs are only as effective as their resources
THEN: During the Obama era, the federal government shifted its computing needs and sensitive information to data centers owned and operated by private companies. Recognizing the potential risks, the administration created the Federal Risk and Authorization Management Program, or FedRAMP, in 2011 to help ensure the security of the cloud computing services it encouraged U.S. agencies to use.
But in my recent investigation of the program, I found it was no match for Microsoft, which effectively wore down the FedRAMP team over five years as the company sought the program’s seal of approval for a major cloud offering known as GCC High. Despite serious reservations about its cybersecurity, FedRAMP ultimately authorized the product, in part because it lacked the resources to keep pushing back. In response to questions, Microsoft told me, “We stand behind our products and the comprehensive steps we have taken to ensure that all FedRAMP-authorized products meet the necessary security and compliance requirements.”
NOW: Today, this small outpost within the General Services Administration has even fewer resources to oversee the cloud technology the government relies on, including AI. FedRAMP says it now operates “with an absolute minimum of support staff” and “limited customer service.” The program was one of the first targets of the Trump administration’s Department of Government Efficiency.
Takeaways: FedRAMP, which a 2024 White House memo said must be “an expert program that can analyze and validate the security claims” of cloud providers, is now little more than a rubber stamp for the tech industry, former employees told me. As federal agencies adopt AI tools that rely on large quantities of sensitive information, the implications of this downsizing for federal cybersecurity are far-reaching. A GSA spokesperson defended the program and said FedRAMP “now operates with enhanced oversight and accountability mechanisms.”
Lesson 3: “Independent” reviews are independent only up to a point
THEN: The government has long relied on so-called third-party assessors to verify the security claims of cloud service providers like Microsoft and Google. In theory, these companies are supposed to be independent experts who recommend to FedRAMP whether a product meets federal standards. But in practice, their independence carries an asterisk: they are paid by the companies they evaluate.
My recent investigation found that this setup creates an inherent conflict of interest. In the case of Microsoft’s GCC High, two assessors recommended the product even though they couldn’t fully evaluate it, according to a former FedRAMP reviewer. One of the companies did not respond to my questions; the other denied this account.
We found that FedRAMP is well aware of how financial arrangements between cloud computing companies and their assessors can distort official conclusions about cybersecurity issues. The program even created a “back channel” to encourage reviewers to share concerns they might not otherwise raise in their official reports, for fear of angering their technology clients and losing business.
NOW: With FedRAMP reduced to little more than a “paper pusher,” as one former GSA official put it, these third-party assessment companies have taken on even more importance in the verification process. In response to questions from ProPublica, GSA said the FedRAMP system “does not create an inherent conflict of interest for professional auditors meeting ethical and contractual performance expectations.” The agency did not respond to questions about the program’s back channel.
Takeaways: The pendulum has essentially swung back to the pre-FedRAMP era, when each federal agency was individually responsible for monitoring the products it used. GSA told me that FedRAMP’s job is to “ensure that agencies have sufficient information to make these risk decisions.” The problem is that agencies often lack the staff and resources to conduct thorough reviews, meaning the entire system relies on the claims of cloud computing companies and the reviews of the third-party companies they pay to evaluate them.
