
Google, following on the heels of OpenAI, published a policy proposal in response to the Trump Administration's call for a nationwide "AI Action Plan." The tech giant endorsed weak copyright restrictions on AI training, as well as "balanced" export controls that "protect national security while enabling U.S. exports and global business operations."
"The U.S. needs to pursue an active international economic policy to advocate for American values and support AI innovation internationally," Google wrote in the document. "For too long, AI policymaking has paid disproportionate attention to the risks, often ignoring the costs that misguided regulation can have on innovation, national competitiveness, and scientific leadership — a dynamic that is beginning to shift under the new Administration."
One of Google's more controversial recommendations relates to the use of IP-protected material.
Google argues that "fair use and text-and-data mining exceptions" are "critical" to AI development and AI-related scientific innovation. Like OpenAI, the company seeks to codify the right for it and rivals to train on publicly available data, including copyrighted data, largely without restriction.
"These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders," Google wrote, "and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation."
Google, which has reportedly trained a number of models on public, copyrighted data, is battling lawsuits with data owners who accuse the company of failing to notify and compensate them before doing so. U.S. courts have yet to decide whether fair use doctrine effectively shields AI developers from IP litigation.
In its AI policy proposal, Google also takes issue with certain export controls imposed under the Biden Administration, which it says "may undermine economic competitiveness goals" by "imposing disproportionate burdens on U.S. cloud service providers." That contrasts with statements from Google competitors like Microsoft, which in January said that it was "confident" it could "comply fully" with the rules.
Importantly, the export rules, which seek to limit the availability of advanced AI chips in disfavored countries, carve out exemptions for trusted companies seeking large clusters of chips.
Elsewhere in its proposal, Google calls for "long-term, sustained" investments in foundational domestic R&D, pushing back against recent federal efforts to reduce spending and eliminate grant awards. The company said the government should release data sets that might be helpful for commercial AI training, and allocate funding to "early-market R&D" while ensuring computing and models are "broadly available" to scientists and institutions.
Pointing to the chaotic regulatory environment created by the U.S.' patchwork of state AI laws, Google urged the government to pass federal legislation on AI, including a comprehensive privacy and security framework. Just over two months into 2025, the number of pending AI bills in the U.S. has grown to 781, according to an online tracking tool.
Google cautions the U.S. government against imposing what it perceives to be onerous obligations around AI systems, such as usage liability obligations. In many cases, Google argues, the developer of a model "has little to no visibility or control" over how a model is being used and thus shouldn't bear responsibility for misuse.
Historically, Google has opposed laws like California's defeated SB 1047, which clearly laid out what would constitute precautions an AI developer should take before releasing a model and in which cases developers might be held liable for model-induced harms.
"Even in cases where a developer provides a model directly to deployers, deployers will often be best positioned to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging," Google wrote.
In its proposal, Google also called disclosure requirements like those being contemplated by the EU "overly broad," and said the U.S. government should oppose transparency rules that require "divulging trade secrets, allow rivals to duplicate products, or compromise national security by providing a roadmap to adversaries on how to circumvent protections or jailbreak models."
A growing number of countries and states have passed laws requiring AI developers to disclose more about how their systems work. California's AB-2013 mandates that companies developing AI systems publish a high-level summary of the data sets they used to train their systems. In the EU, to comply with the AI Act once it comes into force, companies will have to supply model deployers with detailed instructions on the operation, limitations, and risks associated with the model.