Lina Khan, the chair of the US Federal Trade Commission, has attracted a lot of attention for her New York Times opinion piece on the regulatory issues posed by the rise of generative AI.
The “trajectory of the Web 2.0 era”, Khan points out, was not inevitable. The domination of the web by a handful of Big Tech companies was shaped by a series of policy and enforcement choices: US regulators chose to look the other way while the likes of Facebook and Google amassed a treasure trove of data on billions of people and acquired any company that looked likely to threaten their dominance.
In a follow-up interview with veteran tech journo Kara Swisher, Khan admitted that the real reason for regulatory inaction during the Web 2.0 era was the power of tech industry lobbyists, who have also managed to thwart progress on belated efforts to address data security and privacy, despite decent bipartisan support for reform in the US Congress.
Nevertheless, Khan claims that the US has all it needs to deal with the rise of generative AI.
“Although these tools are novel, they are not exempt from existing rules, and the FTC will vigorously enforce the laws we are charged with administering, even in this new market,” she wrote in the New York Times.
Tooth and nail fight
As I noted last week, the European Union has taken a very different view, drafting the AI Act, which creates a compliance regime for systems drawing on AI. No existing US legislation requires tech applications to be vetted by a regulator in the way new drugs or pesticides are. Lobbyists will fight tooth and nail to stop the US taking the same path as the EU.
Here, the closest thing we have to a regulator in this space, the Office of the Privacy Commissioner, this week echoed Khan’s sentiments.
“Generative AI is covered by the Privacy Act 2020 and my Office will be working to ensure that is being complied with; including investigating where appropriate,” Privacy Commissioner Michael Webster wrote.
He claims he has sufficient powers to deal with misuse of AI and has issued a seven-point memo outlining his “expectations” for how organisations approach the deployment of generative AI systems. That follows his letter to government agencies warning them against “prematurely jumping into using generative AI without a proper assessment” and urging a “whole of government” approach to how it is used.
The memo suggests that senior leaders in businesses and organisations decide how to implement generative AI, and that a privacy impact assessment be conducted before it is deployed. It is all sensible stuff, but our privacy legislation is generally considered to be weak, even after the new provisions that went into effect in 2020.
A weak Act
As University of Auckland academics pointed out this week, “the Act does not provide an adequate punitive fines regime, it does not provide the right to be forgotten and it does not specifically address algorithms, profiling or automated decision-making”.
We basically ended up with a new Privacy Act that was already well out of date before it came into effect. The academics hope the Minister of Justice’s “assurances of ongoing review and incremental reform of the Act” can spur legislative amendments as required.
With the deluge of new generative AI applications, we may find out sooner rather than later just how fit for purpose our legislation is for the era of AI.