Lina Khan, Chair of the Federal Trade Commission (FTC), published an opinion piece in the New York Times today, outlining her agency’s five main concerns about the potential dangers of artificial intelligence (AI) (see below for a list).
One of Ms. Khan’s concerns regarding AI is valid and should indeed be taken on by the FTC: making sure powerful tech giants don’t become even more powerful.
Her other concerns, however, are less on point. Protecting users’ privacy is, of course, important. But it is important wherever personal data is processed; it is not a problem unique to AI. It is also the province of privacy regulation (which, by the way, the United States is still owed at the federal level).
And sure, AI may make the job of scammers and price-fixers easier. But it is hard to see how anything in the nature of AI makes either problem qualitatively worse. AI makes bad actors more “productive” – just as it makes you and me more productive – but it does not bring about a step change toward catastrophe. The FTC’s existing measures against bad actors should suffice. (And the agency can use AI to make catching the bad guys easier, too.)
Ms. Khan’s concerns about bias in the training of AI systems may be more of an expression of the current administration’s own bias about bias. AI bias does not seem to be a widespread issue, and introducing anti-AI-bias regulation where little real-world bias exists could stifle day-to-day operations and innovation.
Curiously, one important item is missing from Ms. Khan’s list altogether – AI companies scraping content providers’ and publishers’ databases without their consent and without paying them for it. But perhaps the FTC believes that the free market will take care of that; and perhaps it will.
The FTC’s list of concerns should be seen as an early draft of the agency’s approach to regulating AI. These are the very early days of at-large AI products, and it is hard to say what even needs to be regulated yet. Making sure that the tech oligopoly doesn’t become even more powerful is one candidate. Keeping tech giants from scraping third parties’ content to train their AI products without offering recompense is another.
But whatever the FTC is considering, it – along with the administration at large and Congress – needs to weigh two goals against each other. On the one hand, they need to protect consumers and fair competition. On the other, they need to give the industry as much leeway as possible to drive innovation, especially vis-a-vis our rivalry with China. If they strike that balance, the uber-technology that is AI will give the United States a competitive edge that China, given the restrictions AI developers face in a totalitarian system, will find hard to match.
In light of the FTC’s dual mission to guarantee fair competition and protect consumers, FTC chair Lina Khan sees the following five main dangers coming from AI:
- Makes powerful tech giants even more powerful. Developing AI, particularly large language models (LLMs) such as OpenAI’s GPT, requires vast amounts of computing power and money. This could make AI platforms affordable only to big companies such as Microsoft, Google, and Amazon. Smaller players would need to base their AI products on the big companies’ AI platforms – making them dependent on those companies. (We have seen recently what not controlling the platform your product runs on can do to you. When Apple introduced privacy measures on iOS that severely cut into Meta’s ad sales, all Mark Zuckerberg could do was seethe impotently.)
- Makes price collusion easier. Ms. Khan also argues that AI tools could help companies coordinate the pricing of their products, increasing costs for consumers and companies. (This is coming a little out of left field – price collusion does not seem to be a widespread problem, and it is hard to see how anything in the nature of AI worsens the issue in a qualitative way.)
- Helps scammers. Ms. Khan also sees AI tools making the job of fraudsters easier by helping them create phishing emails, fake websites, and fake product reviews. She also believes deepfake videos and voice clones could facilitate fraud and extortion on a massive scale. (The former item, as above, does not stem from anything in the nature of AI. AI simply makes the fraudster’s job easier, just as AI makes it easier for me to write this research note. The latter item, deepfakes, seems to be more a problem of disinformation, where a fake Barack Obama might endorse the real Donald Trump, to use an extreme example. But of course, that is not in the purview of the FTC. This is perhaps something where, eventually, Congress will have to weigh the value of free speech against the value of a stable society not riven by a constant stream of very believable lies.)
- Automates discrimination. Here, the concern is that AI systems may be trained on information that contains errors and bias, which could automate discrimination on a large scale, “unfairly locking out people from jobs, housing, or key services.” (Bias in AI training, long a pet complaint of the current administration, is a fair concern – the Google facial recognition project comes to mind, where engineers trained the AI on a database of portrait photos that consisted mostly of white people. This, however, seems to have been an issue of early AI development, where proof of concept mattered more than political correctness. To be sure, AI systems will eventually be trained on the largest data sets developers can possibly get their hands on – possibly the entirety of the Internet in the case of LLM chatbots.)
- Invades users’ privacy. If AI models were trained on emails, chats, and texts, they could disclose personal or sensitive information. (A fair enough concern, given how close to the wind our industry has sailed in the past when it comes to privacy protection. Even the idea that an AI would rummage through people’s underwear drawers is unsettling, short of it disclosing any information. However, this seems to fall more into the realm of privacy protection regulations such as California’s CCPA and the EU’s GDPR.)