This has just become a big week for AI regulation

The EU is known for its hard line against Big Tech, but the FTC has taken a softer approach, at least in recent years. The agency is meant to police unfair and dishonest trade practices. Its remit is narrow: it does not have jurisdiction over government agencies, banks, or nonprofits. But it can step in when companies misrepresent the capabilities of a product they are selling, which means firms that claim their facial recognition systems, predictive policing algorithms, or healthcare tools are not biased may now be in the line of fire. “Where they do have power, they have enormous power,” says Calo.

Taking action

The FTC has not always been willing to wield that power. Following criticism in the 1980s and ’90s that it was being too aggressive, it backed off and picked fewer fights, especially against technology companies. This looks to be changing.

In the blog post, the FTC warns vendors that claims about AI must be “truthful, non-deceptive, and backed up by evidence.”

“For example, let’s say an AI developer tells clients that its product will provide ‘100% unbiased hiring decisions,’ but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination—and an FTC law enforcement action.”

The FTC action has bipartisan support in the Senate, where commissioners were asked yesterday what more they could be doing and what they needed to do it. “There’s wind behind the sails,” says Calo.

Source: MIT Technology Review
