Lawmakers dive into AI ethics

Lawmakers in recent months have offered a slew of bills to oversee the use of artificial intelligence (AI) amid worries about the potential discriminatory effects of the technology.

Those efforts have been hailed by civil rights groups who say the government should provide more oversight of AI technology. But any legislation faces an uphill battle with tech companies eager to avoid more government regulation.

“What we’ve seen from a lot of companies is that they’re trying to get out in front of it,” Jameson Spivack, a policy associate with Georgetown Law’s Center on Privacy and Technology, told The Hill. “They’re recognizing popular sentiment is turning against them.”

Congress is paying more attention to the issue of technology and discrimination. Lawmakers in recent months have introduced first-of-their-kind bills on topics such as the use of facial recognition technology and safeguards to prevent biased algorithms.

Silicon Valley, though, is working overtime to make sure the industry has a seat at the table as those efforts advance.

The big question: Can Congress and the tech industry work together on crafting rules, or will they end up at odds?

“The internet industry is committed to working with policymakers and other stakeholders to ensure new technologies are not creating or reinforcing unfair bias,” Sean Perryman, the Internet Association’s director of diversity and inclusion policy, told The Hill in a statement.

The Internet Association, a tech industry trade group, represents companies at the forefront of AI including Microsoft, Amazon, Google and Facebook.

In the House: The Internet Association earlier this year supported a resolution introduced by Reps. Ro Khanna (D-Calif.) and Brenda Lawrence (D-Mich.), which called for the “ethical development” of artificial intelligence technology.

The resolution, which was light on specifics, called for the creation of AI ethics guidelines that would “empower women and underrepresented or marginalized populations” and offer “accountability and oversight for all automated decisionmaking.” It attracted endorsements from tech companies, including IBM and Facebook.

Khanna, who represents Silicon Valley, told The Hill that he and Lawrence are now working to assemble a group of stakeholders in the AI ethics debate — including academics, civil rights advocates and tech companies — to develop a framework that will guide any legislation he introduces on the issue.

“Congress doesn’t have the expertise to address this within our own building,” Khanna said. “We need to go outside to the academics, to thinkers in this space, to people who really understand what is happening and have their expertise. Then we can debate the appropriate framework.”

In the Senate: While Lawrence and Khanna work with industry to assemble a group of AI experts, other members of Congress are barreling ahead with legislation on biased algorithms and facial recognition technology.

A bill introduced last week by a group of Democrats from both chambers, including Sens. Ron Wyden (D-Ore.) and Cory Booker (D-N.J.), a 2020 presidential candidate, would require companies to review their computer algorithms for “unfair, biased or discriminatory” decisionmaking. The Federal Trade Commission would enforce and oversee those assessments.

Lawmakers have also increasingly called for regulations on facial recognition technology, which analyzes human faces for the purpose of identifying them. Last month, Sens. Brian Schatz (D-Hawaii) and Roy Blunt (R-Mo.) introduced a first-of-its-kind bill that would require third-party testing and human review of facial recognition technologies before they are made widely available.

What’s next: Lawrence told The Hill that, as it stands, the tech industry is the “Wild West,” and she believes regulation must keep pace with the technology as it changes.

“I think that any legislation needs to recognize that while these technologies affect everyone, they disproportionately affect vulnerable people,” Spivack said.