The US government may create a digital bill of rights to protect people against artificial intelligence technologies.
The White House’s Office of Science and Technology Policy is examining how biometric technologies – such as facial recognition – affect people. In particular, it is considering how stakeholders are “impacted by their use or regulation.”
In a Wired opinion piece that refers to America’s Bill of Rights, two government advisors call for action. Eric Lander and Alondra Nelson said: “Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a ‘bill of rights’ to guard against the powerful technologies we have created.”
The writers set out a range of options open to the US government. Those include requiring federal contractors to follow a bill of rights, or adopting further laws and regulations to curb and control the use of such technologies.
At the moment, the committee is in fact-finding mode. It is calling on all of those concerned with or by such technologies to submit their thoughts, experiences and ideas.
The context for the proposed bill is the rapid take-up of machine decision-making technologies by companies. While web searches and music apps have long made use of AI to suggest products and tunes, algorithms are now prevalent in daily life.
Because many algorithms are trained on historical data, bias can creep in subtly. For example, data sets used to train algorithms sometimes date back to the 1960s, when attitudes on race and gender were different. Those attitudes often become embedded in machine decisions in ways that discriminate against women or racial minorities.
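The mechanism is simple to demonstrate with a toy sketch. In the hypothetical example below (the data, group names and "model" are entirely invented for illustration), a classifier that does nothing more than reproduce patterns in past hiring decisions inherits the disparity baked into those decisions:

```python
from collections import defaultdict

# Hypothetical historical hiring records: (applicant group, was hired).
# The disparity here is invented, standing in for biased past decisions.
historical_data = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", False), ("group_b", True),
]

def train_majority_model(records):
    """Return a per-group majority-vote classifier - a crude stand-in
    for any model that learns the patterns in its training data."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, not hired]
    for group, hired in records:
        counts[group][0 if hired else 1] += 1
    return {group: c[0] >= c[1] for group, c in counts.items()}

model = train_majority_model(historical_data)
# For two equally qualified applicants, the model simply replays
# the historical disparity:
print(model["group_a"])  # True  - predicted "hire"
print(model["group_b"])  # False - predicted "don't hire"
```

Real systems are far more complex, but the principle is the same: without careful auditing of the training data, the model optimises for matching past decisions, biases included.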
“In the United States, some of the failings of AI may be unintentional, but they are serious and they disproportionately affect already marginalized individuals and communities,” Lander and Nelson say. “They often result from AI developers not using appropriate data sets and not auditing systems comprehensively, as well as not having diverse perspectives around the table to anticipate and fix problems before products are used (or to kill products that can’t be fixed).”
Not surprisingly, the US software trade association BSA favours self-regulation. It wants companies to conduct their own risk assessments and demonstrate how risks are being mitigated.
“It enables the good that everybody sees in AI but minimises the risk that it’s going to lead to discrimination and perpetuate bias,” Aaron Cooper, BSA’s vice president of global policy, told AP News.