Since the start of the widespread protests against racial inequality, IBM has announced that it will cancel its facial recognition programs to advance racial equity in law enforcement. Amazon has suspended police use of its Rekognition software for a year to “put in place stricter regulations to govern the ethical use of facial recognition technology.”
But we need more than regulatory change; the whole field of artificial intelligence (AI) must come out of the computer lab and embrace the expertise of the wider community.
We can develop amazing AI that works in the world in a largely unbiased way. But to achieve this, AI cannot simply be a subdomain of Computer Science (CS) and Computer Engineering (CE), as it is now. We need to create an academic discipline of AI that takes into account the complexity of human behavior. We need to move from IT-owned AI to IT-based AI. AI problems don’t happen in the lab; they happen when scientists move technology into the real world of people. The training data in the CS lab often lacks the context and complexity of the world you and I inhabit. This flaw perpetuates biases.
AI-based algorithms have been found to display biases against people of color and against women. In 2014, for example, Amazon found that an AI algorithm developed to automate headhunting had learned to discriminate against female candidates. MIT researchers reported in January 2019 that facial recognition software is less accurate at identifying people with darker skin. More recently, in a study published at the end of last year by the National Institute of Standards and Technology (NIST), researchers found evidence of racial bias in nearly 200 facial recognition algorithms.
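The kind of disparity the NIST and MIT studies describe is usually surfaced by disaggregating a model's error rate across demographic groups rather than reporting a single overall accuracy. A minimal sketch of that audit step, using entirely synthetic labels and groups (the function name and data are illustrative, not from any of the studies cited):

```python
def per_group_error_rate(y_true, y_pred, groups):
    """Return {group: error rate}, disaggregating errors by demographic group."""
    totals, errors = {}, {}
    for actual, predicted, group in zip(y_true, y_pred, groups):
        totals[group] = totals.get(group, 0) + 1
        if actual != predicted:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Synthetic example: overall accuracy is 62.5%, but the errors
# are not evenly distributed across groups "A" and "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(per_group_error_rate(y_true, y_pred, groups))  # → {'A': 0.25, 'B': 0.5}
```

A single aggregate accuracy number would hide exactly the gap this breakdown exposes, which is why audits like NIST's report results per demographic group.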
Despite the countless examples of AI mistakes, the zeal continues. This is why the announcements from IBM and Amazon generated such positive media coverage. Global use of artificial intelligence increased by 270% from 2015 to 2019, and the market is expected to generate revenues of $118.6 billion by 2025. According to Gallup, almost 90% of Americans already use AI products in their daily lives – often without even realizing it.
Beyond a 12-month hiatus, we must recognize that while building AI is a technological challenge, the use of AI requires non-software disciplines such as the social sciences, law, and politics. But despite our increasingly ubiquitous use of AI, AI as a field of study is still lumped into the fields of CS and CE. At North Carolina State University, for example, algorithms and AI are taught in the CS curriculum. MIT hosts the study of AI under CS and CE. AI needs to make its way into humanities programs, race and gender curricula, and business schools. Let’s develop an AI track in political science departments. In my own program at Georgetown University, we teach concepts of AI and machine learning to security studies students. This must become standard practice.
Without a broader approach to professionalizing AI, we will almost certainly perpetuate the prejudices and discriminatory practices that exist today. Otherwise we will merely discriminate more cheaply – not a lofty goal for technology. We need the intentional establishment of a field of AI whose goal is to understand both the development of neural networks and the social contexts in which the technology will be deployed.
In computer engineering, a student studies programming and the fundamentals of computer science. In computer science, they study computational and programmatic theory, including the basics of algorithmic learning. These are a solid foundation for studying AI, but they should be seen only as components: necessary to understand the field, but not sufficient on their own.
In order for people to feel comfortable with a large-scale deployment of AI, so that tech companies like Amazon and IBM, and countless others, can deploy these innovations, the whole discipline must go beyond the CS lab. Those who work in disciplines such as psychology, sociology, anthropology, and neuroscience are needed. Understanding patterns of human behavior and the biases embedded in data-generation processes is essential. I could not have created the software I developed to identify human trafficking, money laundering, and other illicit behavior without my training in the behavioral sciences.
Responsible management of machine learning processes is no longer just desirable; it is a necessary component of progress. We must recognize the pitfalls of human prejudice and the danger of replicating those prejudices in the machines of tomorrow, and the social and human sciences provide the keys. We can only achieve this if a new field of AI, encompassing all these disciplines, is created.