Human Rights and Artificial Intelligence Forum @MIGS, Concordia U.
Zach Devereaux (left) interviewed by the MIGS podcast team, Alexandrine Royer and Duncan Cooper.
Last Friday, we presented our disinformation work at the Montreal Institute for Genocide and Human Rights (MIGS) at Concordia University. There, we heard from experts in data science, machine learning and international governance and policy. We’ve compiled a summary of the talks we attended, and you can watch videos from the conference on the MIGS Facebook page for more details.
Our own Zach Devereaux, Director of Public Sector Relations at Nexalogy, was invited to speak alongside prominent academics Enzo Maria Le Fevre Cervini and Alexander Görlach, Canadian government specialist Tara Denham, lawyer Mirka Snyder Caron, and disarmament expert Erin Hunt.
Building on remarks by previous speakers, Devereaux explained the difference between supervised machine learning (which relies on human-annotated data) and unsupervised machine learning (where the algorithm discovers the relevant patterns for decision-making on its own), and gave the examples of Alexa and Spotify as technologies that rely on our feedback to learn and decide, for instance, what songs or content to recommend to each user.
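To make the distinction concrete, here is a minimal, purely illustrative sketch in Python (not anything Devereaux presented): a supervised predictor leans on human-provided labels, while an unsupervised routine finds structure in the same kind of data with no labels at all. The toy data and threshold values are invented for the example.

```python
# Toy data: numbers on a line. The labels ("low"/"high") stand in for
# the human annotations that supervised learning depends on.
labeled = [(1.0, "low"), (1.2, "low"), (8.0, "high"), (8.3, "high")]
unlabeled = [1.1, 1.3, 7.9, 8.1]

def supervised_predict(x):
    """Supervised: predict using the nearest human-labeled example."""
    return min(labeled, key=lambda p: abs(p[0] - x))[1]

def unsupervised_clusters(points, iters=10):
    """Unsupervised: a tiny 1-D 2-means that discovers two groups
    without ever seeing a label."""
    c1, c2 = min(points), max(points)  # start centers at the extremes
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)  # move centers
    return sorted(g1), sorted(g2)

print(supervised_predict(1.1))          # classified via human labels
print(unsupervised_clusters(unlabeled))  # groups found with no labels
```

The recommendation engines Devereaux mentioned blend both ideas: explicit feedback (skips, likes) acts as labels, while listening patterns are mined without any.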
Devereaux demonstrated use cases for our social media discovery and analysis system NexaIntelligence, including how it was used during NATO’s Trident Juncture 2018 exercise in Norway and in the aftermath of the frigate Helge Ingstad’s sinking in Norwegian waters. NexaIntelligence detected bot accounts spreading both pro- and anti-NATO tweets, as well as accounts propagating misinformation about the exercise. The last tweet in the table of top tweets below claims NATO forces would invade Norway and the rest of Scandinavia, which is, of course, not an accurate description of the joint-forces exercise.
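As a rough illustration of the kind of signals bot detection draws on, here is a hypothetical scoring heuristic. This is not Nexalogy’s actual method, and every threshold is invented; real systems combine many more signals (posting cadence, content similarity, network structure) with trained models.

```python
# Hypothetical bot-likelihood heuristic -- illustrative only,
# NOT the NexaIntelligence detection algorithm.
def bot_score(account):
    """Return a 0-3 score; higher = more bot-like. Thresholds are made up."""
    score = 0
    if account["tweets_per_day"] > 100:   # inhuman posting volume
        score += 1
    if account["followers"] < 10:         # almost no organic audience
        score += 1
    if account["account_age_days"] < 30:  # freshly created account
        score += 1
    return score

suspect = {"tweets_per_day": 250, "followers": 3, "account_age_days": 7}
normal = {"tweets_per_day": 5, "followers": 400, "account_age_days": 2000}
print(bot_score(suspect), bot_score(normal))
```

An account scoring high on all three signals at once is far more suspicious than one tripping a single threshold, which is why such signals are combined rather than used alone.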
Watch Zach explaining the Helge Ingstad misinformation campaign:
Devereaux then showcased Nexalogy’s work on the Saudi Arabia-Canada rift of 2018: the propaganda filtering technology showed that of the 500,000 tweets about the diplomatic rift, 7,000 came from accounts sympathetic to Russia and known for spreading questionable information. In addition, NexaIntelligence helped identify prominent media figures such as Ezra Levant, Tucker Carlson and Jake Tapper as the main catalysts for these conversations. The tweet at the core of the controversy (featured below) showed a plane headed for Toronto, an image reminiscent of the 9/11 tragedy in the United States.
Find out more about Nexalogy’s disinformation and propaganda filtering methods or contact us to know how our technology can be applied to your security and business needs.
Enzo Maria Le Fevre Cervini spoke about the need for a human-centered approach to governance and the use of AI in the public sector, and gave us concrete applications for tools based on machine learning and natural language processing. He explained how AI was used to identify the language spoken by newly arrived asylum seekers in Europe, allowing officials to communicate with them more effectively. He told us of an ongoing project to detect Fake News bots, and of a tool that automates gender-based bias detection and reporting. This last tool is already in place and has reduced processing time for cases of gender-based violence by prioritizing the files sent to the attorney general’s office. Finally, he explained how AI helped predict militia attacks on villages a few hours before they occurred.
Enzo Maria Le Fevre Cervini is the Director at the Budapest Centre for Mass Atrocities Prevention, Research Coordinator at the AI Lab in the University of Buenos Aires and currently Coordinator of the thematic group on Emerging technologies in the public sector on behalf of the Agency for Digital Italy.
Alexander Görlach made a strong argument for empathy as a means to counter modern autocracies and populism (and save liberal democracies?), which often incite resentment to further divide societies. Citing 1960s sociologists who analyzed the rise of fascism and its appeal, Görlach emphasized the authors’ concerns over mass consumerism, which they thought was not in the public’s interest. We also discussed the dangers of big tech feeding its readers homogeneous content and how that could increase the polarization of societies by keeping individuals in distinct ideological funnels. Finally, Görlach noted the axiomatic shift in our current society, which he said is now divided along the lines of cosmopolitanism against patriotism (see David Goodhart’s work).
Democratic societies (based on a constitution and human rights), he said, stand in opposition to the rest of the world and are now facing a crisis because of widening gaps within these democracies. In Görlach’s opinion, Fukuyama’s argument in The End of History? is being validated by the changes we see in contemporary societies, namely, the “universalization of Western liberal democracy as the final form of human government” (Fukuyama, 1989). Fukuyama’s argument, which we inevitably must simplify for the purposes of this article, is essentially that the death of Communism as an alternative to liberal democracy does indicate we may have reached Hegel’s “end of history” in the sense that no “viable systematic alternatives to Western liberalism” and consumerism have appeared. The author goes into further detail about “unresolved grievances” of minority ethnic and religious groups, which is why “terrorism and wars of national liberation will continue”, but his main argument is that the “Common Marketization of international relations” is continuing to spread and is becoming the norm. Read Görlach’s own article siding with Fukuyama’s theory.
Cited authors & publications: Fukuyama, The End of History?, 1989 (JSTOR)
Huntington, The Clash of Civilizations?, 1993 (JSTOR)
Alexander Görlach is the Carnegie Council Senior Fellow for Ethics in International Affairs, Senior Research Associate at Cambridge University, and Editor in Chief of Conditio Humana.
Tara Denham presented on her work for the Canadian government and gave us details on the aftermath of the G7 conference on AI of December 2018: an international panel of AI experts, academics, and relevant civil society actors is being assembled to develop recommendations for participating governments. She also spoke about China’s social credit system, which relies on citizens’ data being funneled to the government by the private companies that own it. Finally, she described academic research on emerging technologies, including but not limited to AI, and work on issues specific to the Global South.
Tara Denham is the Director of the Centre for International Digital Policy at Global Affairs Canada, which includes the Digital Inclusion Lab and the G7 Rapid Response Mechanism.
Mirka Snyder Caron presented her work for the Montreal AI Ethics Institute as a lawyer specializing in intellectual property and fintech. She spoke of the benefits and dangers of behavior nudging, using the example of Google’s auto-reply and smart compose functionalities in Gmail. She also led a discussion on AI biases, as machine learning technology can entrench a status quo and exacerbate biases depending on the materials used to train it. Snyder Caron recommended that governance take place early in the creation of an AI, and said a preventative approach would be to ensure diversity in the team training the AI to counter biases. Finally, the lawyer mentioned growing concerns over the efficacy of some AIs in emulating human speech, including verbal tics. Enshrining in law an obligation to disclose whether one is interacting with an AI or a real person would help avoid confusion, Snyder Caron concluded.
Mirka Snyder Caron is an Associate at the Montreal AI Ethics Institute, Manager at the L.L.C. Holdings inc., and certified lawyer working on IP, FinTech and BigData.
Erin Hunt presented on her work at Mines Action Canada and international negotiations for a legally-binding ban on lethal autonomous weapons systems (LAWS), also known as killer robots. Hunt spoke of the importance of humanitarian disarmament in zones such as Syria and Yemen, where landmines are still very much present and responsible for life-threatening injuries and deaths. She mentioned Google’s latest debacle with employees opposing the company’s involvement in warfare. About 4,000 employees of the tech giant refused to continue working on technology that would be used to wage war.
Most importantly, Hunt said, the biases that result from AIs being trained largely by men, combined with the automated use of weapons, are certain to bring about a flurry of ethical issues. She elaborates on this point in her latest article, “Why ‘killer robots’ are neither feminist nor ethical.” Hunt regularly publishes writings at Open Canada on matters such as feminism, Canadian policy and diplomacy regarding automated weapons, and humanitarian disarmament.
Erin Hunt is Program Manager at Mines Action Canada and a Humanitarian Disarmament Expert.
We want to thank the team at the Montreal Institute for Genocide and Human Rights (MIGS) and Concordia University for hosting us as part of this exciting panel on the state of AI and Human Rights, as well as all the other speakers, whose presentations taught us much about local and international efforts to use AI within an ethical and human-centered framework.