Arun Balajiee

PhD Student, Intelligent Systems Program, University of Pittsburgh

Computer vision: who is harmed and who benefits?

01 Sep 2020 - Arun Balajiee

Talk Author: Timnit Gebru

Talk Date: 09/01/2020

The idea behind the talk was to draw out the larger implications of Computer Vision (CV), not just as a “computational field” but as a field whose applications enter people’s day-to-day interactions. CV has become deeply integrated into the decision-making processes of law enforcement agencies, healthcare, ID verification systems, access control systems, and many more, all of which touch people’s daily activities. In such a case, it becomes imperative to discuss the discriminatory facets of the technology and the two sides of the coin: has CV become harmful or beneficial with its increased use in different circumstances? It is also important to consider the implications of algorithmic bias and ways to mitigate it.

In her presentation, Dr. Gebru breaks down the several factors that affect, and need to be considered in, building a more inclusive AI system that can be applied in various practical use cases. First, she laid out the potential benefits of AI in building adaptive systems that provide personalized support to individuals. But to build naturally adaptive support systems, a lot of individually identifiable information becomes known to and stored in the system, such as a person’s routine, their personality, their social interactions, and their physical profile and body specifics. If this stored information were hacked, it could have harmful effects on the individual, disrupting not only their routine but possibly leading to far-reaching consequences. Hence, it is important to be mindful of the information that is collected to build the system while still respecting the privacy of the individual.

It is equally important to be mindful of the circumstances and use cases where adaptive support is necessary. Does the environment provide enough security and trust to build personalized AI systems? What is the point of view of a programmer building a CV-enabled system for a group of people they know nothing about? Which demographics require the support, and is the programmer aware of their challenges?

Dr. Gebru then proceeded to discuss Faception and HireVue, firms built to perform personality analysis using individuals’ facial profiles. While candidates are being interviewed, these systems try to predict their possible psychological states from recordings of the interviews, sometimes without the candidates’ knowledge. Instances such as this call into question the morality and legality of such everyday applications of CV: for a computer or a robot agent these are menial tasks, but is the programmer aware of the implications of these use cases? There are a large number of stakeholders to consider in each of these uses, and only on a case-by-case basis can one decide how much responsibility for the abuse of a powerful technology lies with the developer, the project manager, the business partners, and the company selling the product. It is also important to make the people using these tools aware of the larger implications and of the service level agreements that come with using a product.

Dr. Gebru then proceeded to discuss discriminatory algorithmic biases against certain racial and ethnic profiles. Sometimes facial analysis deems an individual a criminal, owing to bias in the training datasets. Some tasks get categorized as more “feminine” or “manly” based on the training datasets. People of certain races are not recognized by these systems, or are recognized with high error rates, simply because the systems were not built with their racial profiles in mind. In some cases, wrongful criminal indictments have followed from the use of CV models built on social media profiles and surveillance footage with discriminatory characteristics. The project Our Data Bodies discusses such cases in further detail. All this raises the question of removing algorithmic bias not just by changing the training datasets of CV models, but through a broader understanding of the demographics involved and of the larger consequences of the actions of the stakeholders building the product. While big tech firms such as Google, Microsoft, IBM and others have taken steps in this direction, some only after the benchmark scores from a study by Dr. Gebru et al., it is still a long way until the social issues around Computer Vision–enabled systems are resolved.
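
As a side note from us (this sketch is ours, not from the talk): the kind of disaggregated benchmarking that exposed these error-rate gaps can be illustrated in a few lines of code. The example below is a minimal, hypothetical sketch that assumes we already have a model’s predictions, the ground-truth labels, and a demographic group tag for each sample; it reports the error rate per group, making visible disparities that a single aggregate accuracy number would hide.

```python
from collections import defaultdict

def error_rate_by_group(predictions, labels, groups):
    """Per-group error rate: aggregate accuracy can hide large gaps."""
    errors = defaultdict(int)  # wrong predictions per group
    totals = defaultdict(int)  # samples seen per group
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        if pred != label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical toy data: overall accuracy is 6/9 (about 67%), yet
# group A has a 0% error rate while group B has a 75% error rate.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1]
labels = [1, 0, 1, 0, 1, 1, 0, 1, 1]
groups = ["A", "A", "A", "B", "B", "B", "A", "B", "A"]

for group, rate in sorted(error_rate_by_group(preds, labels, groups).items()):
    print(f"group {group}: error rate {rate:.2f}")
```

Disaggregated reporting like this is only a small first step; it does not by itself fix the underlying dataset or deployment issues the talk emphasizes.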

Our takeaway from this deep and engaging talk, combined with our personal point of view, is that technology evolves with innovation. Innovation comes from keeping an open mind in an inclusive environment, where support can be provided without discrimination to those who need it, and accepted without inhibition. Today’s programmers have to be mindful of the programs they build and careful to cater to people’s needs without disrupting social routines or causing damaging effects. While the constant struggle of technology being a double-edged sword remains, we as developers need to keep changing our outlook on the world and keep an open mind toward inclusion, so that our software systems reflect those ideas as well.