Interview with Helen Edwards | Sonder Scheme
Updated: Dec 30, 2020
Abhivardhan, Baldeep Singh Gill and Kshitij Naik from the Indian Society of Artificial Intelligence and Law had a special interview session with Helen Edwards, founder of Sonder Scheme.
Abhivardhan: Please introduce yourself and your connection with AI as a field.
Helen: I am based in the US, and Sonder Scheme is a boutique company focused on the human-centric design of AI. We focus on AI ethics, and we think that getting ethics in up front is what really matters. Much of our business is built around speaking, workshops, and a design system for including more people and more ideas in the front part of AI design, together with a process we have put together to help people do that. Our workshops and design system are available online, and we use them to help people understand how to design human-centric AI.
Abhivardhan: Our interest in artificial intelligence ethics stems from the social sciences, since our focus is particularly on the legal and anthropological side. So I’ll start the interview now, and my first question would be: how would you like to elaborate on Sonder Scheme and on the market that is actually concerned with ethics, that is, how technology companies in the US envision AI ethics and what they currently see in these situations?
Helen: I think that, particularly in the last few years, people have become much more aware of the power of the platforms and of the machine learning sitting behind them. So on one hand you have public awareness of ethics, of the speed and scale of the AI behind those platforms, and of its impact on consumers, on democracy and on the media. On the other hand, you have much more awareness of the treatment of people who are caught up in automated decisions through government programs. We have had many cases in the US, in Michigan, Oregon and various other states, where social welfare systems were automated and people had no idea of the impact this posed on vulnerable communities. So imagine either end of that spectrum: how the big platforms shape the future of antitrust and the future of democracy, and how things work when governments codify social systems. It all comes down to the same thing: more and more behavioural and social problems can be encoded in AI and can cause harm before people even notice. Public awareness has pushed companies to think about AI ethics. Employed ethicists have good intentions, but how you measure the impact of those ethicists is a particular point of interest today. A lot of people are focusing on companies and on how they measure that impact: are the ethicists able to make decisions about the products, or are they there for reputational reasons?
Baldeep: In the post-COVID-19 situation, how will the employment and entrepreneurship sectors related to Artificial Intelligence (AI) be affected in developed countries, or the D9 countries?
Helen: If we look at the core basics of AI, it learns from data. What we see with COVID-19 is a phenomenal disruption in the data, a deep discontinuity, and one should wonder what the AI is going to do with that data and what conclusions it will draw, because the COVID-19 data falls outside what the models were trained on. We have to look at whether there is a reduction in accuracy, or a change in the number of false positives and false negatives, in supervised models. The first thing to do is re-examine some of the AI models; if I were running a company with big models, I would be asking questions about changes in accuracy and the impact of the data discontinuity, and I would start measuring that impact. The other important thing is that COVID-19, in many places, is amplifying existing effects or speeding things up. One of those is that AI has made a lot of progress in accelerating basic science, like understanding how proteins and cells work.
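To make the kind of monitoring Helen describes concrete, here is a minimal Python sketch of comparing a model's accuracy, false-positive rate and false-negative rate between a pre-pandemic baseline window and a recent window, assuming labelled outcomes eventually arrive. The function names, the toy data and the 5% tolerance are illustrative assumptions, not anything from the interview.

```python
import numpy as np

def error_profile(y_true, y_pred):
    """Accuracy, false-positive rate and false-negative rate of a binary model."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return {
        "accuracy": (y_true == y_pred).mean(),
        "fpr": ((y_pred == 1) & (y_true == 0)).sum() / max((y_true == 0).sum(), 1),
        "fnr": ((y_pred == 0) & (y_true == 1)).sum() / max((y_true == 1).sum(), 1),
    }

def drift_report(baseline, recent, tolerance=0.05):
    """Flag any metric that moved more than `tolerance` away from the baseline."""
    return {m: {"baseline": baseline[m], "recent": recent[m],
                "drifted": abs(recent[m] - baseline[m]) > tolerance}
            for m in baseline}

# Illustrative toy windows: predictions before and after the discontinuity.
pre = error_profile(y_true=[1, 0, 1, 0, 1, 0], y_pred=[1, 0, 1, 0, 1, 0])
post = error_profile(y_true=[1, 0, 1, 0, 1, 0], y_pred=[0, 0, 1, 1, 1, 0])
print(drift_report(pre, post))
```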
I expect to see more progress and more support for basic science and the contribution that AI is making there. In the post-COVID world, it would be crazy to expect anything other than some degree of slowdown for entrepreneurs and startup funding across the board. So surviving this, for an AI entrepreneur, is the same as for anybody else.
I think there is another important part of this around what happens with employment, and the speculation about more robots coming in. People are investing more in robots rather than in people, and there is historical precedent: last century there was a lot of mechanisation of US agriculture because of labour shortages, and this time we are in a potential labour shortage too, with added concerns about health and safety. We also know that digital surveillance and tracking will become more pervasive. It is a huge area of discussion, whether in society at large or in tracking workers at the workplace: people will actually want to know how and whether they came in contact with the virus, at least until there is a vaccine, which we hope there will be as soon as possible. Even some of the most vocal advocates of privacy and digital privacy would rather the answer be a vaccine than digital surveillance. I think in the post-COVID world it will take time for each country to adjust to the digital surveillance and tracking happening there.
Kshitij: How can human-centric AI be made better? Will human-centric AI be beneficial for developing countries?
Helen: If we go back to the core of human-centred design, it is about understanding the person and what is going on around them. It is a process not of making assumptions but of observing, of understanding how to build something that supports the user, and it strips away the assumptions of the designer. In human-centred AI design we take a step further, because we start out with the idea that AI is different: it creates a loop, a back and forth with the user, that alters the way preferences are formed and alters choice, and so ultimately alters human agency. Because we start with the core understanding that AI creates these loops and is not a traditional technology, it is about understanding what happens when we let artificial intelligence start learning. In traditional design, as a designer, you can be focused solely on intent; you do not have to worry so much about the consequences. In AI design you cannot focus on intent alone. You have to look at how AI works, which is that it learns from the data that is in the world, and at how its role changes.
The most valuable material is not plastic or glass but human behaviour, and when you are working on something whose behaviour is partly based on its post-design experience, as a designer you cannot step away and say: here is my intent, and it does not matter what actually happens as long as I intend a good outcome. You have to actually anticipate the consequences, and that is where human-centred AI design starts. It is about understanding and thinking through the intent and the consequences, and about power and how power shifts as algorithms operate and their data changes.
So human-centred AI design is really about stepping back and saying: let us think about these things upfront, thinking about accountability, about inferences and how privacy changes, and thinking about bias, fairness and equality in the first part of the design, so that we can really understand what is the role of the human and what is the role of the machine. A lot of poor automation decisions, poor AI decisions, the worst decisions, are made because there is poor differentiation between what a human does and what a machine does. This happens with personalisation, because we do not know how we feel about something until it actually happens. So you have to keep thinking through all of these processes, the different categories and groups, the data and the biases that might exist in those groups of data, to allow yourself to design upfront for the consequences of recommending to someone, as an individual, something they do not actually want, or of not recommending something they did want, which in effect excludes them.

Such things happen in recruitment because of the way biases come in, either from the company doing the hiring or from the platform doing the advertising, so that people who would have been interested in seeing a particular job posting never actually end up seeing it. All of these things we see happening here, and I see no reason why they would not happen in any other country; they will happen at a bigger scale. When we work with clients on consequences, a lot of people say they cannot see what is going to happen, that these are all unintended consequences. But a lot of them are not; they are actually quite visible right at the outset, because AI simply amplifies and exacerbates the divisions in society that already exist. So our idea is that human-centred design applies everywhere, but you can see different outcomes and different results simply because there are different social fault lines in different countries.
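As one illustration of the "mathematical descriptions of fairness" Helen says should be decided upfront, here is a minimal Python sketch of an equalized-odds check: whether true-positive and false-positive rates match across groups. The group labels and toy data are hypothetical; in practice a team decides upfront how large a gap it will tolerate, since a zero gap is rarely achievable.

```python
import numpy as np

def rates_by_group(y_true, y_pred, group):
    """Per-group true-positive and false-positive rates for a binary classifier."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in np.unique(group):
        m = group == g
        rates[g] = {
            "tpr": ((y_pred[m] == 1) & (y_true[m] == 1)).sum() / max((y_true[m] == 1).sum(), 1),
            "fpr": ((y_pred[m] == 1) & (y_true[m] == 0)).sum() / max((y_true[m] == 0).sum(), 1),
        }
    return rates

def equalized_odds_gap(rates):
    """Largest between-group difference in TPR and in FPR; zero means equalized odds."""
    tprs = [r["tpr"] for r in rates.values()]
    fprs = [r["fpr"] for r in rates.values()]
    return {"tpr_gap": max(tprs) - min(tprs), "fpr_gap": max(fprs) - min(fprs)}

# Hypothetical screening decisions for two demographic groups.
rates = rates_by_group(
    y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[1, 0, 0, 1, 0, 1, 1, 0],
    group=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(equalized_odds_gap(rates))
```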
Abhivardhan: So I have an interesting question. When we say that we have to make responsible and ethical AI, how, in limited terms, can we really achieve that, and will we see an AGI, or strong AI, in this century?
Helen: On predicting the rise of AGI: we do not even try to predict it, because of the old saying that all forecasts are wrong but some are useful. So when it comes to forecasting AGI, I look for where other people's ideas on this are useful. I think there are a couple of things. First of all, go back to the point of ethical and responsible AI. You can look at it at the existential level, that is, how we can make AI safe and not dangerous for humans, or you can look at how you actually plan for and resolve the harm that happens every day. We put 99.5% of our focus on the harm that can happen now rather than on existential harm. Ethical AI is now a process: you can Google "ethical AI" and you will be inundated with checklists and ideas and frameworks and principles. One of the reasons we put so much effort into our tools and our online product is that so much of that is completely overwhelming. It is not necessarily practical, and it does not help you get to a tangible, concrete answer about how to make a decision that is fair. So at the baseline level, ethical AI is now something that, at certain levels, is technical and feasible, and you can make use of systems for it. Ethical sourcing of data is all pretty well understood, and everyone should be doing it. There are also emerging standards for making fair AI. That kind of thing is relatively easy to do: you give it to your data scientists, or you put a team together who make the decisions on how you want to label data or how you want to provide feedback. Those are relatively codified things, even if it still takes a lot of discussion to get there. It is about supporting the people making those decisions, because those people are encoding their values. So I think today the most difficult thing is getting consumable data from the right people. The most common practice is to leave a lot of these things at the level of the data scientists. What we try to do is really talk to the leaders: you have got to care about these things. You have legal exposure, you have reputational exposure, and there are things AI does, like proxy discrimination, that you have to be pretty aware of. So the first thing I would say is really understanding that if you have AI in the company, you have to think about it as another employee, a machine employee. I think the AGI question depends on what you actually think AGI is. If you think AGI is basically a digital human that acts like a human, whether it is embodied or just a master algorithm, my response as a human is: why would you want to do that? The whole point of AI is that we value it for the things that humans cannot do; it is not a complete substitute for the human. You can see places where technologists might want a complete human substitution, but that is largely theoretical. When you look at surveys, research and studies about what people want when they have an embodied AI, it is a very narrow idea of a human: someone who is there to do one particular thing, like remind them of something or keep them company in a very simple way. It is nothing like the complexity of a fully human relationship. But perhaps you think AGI is an intelligence of a different form, complementary to human intelligence, that learns in a self-supervised fashion and draws inferences from sources in ways that are causally relevant.
To complete the thought: we are on our way to doing that, though I do not know when an AGI that can recognise context and have some causal relevance will appear. I do not think people want an AGI; I do not see any evidence that there is value in one. The more sophisticated the uses of AI become, the more attuned we become to what is the cause behind this, what is the justification, what is the accountability. At the moment, justification and accountability, and even explainability, are very much based on the justification and the structures of accountability that people look for in humans. When I think about the more existential issues around AI, I look to Alex Rutherford's work on beneficial AI. I think he has some fascinating ideas about programming the machine to be uncertain about what the human wants, so that the AI will always check in with the person in the moment rather than assume it knows. That takes us back to intentions, consequences and human agency, and the fact that we are fundamentally unpredictable: we do not know how we are going to feel until it actually happens.
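The check-in behaviour Helen describes can be sketched very simply: an assistant acts only when its estimate of what the user wants is confident enough, and otherwise defers and asks. The 0.8 threshold and the toy preference model below are illustrative assumptions, not a description of any published system.

```python
def act_or_ask(options, preference_probs, confidence_threshold=0.8):
    """Act on the option the model believes the user wants, or defer and ask.

    preference_probs: estimated probability, per option, that the user wants it.
    """
    best = max(options, key=lambda o: preference_probs[o])
    if preference_probs[best] >= confidence_threshold:
        return f"acting: {best}"
    # Uncertain about what the human wants: check in instead of acting.
    return f"asking the user to choose among {options}"

print(act_or_ask(["remind", "stay silent"], {"remind": 0.95, "stay silent": 0.05}))
print(act_or_ask(["remind", "stay silent"], {"remind": 0.55, "stay silent": 0.45}))
```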
Baldeep: In relation to creating responsible and ethical AI, I would like to know your thoughts about algorithmic bias and the Gender Shades project. How will we solve the problem of algorithmic bias?
Helen: I have done a lot of research on this in the last few months. What it really comes down to is that humans are biased and AI reflects the bias that humans have. The study that brought this to technologists' attention was one done at Princeton back in 2016, which analysed how bias in language arises from human bias: for example, flowers were more strongly associated with pleasantness than insects, and European-sounding names were more strongly associated with pleasantness than African-American names, echoing the finding that such names are more likely to get a job interview. These studies, and the one you referred to, the Gender Shades project by Joy Buolamwini at MIT, made another important statement that brought into people's general consciousness that all AI will reflect some kind of bias. There is no such thing as an unbiased AI, because in the end, even if you remove bias from a technical perspective, you are still left with some form of human judgment about what counts as bias. So it is about stepping back and asking how you think about the problem. It is not just a technical challenge, although there are good debiasing techniques with which you can visualise, see and remove obvious sources of bias. For example, you can probe the system and understand part of the reason for the bias: the algorithm has not seen enough dark-coloured skin, or cannot differentiate between genders, and some of that is a matter of representation in the data. So you collect more data and train the algorithm more on darker skin; significant improvements were achieved after Buolamwini published her work, when Microsoft went back, retrained and improved its product significantly. We also need to consider that there is historic bias and representation bias in the data. There are many examples, like Amazon's recruitment algorithm, which the company ended up scrapping because it rated women lower, and every year we see many more examples of racial bias and gender bias. Even a statistically neutral, technically debiased algorithm, if used inappropriately, can perpetuate discriminatory results and unclear outcomes. So it is a complex techno-social system, and I think there are four levels of fixes that I found during my research. First are the technical debiasing tools, and everyone should be using those. Second is human-centred design, which makes you much more aware of bias up front: it makes you consciously consider how you are going to offset historical or representational bias in the data, and consciously consider issues of fairness, making upfront decisions such as whether you are going to require equalized odds and which mathematical description of fairness you will use. Third, there are legal and regulatory solutions, and there is a huge amount happening in that area; some regulatory protections against discrimination exist already. In the US we have employment law and various other laws that protect against discrimination.
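The Princeton result Helen mentions came from measuring associations in word embeddings (the Word Embedding Association Test). Below is a minimal sketch of that kind of association score, with random toy vectors standing in for trained embeddings such as word2vec or GloVe; with real embeddings, "flower" scores measurably more pleasant than "insect", and name associations show the hiring bias she describes.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to unpleasant ones."""
    return (np.mean([cosine(word_vec, p) for p in pleasant])
            - np.mean([cosine(word_vec, u) for u in unpleasant]))

# Random 8-d toy vectors; a real test loads pretrained embeddings instead.
rng = np.random.default_rng(0)
vec = {w: rng.normal(size=8)
       for w in ["flower", "insect", "joy", "love", "pain", "hate"]}

pleasant = [vec["joy"], vec["love"]]
unpleasant = [vec["pain"], vec["hate"]]
for word in ["flower", "insect"]:
    print(word, round(association(vec[word], pleasant, unpleasant), 3))
```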
This is a fascinating area, because the closer people look, the more they realise that AI presents unique challenges, whether it is proxy discrimination or affinity profiling, which people have called friction-free racism because you can hide behind affinities that are nominally not about race. Then you have even more interesting research coming out of the UK, which talks about how, from a legal perspective, we rely so much on intuition to pick up something that feels discriminatory, but AI works at a totally different, unintuitive level.
So a lot of the ways we detected discrimination in the past are not necessarily reliable. There is certainly a huge amount happening at the grassroots, with a lot of people stepping up and legal challenges coming into play; it is pretty slow, but it is working. The fourth level is the level of society: what are entrepreneurs doing, are they coming up with ways to reveal bias, fight bias, expose bias and build more ethical AI upfront, and what are communities doing to make AI fairer? There has been a huge reaction against using facial recognition technologies in cities and in schools, and the pushback on that is definitely there, but it is very community-driven.
Kshitij: We understand that AI bias can enable discrimination and can violate the constitutional principle of equality. Is there any way to incorporate AI’s dynamic ambit into a legal structure?
Helen: There are a lot of interesting applications for AI in the legal space when you look at how rapidly AI works: how incredibly fast it can assess cases, search patents, forecast behaviour and figure out how judges are likely to rule. A few years ago there was quite a flurry of research in that space, and it was startling how much progress could be made by applying basic keyword searches; these searches could go through huge amounts of documents to help humans in due diligence and to predict legal outcomes. What I saw when I was doing the Quartz work was the question of how you forecast human behaviour and what you do with that forecast. One of the researchers I talked to had flipped the whole idea upside down. They looked at it in a different way: if the models were not good at forecasting the parties' outcomes, they should focus on the person whose decisions are closest to the data and who has the most say in the decision, the judge. They tested the hypothesis that the AI would achieve higher accuracy forecasting the judge's decision, and interestingly that is what they found. I think that leaves us with a question across the justice system, one we even use in design: if you have the power, then it is your decisions that can be looked at from the outside by an AI. That can count as performance management: in performance management, companies take the data of the people making the decisions, the managers, apply it to the employees, and use it to look for bias in decision-making, for example. As we put more AI into these systems and keep humans thinking about what it is telling us, we are able to be quite innovative about the way we use the data. We do not just look at the groups the algorithms are acting on; we look at the groups that are using the AI, at what the researchers call study maps, and we start looking at patterns of human behaviour in the opposite way from what was originally intended, and that is very interesting.
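The document-level prediction Helen describes is often prototyped as a plain text classifier. Here is a minimal scikit-learn sketch of that approach, under the assumption that case texts and outcomes are available; the four toy summaries are invented for illustration, and a real study would use thousands of documents with held-out evaluation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy case summaries with outcomes (1 = ruled for the claimant).
texts = [
    "claimant cites strong precedent and a clear contractual breach",
    "respondent shows a procedural defect and missing evidence",
    "clear breach of contract with documented damages",
    "claim barred by limitation period, evidence weak",
]
outcomes = [1, 0, 1, 0]

# Bag-of-words features plus a linear model: the simple keyword-driven
# approach behind the early legal-prediction results Helen mentions.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, outcomes)
print(model.predict(["documented breach and strong precedent for the claimant"]))
```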
Abhivardhan: So we know that in this field there are careers for AI scientists, developers, programmers and many more people on the technical side, but most of them deal only with the technology. When we talk about ethics, we understand that there are AI ethicists. What qualifications do tech and consulting companies look for in an AI ethicist?
Helen: What we see with AI ethicists is a real mix. There are philosophers, and a lot of people have come in from academia, and I find talking to them really interesting; I was originally an engineer, so talking about philosophy is interesting to me. I spoke with one philosopher at Princeton, and for companies that can afford it, it is really terrific to introduce that sort of person into the leadership team. The most common way I see ethics rise is through the legal channel, the corporate counsel; the other channel is the brand image and how the company's AI is perceived. But now I see a new sort of ethicist being introduced, more generalist: people who work as consultants or problem solvers in general, analysts, people with humanities backgrounds, corporate consultants, political scientists. These general, process-based thinkers are good at studying the literature, taking a senior role and working with people from across the organisation. They are good at getting people together and making people realise what they need to care about. They actually get people doing different things: not completely changing the process, but adding to it and extending it, bringing more diversity so that there is a different level of thinking, and introducing more decision points. And at the end of it, I think the most important thing the ethicists who are making the biggest difference are able to do is stand in front of the leadership and demonstrate that a decision they made, or a decision they brought about through their process, changed the course of a product for a group of users and added value. In other words, they are able to change the course of things, and not just be there as reputation enhancement.
Baldeep: We are witnessing a boom in AI startups in India. What potential challenges do you see, based on your experience in the US, and what ethical rules would you recommend?
Helen: I think the first thing is having ethics right up front, as part of a human-centric design process. You have got to make sure that you have a diverse group of people at the table. Do not just start with the data; go into it with the idea that you have to optimise for more than one thing, and be prepared to understand, to measure, and to have a good debate about how to choose what to optimise and how to optimise it.
I think the thing that is emerging in the US, although the pandemic has slowed things down, is that before it started, people were beginning to build some pretty clear ideas about what kind of regulatory systems might need to be built to ensure that AI is equitable and that people's claims about the fairness of an AI can be validated. It is pretty interesting to look at the work of Markel Klein, a computer scientist who has spent much of his research on the use of AI in financial markets.
He talks a lot about taking the self-regulatory structures of the financial markets and applying them to regulating AI, so that you can put real-time monitoring in place without having to monitor absolutely everything, and without even needing to understand the model or the data. When you look at some of the things he talks about, these self-regulatory mechanisms that already exist in financial markets, you can really see how they would apply to AI as a whole. So part of the answer to your question is: how do Indian entrepreneurs, Indian regulators and Indian tech companies get hold of that kind of thinking and start thinking about the kinds of designs that could be built with that idea?
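The market-style, model-agnostic monitoring Helen refers to can be sketched as a watcher that sees only decisions, never model internals, and raises an alert when outcomes diverge across groups. The 20% gap threshold and the minimum sample count below are illustrative assumptions, not anything prescribed in the interview or in Klein's work.

```python
from collections import defaultdict

class OutcomeMonitor:
    """Model-agnostic monitor: observes decisions, not the model or its data.

    Alerts when approval rates across groups diverge beyond `max_gap`,
    in the spirit of real-time market surveillance applied to AI.
    """

    def __init__(self, max_gap=0.2, min_samples=50):
        self.counts = defaultdict(lambda: {"approved": 0, "total": 0})
        self.max_gap, self.min_samples = max_gap, min_samples

    def record(self, group, approved):
        c = self.counts[group]
        c["total"] += 1
        c["approved"] += int(approved)

    def check(self):
        rates = {g: c["approved"] / c["total"]
                 for g, c in self.counts.items() if c["total"] >= self.min_samples}
        if len(rates) < 2:
            return None  # not enough evidence yet
        gap = max(rates.values()) - min(rates.values())
        return {"rates": rates, "gap": gap, "alert": gap > self.max_gap}

# Illustrative stream of (group, decision) pairs feeding the monitor.
monitor = OutcomeMonitor(max_gap=0.2, min_samples=2)
for group, approved in [("a", True), ("a", True), ("b", False), ("b", True)]:
    monitor.record(group, approved)
print(monitor.check())
```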
Kshitij: Can India and the US embrace a private-level AI partnership in some areas?
Helen: I think there are going to be partnerships, because some of the core things that matter are already in place and there is a huge business boom. India is a democracy, so we share many values, and we are always looking to find talent and to solve problems. So I think there is a lot of opportunity for partnerships. I am not sure exactly where; you probably have a better idea of the core opportunities that are available, but I think there is a good foundation for discovering them.
Observations by Research Interns
Manohar Samal
Firstly, I am delighted that the institution took up this interview. I was extremely pleased to see a variety of discussions taking place on aspects like AI in post-pandemic recovery, the effects of AI-induced surveillance, the consequences of applying artificial intelligence in different nations, the inclusion of accountability in AI design, the significance of unbiased AI, the theory of agency in predictions by AI, the regulatory mechanisms for AI, its effect on employment, and the dissection of AI behaviour by scientists and companies to create ethical and accountable AI. This interview reflected on laudable and intriguing research problems in AI ethics. I strongly agree with Helen's statement that AI amplifies and finds the social divisions that already exist, and that the consequences of such use differ because of the varied social fault lines among countries. This paramountly shows that a novel approach, keeping in mind the ground realities and unique social structures of India, is, in my humble opinion, the only way forward for formulating India's model of AI ethics and law research.
Mridutpal Bhattacharya
I second her answer as to whether bias in AI algorithms can be fully removed, but I have a question about the possibility of an AI that is self-aware enough to draw up its own algorithm after a basic initialisation with a predetermined one. In that case, the bias could be cut out without humans having to compromise on the quality of their work, for humans believe what they believe, and thus follow biases that cannot be altered without altering their personality and, in turn, their work efficiency.
Ananya Saraogi
Taking the objective of Sonder Scheme into consideration, the questions raised stuck broadly to the area of AI ethics and law. The areas of concern were raised in the form of questions ranging from the humanitarian aspects of AI to its sustenance in the future, and the status of AI in the present COVID-19 situation was also dealt with. The whole interview focused on the viewpoint of people dealing with AI in the US; for example, when the question of AGI was raised, Helen's first question back was what the interviewers' understanding of AGI was. While the COVID-19 point was being discussed, another query could have been raised: with a great fall in the economy, would investment in AI be affected? Or would investment grow at a greater rate, given the uses of AI, especially for the growth of GDP?
The Indian Learning, e-ISSN: 2582-5631, Volume 1, Issue 1, July 31, 2020.