Interview with Helen Edwards | Sonder Scheme
Updated: Dec 30, 2020
Abhivardhan, Baldeep Singh Gill and Kshitij Naik from the Indian Society of Artificial Intelligence and Law had a special interview session with Helen Edwards, founder of Sonder Scheme.
Abhivardhan: Please introduce yourself and your connection with AI as a field.
Helen: I am based in the US, and Sonder Scheme is a boutique company focused on the human-centric design of AI. We focus on AI ethics, and we think that getting ethics right up front is what really matters. Much of our business is built around speaking engagements, workshops and a design system for including more people and more ideas in the front end of AI design, and we have put together a process that helps people do that. Our workshops and design system are available online, and we use them to help people understand how to design human-centric AI.
Abhivardhan: Our interest in Artificial Intelligence ethics stems from the social sciences, because our focus is particularly on the legal and anthropological side. So I will start the interview now, and my first question would be: could you elaborate on Sonder Scheme, and a little on the market that is actually concerned with ethics, i.e. how technology companies in the US envision AI ethics, and what do they currently see in these situations?
Helen: I think that, particularly in the last few years, people have become much more aware of the power of the platforms and of the machine learning sitting behind them. So on one hand you have public awareness of the ethics, speed and scale of the AI behind those platforms and its impact on ordinary people, on democracy and on the media. On the other hand, you have much more awareness of the treatment of people who are subject to automated decisions through government programs. We have had many cases in the US, in Michigan and Oregon and various other states, where social welfare systems have been automated and people had no idea of the impact this posed on vulnerable communities. So if we imagine either end of the spectrum, the big platforms and the future of antitrust and democracy on one side, and how governments codify social systems on the other, it all comes down to the same thing: more and more behavioural and social problems can be encoded in AI and can cause harm before people even notice. Public awareness has caused companies to think about AI ethics. In-house ethicists have good intentions, but how you measure the impact of those ethicists is a particular point of interest today. A lot of people are focusing on how companies measure the impact of ethicists: are they able to influence decisions about products, or are they there for reputational reasons?
Baldeep: In the post-COVID-19 situation, how will the employment and entrepreneurship sectors related to Artificial Intelligence (AI) be affected in developed countries, or the D9 countries?
Helen: If we look at the core basics of AI, it learns from data. What we see with COVID-19 is a phenomenal disruption in the data, a deep discontinuity, and one should wonder what the AI is going to do with that data and what conclusions it will draw, because the COVID-19 data is outside the distribution the models were trained on. We have to look at whether there is a reduction in accuracy, and at the number of false positives and false negatives in supervised models. The first thing we have to do is relook at some of the AI models; if I were running a company with big models, I would be asking questions about changes in accuracy and the impact of the data discontinuity, and I would start measuring that impact. The other important thing is that COVID-19, in many areas, is amplifying existing effects or speeding things up. One of those things is that AI has made a lot of progress in speeding up basic science, like understanding how proteins and cells work.
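Not from the interview itself, but as an illustration of the monitoring described here (comparing accuracy and false positive/negative rates before and after a data discontinuity), a minimal Python sketch with entirely hypothetical labels and predictions:

```python
# Illustrative sketch (hypothetical data): checking a deployed classifier
# for the kind of data discontinuity described above, by comparing
# accuracy and false positive/negative rates pre- and post-disruption.

def error_rates(y_true, y_pred):
    """Return (accuracy, false_positive_rate, false_negative_rate)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return acc, fpr, fnr

# Hypothetical labelled samples from before and after the disruption.
pre_true,  pre_pred  = [1, 0, 1, 0, 1, 0], [1, 0, 1, 0, 1, 0]
post_true, post_pred = [1, 0, 1, 0, 1, 0], [1, 1, 0, 0, 0, 1]

pre_acc,  _, _ = error_rates(pre_true, pre_pred)
post_acc, post_fpr, post_fnr = error_rates(post_true, post_pred)

# A large accuracy drop signals that the model is seeing data outside
# the distribution it was trained on and needs review or retraining.
if pre_acc - post_acc > 0.1:
    print(f"accuracy fell {pre_acc:.2f} -> {post_acc:.2f}; "
          f"FPR={post_fpr:.2f}, FNR={post_fnr:.2f}")
```

In practice the thresholds and samples would come from a company's own labelled monitoring data; the 0.1 cutoff here is an arbitrary placeholder.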
I expect to see more progress in, and more support for, basic science and the contribution that AI is making to it. In the post-COVID world, it would be crazy to expect anything other than some degree of slowdown for entrepreneurs and startup funding across the board. So surviving this as an AI entrepreneur is the same as for anybody else.
I think there is another important part of this, which is what happens with employment and the speculation about more robots coming in. People are investing more in robots than in people, and there is a historical precedent: last century there was a lot of mechanisation of US agriculture because of labour shortages, and this time we are in a potential labour shortage with added concerns about health and safety. We also know that digital surveillance and tracking will become more pervasive, and it is a huge area of discussion, whether in society at large or in tracking workers at the workplace: people will actually want to know how and whether they came into contact with the virus until there is a vaccine, which we hope there will be as soon as possible. So even some of the most vocal advocates of digital privacy are talking about tolerating tracking until there is a vaccine. I think in the post-COVID world it will take time to adjust to the digital surveillance and tracking happening in each country.
Kshitij: How can human-centric AI be made better? Will human-centric AI be beneficial for developing countries?
Helen: If we go back to the core of human-centred design, it is about understanding the people you are designing for. It is a process not of making assumptions but of observing, of understanding how to build something that supports the user, and it strips away the assumptions of the designer. In human-centred AI design we take that a step further, because we start from the idea that AI is different: it creates a loop, a back and forth with the user, that alters the way preferences are formed, alters choice, and so ultimately alters human agency. Because we start with that core understanding, that AI creates these loops and is not a traditional technology, it is about understanding what happens when we let Artificial Intelligence start to learn. In traditional design, as a designer, you can focus solely on intent; you do not have to worry so much about the consequences. In AI design you cannot focus on intent alone. You have to look at how the AI works: it learns from the data that is in the world, and its role changes as it does.
The most valuable design material is not plastic or glass; it is human behaviour. When you are working on something whose behaviour is partly based on its post-design experience, as a designer you cannot step away and say, here is my intent, and it does not matter what actually happens as long as I intended a good outcome. You have to actually anticipate the consequences, and that is where human-centred AI design starts. It is about thinking through the intent and the consequences, and about power: what shifts in power occur as algorithms operate and their data changes.
So human-centred AI design is really about stepping back and saying, let us think about these things up front: accountability, inference and how privacy changes, bias, fairness and equality, all in the first part of the design, so that we can really understand what the role of the human is and what the role of the machine is. A lot of poor automation and AI decisions, the worst decisions, are made because there is poor differentiation between what a human does and what a machine does. This happens with personalisation, because we do not know how we feel about something until it actually happens. So you have to keep thinking through these processes, the different categories and groups, the data and the biases that might exist in that data, to allow yourself to design up front for the consequences of recommending to someone something they do not actually want, or of not recommending something they did want, which effectively excludes them. This happens in recruitment, because of the way biases come in either from the company doing the looking or from the platform doing the advertising, and you end up in a situation where people who would have been interested in a particular job posting never actually see it. All of these things we see happening here, and I see no reason why they would not happen in any other country; they will just happen at a bigger scale along different social lines. When we work with clients and work through consequences, a lot of people say that we cannot see what is going to happen, that these are all unintended consequences. But a lot of them are not: they are actually quite visible right at the outset, because AI amplifies and exacerbates the divisions that already exist in society.
So our idea is that human-centred design applies everywhere, but you will see different outcomes and different results simply because there are different social lines in different countries.
Abhivardhan: So I have an interesting question. When we say that we have to make a responsible and ethical AI, how, in concrete terms, can we really achieve that, and will we see an AGI or strong AI in this century?
Helen: On predicting the rise of AGI: we do not even try to predict it, because of the old saying that all forecasts are wrong but some are useful. So when it comes to forecasting AGI, I look for where other people's ideas are useful. I think there are a couple of things. First of all, go back to the point of ethical and responsible AI. You can look at it at the existential level, i.e. how we can make AI safe and not dangerous for humans, or you can look at how you actually plan for and resolve the harm that happens every day. We put 99.5% of our focus on the harm that can happen now rather than on existential harm. Ethical AI is now a process: you can Google 'ethical AI' and you will be inundated with checklists, ideas, frameworks and principles. One of the reasons we put so much effort into our tools and our online product is that so much of that is completely overwhelming; it is not necessarily practical, and it does not help you get to a tangible, concrete answer about how to make a decision that is fair. So at the baseline level, ethical AI is now something that, at certain levels, is technical and feasible. Ethical sourcing of data, for instance, is pretty well understood, and everyone should be doing it. There are also emerging standards for making AI fair. That kind of thing is relatively easy to do: you give it to your data scientists, or you put a team together to decide how you want to label data or how you want to provide feedback. Those are relatively codified things, even if it still takes a lot of discussion to get there. It is about supporting the people making those decisions, because those people are encoding their values. So I think the most difficult thing today is getting input from the right people. The most common practice is to leave a lot of these decisions at the level of the data scientists; what we try to do is really talk to the leaders.
We have got to care about these things. You have legal exposure and reputational exposure, and there are things that AI does, like proxy discrimination, that you have to be pretty aware of. So the first thing I would say is that if you have AI in your company, you have to think of it as another employee, a machine employee. I think the AGI question depends on what you actually think AGI is. If you think AGI is basically a digital human, something that acts like a human, whether it is embodied or just a master algorithm, my response as a human is: why would you want to do that? The whole point of AI is that we value it for the things that humans cannot do; it is not a complete substitution for the human. You can see places where technologists might want a complete human substitution, but that is largely theoretical. When you look at surveys, research and studies about what people want when they have an embodied AI, it is a very narrow idea of a human: someone who is there to do one particular thing, like remind them of something or keep them company in a very simple way. It is not the complexity of a fully human relationship by any means. But if you think of AGI as an intelligence that is different from and complementary to human intelligence, that learns in a self-supervised fashion and draws inferences from sources in ways that are causally relevant, then we are on our way to doing that, though I do not know when an AGI will be able to recognise context and causal relevance. I do not think people want an AGI, and I do not see any evidence that there is value in one. The more sophisticated the uses of AI become, the more attuned we become to what the cause behind a decision is, what the justification is, and where the accountability lies.
At the moment, justification, accountability and even explainability are very much things that people look to humans for. So when I think about the more existential issues around AI, I look to Alex Rutherford's work on beneficial AI. I think he has some fascinating ideas about programming the machine to be uncertain about what the human wants, so that the AI will always check in with the person in the moment rather than assume it knows. That takes us back to intentions, consequences and human agency, and to the fact that we are fundamentally unpredictable: we do not know how we are going to feel until something actually happens.
Baldeep: In relation to creating a responsible and ethical AI, I would like to know your thoughts about algorithmic bias and the Gender Shades Project. How will we solve the problem of algorithmic bias?
Helen: I have done a lot of research on this in the last few months. What it really comes down to is that humans are biased, and AI reflects the biases that humans have. The study that brought this to technologists' attention was one done at Princeton back in 2016, which analysed how bias in language arises from human bias: for example, flowers were rated as more pleasant than bees, and European-sounding names were more strongly associated with being suited to get a job interview than African-American names. These biases, and the ones you referred to in the Gender Shades project by Joy Buolamwini at MIT, were another important step in bringing into people's general consciousness that all AI will reflect some kind of bias. There is no such thing as an unbiased AI, because even if you remove bias from a technical perspective, you are still left with some form of human judgment about what bias is. But it is about stepping back and asking how you think about the problem. Part of it is a technical challenge, because there are good debiasing techniques with which you can visualise, see and remove the obvious parts of bias. For example, you can test and find that part of the reason for the bias is that the algorithm has not seen enough dark-coloured skin or cannot differentiate between genders, and that some of that is a matter of representation in the data. So collecting more data and training the AI more on darker skin led to significant improvements after she published her work: Microsoft went back, retrained, and improved the product significantly.
So we need to consider that there is historic bias and representation bias in the data. There are many examples, like Amazon's recruitment algorithm, which they ended up scrapping because women were being recommended at a lower rate, and every year we see many more examples of racial and gender bias. Even a statistically neutral, technically debiased algorithm, if it is used inappropriately, can perpetuate discriminatory results and unclear outcomes. So it is a complex techno-social system, and I found four levels of fixes during my research. First are the technical debiasing tools, and everyone should be using those. Second is human-centred design, which makes you much more aware of bias up front: it makes you consciously consider how you are going to offset historical or representational bias in the data, and consciously decide up front which mathematical definition of fairness you are going to use, such as equalised odds. Third are legal and regulatory solutions, and there is a huge amount happening in that area. Some regulatory protections against discrimination exist now; in the US we have employment law and various other laws that protect against discrimination.
This is a fascinating area, because the closer people look, the more they realise that AI presents a unique challenge, whether it is proxy discrimination or the affinity profiling that people have referred to as friction-free racism, because you can hide behind affinities that are not true. Then there is even more interesting research coming out of the UK about how, from a legal perspective, we rely so much on our intuition to pick up something that feels discriminatory, while AI works at a totally different, unintuitive level.
So a lot of the ways we detected discrimination in the past are not necessarily reliable. There is certainly a huge amount happening at the grassroots, with a lot of people stepping up and legal challenges coming into play; it is slow, but it is working. The fourth level is the level of society: what are entrepreneurs doing to reveal bias, fight bias and build more ethical AI up front, and what are communities doing to make AI fairer? There has been a huge reaction to the use of facial recognition technologies in cities and schools, and the pushback on that is definitely there, but it is very community-driven.
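Not from the interview itself, but as an illustration of deciding up front on a mathematical definition of fairness such as equalised odds, a minimal Python sketch on entirely hypothetical screening data, comparing true-positive and false-positive rates across two groups:

```python
# Illustrative sketch (hypothetical data): checking the equalised-odds
# criterion, i.e. whether true-positive and false-positive rates are
# similar across demographic groups.

def group_rates(records, group):
    """TPR and FPR for one group; each record is (group, y_true, y_pred)."""
    rows = [(t, p) for g, t, p in records if g == group]
    tp = sum(1 for t, p in rows if t == 1 and p == 1)
    fn = sum(1 for t, p in rows if t == 1 and p == 0)
    fp = sum(1 for t, p in rows if t == 0 and p == 1)
    tn = sum(1 for t, p in rows if t == 0 and p == 0)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

# Hypothetical screening decisions: (group, qualified?, shortlisted?)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 1),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 0), ("B", 0, 0),
]

tpr_a, fpr_a = group_rates(records, "A")
tpr_b, fpr_b = group_rates(records, "B")

# Equalised odds asks for both gaps to be (near) zero; a large gap is a
# signal to revisit the data or the model before deployment.
print(f"TPR gap: {abs(tpr_a - tpr_b):.2f}, FPR gap: {abs(fpr_a - fpr_b):.2f}")
```

Equalised odds is only one of several competing fairness definitions (demographic parity and predictive parity are others), which is exactly why the choice has to be made consciously and up front.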
Kshitij: We understand that AI bias can enable discrimination, and can violate the constitutional principle of equality. Is there any way to incorporate AI’s dynamic ambit into a legal structure?
Helen: There are a lot of interesting applications for AI in the legal space when you look at how rapidly AI works: how incredibly fast it can assess cases, search patents, forecast behaviour and figure out how judges are likely to rule. A few years ago there was quite a flurry of research in that space, and it was startling how much progress could be made by applying basic keyword searches that could go through huge amounts of documents, helping humans with due diligence and predicting legal outcomes. What I saw when I was doing the Quartz work was how you forecast human behaviour and what you do with that outcome. One of the researchers I talked to had flipped the whole idea upside down. They reasoned that if they were not good at forecasting the decision itself, they should focus on the person whose decision is closest to the data and who has the most say in the decision, and they tested the hypothesis that the AI would achieve higher accuracy at forecasting the judge's decision. Interestingly, that is what they found. I think that leaves us with a question across the justice system, and we even use it in design: if you have the power, then your decisions are the ones that can be looked at from the outside by an AI. That can count as performance management: in performance management, companies take the data of the people making the decisions, the managers, and use it to look for bias in decision-making, for example.
As we put more AI into these systems, and we keep humans thinking about what it is telling us, we are able to be quite innovative about the way we use the data. We do not just look at the groups the algorithms are acting on; we look at the groups that are using the AI, at what the researchers call study maps, and so we start looking at patterns of human behaviour in the opposite way from what was originally intended, and that is very interesting.
Abhivardhan: So we know that in this field there are careers for AI scientists, developers, programmers and many more people on the technical side, but most of them deal only with the technology. When we talk about ethics, we understand that there are AI ethicists. What qualifications do tech and consulting companies look for in an AI ethicist?
Helen: What we see with AI ethicists is a real mix. There are philosophers, and a lot of people from academia have come in, and I find talking to these people really interesting; I was originally an engineer, so talking about philosophy is interesting to me. I spoke with one philosopher at Princeton, and for companies that can afford it, it is really terrific to introduce that sort of person into your leadership team. The most common way I see ethics arise is through the legal channel, the corporate counsel, and the other channel is brand image and how the company's AI is perceived. But now I see a new sort of ethicist being introduced, people who are more generalist: consultants, problem solvers, analysts, people with humanities backgrounds, political scientists. Those general, process-based thinkers are good at absorbing the literature, taking a senior role and working with people from across the organisation. They are good at getting people together and making people realise what they need to care about; they actually get people doing different things, not completely changing the process but adding to it and extending it, bringing more diversity of thinking and introducing more decision points. And in the end, I think the most important thing the ethicists who are making the biggest difference can do is stand in front of the leadership and demonstrate that, through their process, they changed the course of a product for a group of users and added value. In other words, they are able to change the course, not just be there as reputation enhancement.
Baldeep: We are witnessing a boom in AI startups in India. What potential challenges do you see, based on your experience in the US, and what ethical rules would you like to recommend?