
Question: How did you get started with the Ethical AI Database (EAIDB)?

It was interesting. My personal motivation came from my undergraduate work at the University of Texas at Austin. I was researching algorithmic bias and gave a TEDx Talk on the subject, which involved a ton of research. The more I researched fairness and algorithmic bias, the more I realized that this is a really scary thing with some really dangerous consequences.

Fast forward to May of this year, when I started to notice that all AI companies were being lumped together regardless of their approach. Some companies were genuinely trying to solve the current problems we have in AI and tech around ethics, transparency, and trust.

I felt that distinction needed to be made, as no such resource existed at the time. I decided it might be a good idea to create a database to highlight the companies that are doing it right.

Question: What has been the receptivity to your database?

People are definitely responding to it in a positive way. Not only do I get about 30 to 40 new submissions to the database every quarter, but I've also gained a ton of LinkedIn followers, which is always nice.

I've received a lot of feedback from people saying that they use the database and see the value of bringing attention to companies doing it right. I've also heard from investors who find it a useful and effective tool for identifying companies that adhere to ethical principles.

Because it is a relatively new market, I’m always getting positive feedback for providing visibility and transparency in an area that is not covered by other outlets.  

Question: The database has been growing quarter over quarter. What have you learned, and what are you doing differently?

The categories are fluid. I have five different categories, but if a company doesn't quite fit into one of them, I recalibrate the categories. I release a quarterly report about the new companies I've added each quarter and what is happening with them, such as new investments.

The database itself is pretty straightforward. I mean, it’s not fancy, it’s just a way to see everything that’s going on. I’m working on some cool stuff on the side, but that’s the main product. 

Question: Do you think more companies are starting to adopt responsible AI, responsible tech? If not, what do you think is holding them back?

I have found that corporate entities and institutions are not very proactive when it comes to responsible AI. It really depends on where you are located. In the US specifically, people don't have a ton of incentive to create responsible tech. The companies that are tackling these issues are the ones in my database. What holds companies back comes down to where they are located and their target markets.

The biggest problem, the biggest challenge in this entire space, is demand for responsible technology. Right now, the US market does not really encourage or provide incentives for responsible tech. That's changing; there is some talk of legislation in the US. In Europe and the EU, there's a ton of legislation around regulating technology and data, which is starting to move the needle a bit. Right now, what is holding responsible tech back is the fragmentation of global markets.

Question: You just referenced legislation, so let's talk about Biden's Blueprint for AI and GDPR in Europe. Do you think these will speed up market adoption of responsible tech?

I think it depends. In the United States specifically, legislation and policy are what will really move the needle, because we have AI maturity here. Every company is aware of what its AI systems are doing. They're pretty advanced. The data scientists are aware of what's going on, but again, the problem is that there's no incentive. Unless there is one, American companies just don't feel the need to put in the effort to actually use responsible tech or adapt their old systems to fix what's wrong.

In the EU, there are a lot of policies. There's a general consensus that we care about ethical tech, but the problem in the EU is that companies are still ramping up their AI efforts. They're not as mature as US companies, so they don't quite understand their pain points yet with regard to responsible tech. What it means to be ethical is a question that is still under development in the EU. What will move the needle is time. As companies start using AI more effectively and start asking these questions, you will see legislation to back that up, and you will see policies that push people to care about proper usage. Seeing how that plays out is really fascinating.

Question: I just read a McKinsey report that said consumers definitely want digital trust and are starting to leave brands that aren't transparent about their data practices. What do you see from the consumer side? Can consumers drive change?

That's a good point. There is a push from consumers. In general, big tech has been hounded time and time again for not being transparent. The more consumers demand trust, the more the need for transparency and ethical AI and tech will grow, particularly as we see more and more incidents of big tech violating consumer digital trust.

It's definitely a factor. If you are a smaller company that doesn't interact with millions of consumers, then the driving force is going to be policy and other incentives. But if you're a big tech company, if you're a social media network, you're at the mercy of your customers, and if your customers want transparency, then that's what you're going to have to provide.

Question: What needs to happen at the academic level?

In my entire time as an undergraduate in engineering, not once was I given an introduction to ethics, nothing like that, despite being in such a technical discipline. That is changing: introductory ethics courses are now starting to be required.

The problem is that students tend to echo the corporate mindset: performance is everything, and ethics kind of goes out the window. Most academics in AI are not really concerned with how to make these systems more fair and transparent. They are concerned with how to make the systems better from a performance point of view. I think that is fine, but I also think it should be mandatory for any data scientist to have exposure to and an understanding of ethics, so that they have an idea of what can go wrong. That is important. However, ethics is still not treated as a primary requirement, even at mature tech companies. As a data science major, I can say that I still haven't taken an ethics course.

To get that experience, you have to seek it out. I'm abnormal for a data scientist in that I care about ethics; I'm not the average student. It's hard to get passionate about ethics unless you're already passionate about it, yet it really is a critical discipline.

Question: Do you think there are any organizations out there that can also drive change? I ask because I just interviewed David Ryan Polgar, the founder of All Tech is Human, whose organization's mission is to have people from across disciplines create a dialogue around responsible tech. The more people understand and advocate for it, the better. What are your thoughts on this?

I think he does some great work. These organizations play a pivotal role in creating a community of people who care and want to drive change. I think that's a big deal, especially when you consider that AI is Western-centric. The people who are governing AI principles and leading the industry all come from Western, developed countries. The people who are voicing concerns about ethics and solving ethics problems are all coming from the same region. We are still lacking international input.

Right now, there's no global community coming together. There are no voices from South America, no voices from Africa. I do see that Western organizations are trying to drive inclusion. They are trying to get communities from all over the world to weigh in on the critical issues relating to ethics. Ethics driven by only one part of the world is not truly inclusive or ethical.

Question: What would you like to say in closing?

The solution is about creating awareness. The more people are aware, the more they will care. If you just pay attention to the downsides of ML and see that there are systems that clearly discriminate against different races, who would not want to change the system?

As a data scientist, you need to be aware of these things. Educate yourself; do what you need to do. Take ethics classes in college, even though they're not part of the core requirements. Take the class, do the research. Be aware of what can go wrong. Don't put ML on a pedestal. Don't put AI on a pedestal. Understand that these things are already broken and will only get worse unless people actively try to fix them.

 

____________________________________________________________________

 

About Abhinav Raghunathan:

Abhinav Raghunathan is a graduate student at the University of Pennsylvania majoring in Data Science and focusing on fair AI and ML. Prior to Penn, he dual-majored in Computational Engineering and Mathematics at UT Austin, where he delivered a TEDx Talk on the dangers of algorithmic bias. Abhinav publishes independently on topics related to ethical AI. His writing has been published in Open Data Science, Towards Data Science, and the Montreal AI Ethics Institute. After obtaining his Master’s degree, he will be joining Vanguard’s Investment Management Fintech Services team as a data scientist.