AlgoFace sat down with David Ryan Polgar, Founder of the non-profit All Tech is Human, an organization committed to building a responsible tech ecosystem by uniting a diverse range of stakeholders in co-creating a better tech future.
David is a tech ethicist and a lawyer by training. He is a sought-after international speaker and commentator and a member of TikTok’s Content Advisory Council. He talks about #democracy #responsibleai #responsibletech and how #alltechishuman.
Interview with David Ryan Polgar
Founder of All Tech is Human Part 2
Question: Let’s talk more about the mission of All Tech is Human. What do you want to accomplish?
At All Tech is Human, we focus all of our activities on expanding and developing a more cohesive responsible tech ecosystem and movement. That matters because these are complex issues, and we need professionals from diverse backgrounds involved in this space.
I see this as collective action. No individual can solve these problems alone. These are societal issues intertwined with civic engagement, and an expanded ecosystem opens new pathways for people to enter this space.
All Tech is Human released our first Responsible Tech Guide in September 2020 which we update every September. When I launched All Tech is Human in 2018, we hosted Ethical Tech Summits in New York, San Francisco and Seattle. We had a lot of college students attending our events. We also had people looking to change careers attend as well as professionals who had left the big tech companies. They all had a common pain point and were looking to get involved in the discussion, helping to change the dynamics.
They would always ask the same questions. What does this career look like? Where can I go to school for it? How do I upskill? What is the job description? Who’s hiring? How do I volunteer for an organization? Who are the organizations? What books do I read? What podcasts should I listen to?
I saw a real need for a guide that democratizes information so everyone has equal access to it. That is why we created the Responsible Tech Guide: to highlight the people, organizations, and ideas involved in the growing responsible tech movement and ecosystem.
I often hear from people that they wish more people cared about tech ethics. I tell them that they do. I know hundreds of people who care deeply about this topic.
I’m connected with dozens of organizations that are putting out white papers, that offer volunteer opportunities and that are hiring. The average person doesn’t know that because we are still not a cohesive group. We span many professions, organizations, etc. We are trying to make it more cohesive to let interested parties know that there are over 300 organizations committed to AI ethics.
Many of the large tech companies are also getting involved. Salesforce created an ethical tech department, and Microsoft is doing something similar. These departments can sometimes conflict with a company’s purpose, because examining the impact of technology can run counter to the business’s objectives.
Tech ethics attracts a lot of academics, and that makes sense. But a university environment, with its academic freedom, is often totally different from a big corporate position that tightly restricts what an individual can say publicly. I think we are going to see a lot of change over the next couple of years with regard to corporate whistleblowing.
We are talking about how it affects how people vote, how people see the world, how happy they are. I want a world where every employee feels empowered to speak up when they see something that they think is wrong. If we don’t have a world like that, it goes against public interest.
Question: How do we create more responsible technology?
I think what people are struggling with, whether it’s social media or anything tangentially related to technology, is that there are inherent tradeoffs, particularly as they relate to human behavior. In my opinion, one of the biggest problems is that we frame everything as needing a tech solution, when in fact it tends to be a mix of intertwined issues involving individuals, education, agency, and empowerment, which creates the need for better design and policies.
In other words, it is not centered on how we make better tech alone; it is about developing socially responsible tech companies, having proactive regulation, and educating the populace to empower individuals to be good digital citizens. That is the mission of All Tech is Human.
That’s how All Tech is Human views complex issues. For example, when we think about crime in New York City, nobody says there is one single fix that reduces crime. It’s about the criminal. It’s about mental health. It’s also a bit about education. It’s about design and urban planning. It’s about economic opportunities. It’s even about the weather; we tend to see crime spike in warmer weather.
When we think about our behavior on social media platforms, complex issues arise because you have an individual who is interacting with technology. So how is the individual affected by the design?
A larger issue is how social media is affecting us, given that the relationship is bi-directional. I personally have to stop anyone who says that technology is just a tool. I’ve heard the analogy that it is like a hammer: you can build a house with it, or you can choose to hit somebody over the head. The difference with social media platforms is that we are being influenced by them but are in turn also influencing the platform. You don’t influence your hammer after you buy it from Ace Hardware, right? It’s a hammer. It doesn’t change its shape.
Question: So let’s turn to the Blueprint for the AI Bill of Rights. What are your thoughts on that? How is it similar to your Responsible Tech Guide?
We’re loosely connected, and that is a good thing, since we’re tied in with so many different organizations and individuals. We have now interviewed over 500 individuals across sectors, including government, for our Responsible Tech Guide. At this point we can barely keep up with what is happening, so it is also positive to have more resources like the AI Bill of Rights. There has always been a lot of activity happening outside of government; what we’re trying to do is connect those efforts.
We held our Responsible Tech Summit in May at the Canadian Consulate in New York, and on August 17th we held a gathering on youth, tech, and well-being at the Australian Embassy. A lot of governments have reached out because they see a need to tap into our knowledge base.
Governments across the globe are looking into regulating tech policy, but there are fundamental problems within the system. For example, Mia Dand started a movement for women in AI ethics. Her work addresses a classic problem: we keep seeing the same voices opining on this topic. Mia created a list to provide the media and other sources with the names of diverse voices in AI ethics. There is no excuse for a lack of diversity in AI ethics, because there is incredible talent out there.
I see this happening with governmental bodies. They are looking to create a better system to create trust and public safety. At All Tech is Human we are creating a hub that is a free resource to tap into on various topics. There’s a lot more work we need to do to improve accessibility and transparency so people can find the information quickly. I think the White House is recognizing that as well. From the policymaker standpoint, we need to move away from thinking that there’s actually a silver bullet.
Question: Do you think the big tech companies will try to influence the blueprint so it does not get formalized?
I think that responsible AI always struggles with the fact that there are no fixed answers. It is not that one company is ethical and another is not; all companies are evolving with the current environment.
I constantly get asked: Do you have a list? Do you rank companies? Do you certify them? My response is that we would never do that, because it’s not fixed; a company could be good today and bad tomorrow.
It is part of the reason I chose the unusual name, All Tech is Human. It makes a lot of sense to me. It’s like a Buddhist distillation of the fact that it’s about humans and what we’re doing in terms of regulation, design, development, deployment, as opposed to the technology per se.
That’s the key difference. The academic framing would be that there is a huge fight going on behind the scenes, with one crowd that is more techno-deterministic, believing we are simply reacting to technology. Our tech future is really about what we collectively decide to do. And that’s why it’s named All Tech is Human. We are the ones designing it. We can build a better tech future by getting better people involved.
I had an aha moment of my own: why would you rely on just one person’s opinion?
How do you embed ethical thinking throughout the entire DNA of the organization? How does that become part of the process? I would say that’s the equivalent of the Chief Diversity Officer. Unless you change the culture and empower the person to make change, having a Chief Diversity Officer is not really effective. It is really about embedding it in the culture.
Another example would be what happened with Google’s AI ethics team. If you have really intelligent, respected individuals, but they’re not capable of carrying out what they’re charged to do, then that’s a cultural issue that the company needs to fix.
About David Ryan Polgar:
David Ryan Polgar is a pioneering tech ethicist, Responsible Tech advocate, and expert on ways to improve social media and our information ecosystem, along with increasing the ethical considerations regarding emerging technologies. He specializes in uniting a diverse range of stakeholders in order to tackle complex tech & society issues, cultivating conducive environments for forward progress.
David is the founder of All Tech Is Human, an organization committed to connecting and expanding the Responsible Tech ecosystem; making it more diverse, multidisciplinary, and aligned with the public interest. As the leader of All Tech Is Human, he has created a unique grassroots-meets-traditional-power-structure model that is uniting thousands of individuals across the globe to co-create a better tech future. The non-profit has a Slack community of over 4k, has profiled hundreds of leaders in the ecosystem, holds regular summits and mixers, developed a mentorship program, and more.
In March 2020, David became a member of TikTok’s Content Advisory Council, providing expertise around the delicate and difficult challenges facing social media platforms to expand expression while limiting harm. He appears in the newly-released documentary, TikTok, Boom. David is an expert advisor for the World Economic Forum’s Global Coalition for Digital Safety.
An international speaker with rare insight into building a better tech future, David has been on stage at Harvard Business School, Princeton University, Notre Dame, The School of the New York Times, TechChill (Latvia), The Next Web (Netherlands), FutureNow (Slovakia), Infoshare (Poland), the Future Health Summit (Ireland), NATO, and many more. His commentary has appeared on CBS This Morning, TODAY show, BBC World News, MSNBC, Fast Company, The Guardian, SiriusXM, Associated Press, LA Times, USA Today, and more.
David is a monthly expert contributor to Built In (writing about the Responsible Tech movement), and an advisory board member for the Technology and Adolescent Mental Wellness (TAM) program, and a participant in multiple working groups focused on improving tech and aligning it with our values.
The throughline of David’s work is that we need a collaborative, multi-stakeholder, and multidisciplinary approach in order to build a tech future that is aligned with the public interest.