
The Current State of AI and its Unspoken Challenges


Hello, everyone. I’d like to welcome you to the first episode of our new podcast, A.I. in the Real World. Today, we’ve got a really exciting conversation between Taleb Alashkar and Ramesh Raskar. I’m going to intro both of them, we’ll get a little feel for who they are, and then we’re going to let them dive into the world of A.I.

So first, we’re going to start with Taleb. Taleb, how are you today?


Doing great. Thank you, Jeffrey.


Excellent. Excellent. So Taleb, why don’t you tell us a little bit about yourself, and then I can kind of dive in and poke around if we need to poke around a little bit more.


Yeah, thank you, Jeffrey. I’m the CTO and co-founder of AlgoFace. I earned a Ph.D. in computer vision and machine learning in France between 2011 and 2015, and then I moved to the US in early 2016 to work on facial biometrics and augmented reality for human faces.

After that, I worked for a driver-monitoring-systems company here in Michigan, where I moved from Boston. And since 2017, I’ve been working on AlgoFace, which was incubated inside another company, Algo, and spun off as AlgoFace last March.

We focus on inclusive, unbiased A.I. solutions for the human face.


Awesome. Thank you very much. That’s going to make for a great conversation here today. Ramesh, I’m going to toss it over to you. Same question: give us a little intro about yourself, tell us some accomplishments, and why we’re interested in hearing from you today.


Thank you, Jeffrey. Wonderful to be here. I’m Ramesh. I’m a professor at MIT and a lot of my work is at the intersection of computer vision, machine learning, imaging and digital health. I think we live in an amazing era where improvements in computing are impacting not only other digital systems, but also many physical systems.

So my excitement is really at the intersection of the digital and physical systems.


Awesome. Awesome. So this is definitely going to be a great conversation. Ramesh, I see you have a bunch of awards in here. You know, I’m looking at your LinkedIn. I advise everybody who’s listening to check out both Taleb and Ramesh’s LinkedIn.

Tons of great accomplishments, tons of great experiences that they’ve had. Ramesh, you’ve got a ton of awards here that I’m looking at. Is there any one of these that really jumps out that we want to talk about?


I mean, the awards that I’m really proud of are for the work done by my students, my post-docs, and my colleagues. And you know, it’s unfair that the award has my name on it. Really, it’s a recognition of all the hard work, all the talent that I get to work with on a daily basis.

So awards like the Lemelson Award, which recognizes accomplishments in engineering, or my ACM SIGGRAPH award, which recognizes accomplishments in visual computing, just make me very proud, given my team here.


Awesome. Awesome. Yeah, definitely shows. Obviously, you know, your guidance and your tutelage certainly moves these things along, which is going to be fantastic. So I want to kind of dive into our conversation here and really, you know, I’m going to take a back seat.

I want these two brilliant minds to kind of have a conversation about where our A.I. is, where they see it going, and some of the cool technologies that are going into A.I. today. So I’m going, I’m going to turn it over to you guys, and I’ll let you guys kind of go at it.

And certainly, I’m here if you need me.


Yeah, I can start. Thank you, Jeffrey and Taleb. Let’s talk about the current state of A.I. and some unspoken challenges. As we look at the tremendous progress in the ability to exploit compute, data, and new algorithms in A.I., it’s become very obvious that, as far as compute and algorithms are concerned, there’s been tremendous progress.

But data continues to remain one of the bottlenecks. One mechanism around that is to harness data that’s already out there, or to tap into sources that remain siloed. And to do that, there are two or three problems we need to solve.

The first is that we have to figure out whether the data is of reasonable quality, so some kind of data janitorial work is very critical. The second is that a lot of the data is actually very sensitive.

It could be health data, identifiable data, or financial data, and it’s important to gather it in a way that preserves privacy, overcomes bias, and respects other important values. And the third is that the data itself is illiquid; to make it liquid, we need to create new incentive mechanisms so that folks start sharing.

So far, in the Web 2.0 world, the incentive has mainly been some indirect benefit, whether it’s social media or traffic directions and so on. But as time goes on, we need a completely different economy to make the data more liquid and lubricate the whole ecosystem.

So I see three main challenges in dealing with data. Taleb, I know you’re excited about many aspects of compute and algorithms as well as data. What do you see as the current bottlenecks and unspoken challenges?


Yeah, when it comes to data, that’s true. A.I. has really entered our lives. It moved out of research labs and into our lives around ten years ago, for two reasons: the computational power of GPUs, plus the availability of huge amounts of data, especially for deep learning-based, supervised machine learning solutions, which are the most common form of A.I.

Now, maybe more than 90% of the A.I. solutions in action are deep learning-based and follow a supervised learning approach. But when you want to build a real-world A.I. solution and put it in the hands of millions of people, you quickly figure out that data is a huge challenge, for the same reasons you mentioned, which I won’t repeat: the data is illiquid, and it may contain private information, such as medical data.

Or biometric data, which we deal with at AlgoFace. Or the data itself is not representative enough: you acquire data from different machines, from different sources, and it has different characteristics. And in medical data and many other scenarios, there are certain classes or areas with a lot of data, and there are corner or edge cases for which you have very, very little data.

That’s a very big challenge, time-consuming and resource-consuming, ahead of any A.I. or machine-learning-based project. Now, thanks to advances in two areas, generative networks and rendering simulators, we’ve reached a stage in the last couple of years where synthetic data is realistic enough to be combined with our real, raw data to train machine learning models. Synthetic data generation is a very hot topic, and it’s very important. It can save tons of time in getting machine learning models to production, and it can give us more robust A.I. solutions, because we get more diverse and more accurate datasets, if we can produce extremely realistic synthetic data in a short amount of time.
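The workflow described here, fitting a generative model to a small pool of real data, sampling realistic synthetic examples from it, and training on the combined set, can be illustrated in miniature. This is only a sketch: the class names and numbers are invented, and the one-dimensional Gaussian stands in for what would really be a GAN or a rendering pipeline.

```python
import random
import statistics

random.seed(0)

# Tiny "real" dataset: one scalar feature per class (invented numbers).
real = {"class_a": [1.0, 1.2, 0.9, 1.1],
        "class_b": [3.0, 2.8, 3.2, 3.1]}

def fit_and_sample(values, n):
    """Fit a simple Gaussian to the real samples, then draw synthetic ones."""
    mu = statistics.fmean(values)
    sigma = statistics.stdev(values)
    return [random.gauss(mu, sigma) for _ in range(n)]

# Harness the few real samples to generate many synthetic ones.
augmented = {label: vals + fit_and_sample(vals, 100)
             for label, vals in real.items()}

# Train a trivial nearest-centroid classifier on the combined pool.
centroids = {label: statistics.fmean(vals)
             for label, vals in augmented.items()}

def predict(x):
    return min(centroids, key=lambda label: abs(x - centroids[label]))

print(predict(1.05), len(augmented["class_a"]))  # class_a 104
```

The same two-step shape, fit a generator on scarce real data and then train the downstream model on real plus synthetic, is what production pipelines do at much larger scale.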


So that’s fascinating, right, Taleb? I mean, the ability to create new synthetic data reminds me of the trends in genetic cloning, you know, Dolly the sheep. And the concern always is: yes, it’s okay to clone Dolly the sheep, but it’s difficult to create something entirely synthetic.

And so right now, in the world of synthetic data, we always need some real data before we can create more synthetic data. So let’s go back to the beginning of synthetic data. I mean, there are certain benefits to creating synthetic data.

First of all, it can be created in large quantities without worrying about tapping into real-world data. Once you have some real-world data as a genesis, synthetic data can be simulated in many ways.

So compute itself can generate data, which is always fascinating. And it has the benefit that when you do release the synthetic data, you don’t have to worry as much about issues like privacy, because if you do it right, the synthetic data need not reveal any identifiable information from the original data.

And probably the most important is that with synthetic data, you can play with the parameters that matter for training machine learning, like bias and sample size and so on. So we feel that the world of synthetic data has emerged very quickly over the last few years, as you said, because of new opportunities in generative models. But tell us a little more about how you see this challenge of starting with at least some real data, and what the potential is for getting to truly, purely synthetic data.


Yeah, that’s very true. It’s maybe a Catch-22 problem in the world of data: I need a dataset large enough to generate synthetic data. If our ultimate goal is to eliminate the need for real data, currently that’s not possible; it’s not viable right now. But the advantage is that using generative-model techniques plus transfer learning, sometimes you need only a really tiny amount of data to generate tons of data. So let’s assume I need to train such an algorithm, and I need, say, 1 million images or 1 million scans or whatever.

And you can really get this 1 million out of maybe a few thousand if you have a pre-trained model and use a transfer learning technique. The dilemma here is that the synthetic data still inherits certain characteristics, the statistical distribution, of the real data, and we need to be careful, especially when it comes to privacy. If there are overrepresented classes or objects inside the real data that we are using to generate the synthetic data, that information can be leaked into the synthetic data as well. We need to be very careful about that.

We shouldn’t use it blindly. There are some techniques to overcome that, such as differential privacy or federated learning; maybe, Ramesh, you can also comment on these two techniques.

It’s a still-emerging field, and there’s no final solution, but I still see these techniques as very valuable: using differential privacy and federated learning can reduce and mitigate these threats to a high degree.
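The differential privacy idea mentioned here, perturbing what you release so that no single individual’s record is identifiable, can be sketched with the classic Laplace mechanism. The records, the query, and the epsilon value below are hypothetical; a real deployment would use a vetted DP library rather than hand-rolled noise.

```python
import math
import random

random.seed(42)

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise by inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace(1/epsilon) noise is
    enough to mask any individual's presence.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical patient ages; analysts only ever see the noisy answer.
ages = [34, 51, 29, 62, 47, 58, 41, 70]
noisy = dp_count(ages, lambda a: a >= 50, epsilon=0.5)
print(round(noisy, 2))  # randomized, but near the true count of 4
```

The trade-off the speakers discuss is visible here: a smaller epsilon means stronger privacy but noisier, less useful answers.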


That’s absolutely true. I think those are the challenges with synthetic data: first, the need for some original data before you can run a generative model on it, and second, from an algorithmic point of view, the need for really good transfer learning algorithms.

Because what you’ve trained on synthetic data may not generalize to the distribution of the real data that you actually care about. And the benefit of synthetic data is that you could reduce leakage and achieve some kind of privacy, or even protect a trade secret.

Privacy here is very critical. I would say techniques like differential privacy, or federated learning, or what we do in our group at MIT, which is called split learning, which is a variation of federated learning.

So techniques like split learning and others can definitely play a very critical role in maintaining privacy. But don’t you think they are somewhat orthogonal to synthetic data? Because the whole idea behind purely synthetic data is that the data itself is extremely high quality in its raw form; if you use differential privacy and add some noise to it, you reduce the value of the synthetic data. And if you use federated learning or split-learning-like techniques, which are more suitable for distributed computing, then you run into the problem that you need some coordination between client and server.

Don’t you think the benefit of synthetic data is that you can just put it out there, and you can either buy it or sell it or produce it yourself? It’s almost like a JPEG file that anybody can decompress after the fact.
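The split learning contrasted with synthetic data here can be sketched as a forward pass cut in two. Everything below is a toy: the weights are random, there is no training loop, and a real system would also propagate gradients back across the same client/server boundary.

```python
import random

random.seed(3)

def linear(weights, x):
    """One dense layer: y[i] = sum_j weights[i][j] * x[j] (bias omitted)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

# The client holds the raw data and the first layer; the server holds
# the rest of the network. Only intermediate activations cross the wire.
client_weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
server_weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]

raw_record = [0.2, 0.7, 0.1, 0.9]             # never leaves the client
smashed = linear(client_weights, raw_record)  # "smashed data" sent onward
output = linear(server_weights, smashed)      # server finishes the pass

print(len(smashed), len(output))  # 3 2
```

This makes the coordination cost concrete: unlike a synthetic dataset you can publish once, split learning needs a live round trip per example.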


Yeah, that’s true. Those two techniques, which help you prevent the leakage of sensitive information from real data into synthetic data, may at the same time reduce the quality or the value of the synthetic data. But I believe if we keep pushing in this direction, from both industry and academia, we will find some sweet spot between protecting privacy in the real data and having high-quality synthetic data.

Also, synthetic data is not only about privacy. Yes, privacy is a very important topic now in the age of A.I.: how can we have highly accurate A.I.-based solutions without putting everybody’s privacy at risk? But synthetic data generation has many other values.

For example, corner cases and edge cases in many scenarios. Think about detecting some skin condition using A.I., in dermatology. There are some common diseases for which you can find tons of images.

But there are some orphan diseases with very little data, and however long you wait, you will never get more. So you always have a hugely imbalanced dataset of classes, and without synthetic data generation or data augmentation, it would be very difficult to create a solution for this data.

So synthetic data, I believe, plays a very critical role with imbalanced data as well, especially when you want to tackle real-world scenarios like skin conditions and disease. What do you think about that?
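The imbalance problem described here can be made concrete with the crudest possible fix: resampling the rare class until the counts match. The labels and counts below are invented; with images, one would generate new synthetic examples rather than duplicate existing ones.

```python
import random

random.seed(1)

# Hypothetical dermatology labels: a common condition vs. a rare one.
dataset = [("common", i) for i in range(500)] + [("rare", i) for i in range(5)]

def oversample(data, target_per_class):
    """Redraw (with replacement) from each under-represented class
    until every class reaches the target count."""
    by_class = {}
    for label, x in data:
        by_class.setdefault(label, []).append((label, x))
    balanced = []
    for label, items in by_class.items():
        balanced.extend(items)
        balanced.extend(random.choice(items)
                        for _ in range(max(0, target_per_class - len(items))))
    return balanced

balanced = oversample(dataset, target_per_class=500)
counts = {label: sum(1 for l, _ in balanced if l == label)
          for label in ("common", "rare")}
print(counts)  # {'common': 500, 'rare': 500}
```

Naive duplication invites overfitting to the handful of rare examples, which is exactly why realistic synthetic generation is attractive for edge cases.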


I think that’s a very valid point: the bias in the data that we worry about can really play a critical role. And the example you gave, of highly heterogeneous datasets like skin conditions or different disease conditions, is an important one.

So I take that for sure. Maybe on that note, we can jump into some of the things you are already doing at AlgoFace, Taleb. Can you share a little bit about how you use synthetic data at AlgoFace?


Yeah. At AlgoFace, we focus on developing A.I./A.R. solutions related to the human face, like face tracking technology, detection of certain facial attributes, and things related to hair as well, which all requires collecting data and labeling it. For example, we have two online facial landmark tracking systems, plus other face-attribute detection.

Collecting the data and labeling it is an extremely difficult, tough, and time-consuming problem with a human in the loop. When you put 100 people on labeling your data, it is difficult to control the consistency between people who have different opinions on the same image. So recently, we started to use synthetic data internally to augment our pool of data.

So we have synthetic faces that we can generate in an extremely realistic way. Also, we can control the attributes that we are generating, to a certain degree. So not only does generating synthetic images make your dataset more balanced and reduce privacy concerns.

In many cases, you can also create 100% perfectly labeled data from the start. That’s a huge advantage, right? Think about it: if you can generate people with different skin colors, and you have control in the model to generate those photos in a realistic way, you don’t really need to label them. That can save you months of work and a lot of resources as a startup.
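The “perfectly labeled for free” property follows from the fact that a renderer is handed the attributes before it draws a single pixel. The function and attribute names below are hypothetical stand-ins for whatever a real face-rendering pipeline exposes.

```python
# Hypothetical attribute vocabularies a face renderer might expose.
SKIN_TONES = ["type_I", "type_II", "type_III", "type_IV", "type_V", "type_VI"]
POSES = ["frontal", "left_profile", "right_profile"]

def render_synthetic_face(skin_tone, pose):
    """Stand-in for a real renderer: returns a fake 'image' plus
    ground-truth labels, known exactly because we chose them."""
    image = f"<pixels:{skin_tone}:{pose}>"  # placeholder for pixel data
    labels = {"skin_tone": skin_tone, "pose": pose}
    return image, labels

# A perfectly labeled, perfectly balanced set, with zero human annotation.
dataset = [render_synthetic_face(tone, pose)
           for tone in SKIN_TONES for pose in POSES]
print(len(dataset))  # 18 samples, one per (tone, pose) combination
```

Because the labels are inputs to the renderer rather than outputs of human judgment, inter-annotator inconsistency disappears by construction.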


That’s great. I mean, think about the world of renderings, which, as you know, a lot of the self-driving car industry uses for its training, versus generative synthetic data, which, as we just discussed, still has its roots in real data.

These two techniques are somewhat distinct today. Do you see, in your own work, these things coming together? Because there are physical phenomena that are sufficiently easy to render with purely rendering-based approaches but for which we don’t have data, and the other way around. So how do you see these two strains merging at some point?


Yeah, I see it coming together already. I see people in the market already trying to marry these two things to get more efficient synthetic generation, particularly with simulation engines, mostly for autonomous cars right now. There are big companies using them to simulate certain weather conditions or bad road conditions that they cannot capture in real-world scenarios, and training their autonomous driving systems on them.

When it comes to the human face, it is not easy. There is no easy way to generate a realistic synthetic face model. You can maybe easily create a cartoonish face, but if you want a digital artist to create a really, really realistic 3D model using one of the platforms, it takes hours; it is not easy. But you can merge these two techniques: you can start by having 3D artists create libraries of digital assets, then feed those assets alongside the real data into generative models to create new 3D or 2D faces together.

I think both of them can play nicely together, but it’s still in its infancy; it’s just getting started, I believe. In the next three to five years, we will see many platforms for generation where traditional artists

will start to incorporate generative-model techniques, embedding them to help generate and add new features to those faces. And that may raise certain concerns, or questions about how we perceive faces.

I think we will start to look at faces that we don’t see in the real world. We don’t know explicitly how those algorithms are working, and maybe that will affect our perception of the human face.

Recently, I bought an art book about face coloring for kids. What is striking about that book is that all the pictures of faces were created by a generative model, by A.I. So there is no artist drawing those faces, and we are putting these books in the hands of kids; maybe their perception of the human face will be affected slightly.


Fascinating, fascinating point about how society might perceive these faces. I mean, Hollywood actors have been worried about virtual actors replacing them for a very long time. It hasn’t happened yet, but maybe what you’re saying is that fully virtual characters may not happen, while synthetic data generation from existing, lowly paid performers could actually start creating some ripples. What do you think about that, Taleb?


Yeah, I think digital influencers and digital avatars are becoming a really big thing, especially in South Asia. There’s a huge industry now around digital influencers and avatars: totally digital characters, with full bodies, faces, and animation. There are even YouTubers among these characters…

…they have multimillion-follower YouTube channels, some of them targeted at our kids as well. For me, from a scientific perspective, this is extremely exciting. But at the same time, it’s concerning how those digital characters will communicate with their audience, and what kinds of messages they’ll send, in visual and audio ways. Maybe soon on YouTube or social media we’ll be asking: is this a real person, or is this a digital character? You can find them posting, creating content, and even communicating with each other. And when it comes to kids and education, I think that can be interesting; it can have a lot of advantages, and also some concerns to think about.


Certainly. I mean, when it comes to generative synthetic data, is there something special about human faces compared to other data? Because, evolutionarily, we are so attuned to even the most minute changes, to any departure from reality.

So is there something specific about human faces that you encounter in your work at AlgoFace, compared to other synthetic data generation?


Absolutely. The human face is the most captured object in history, from painting to photography: millions of faces. Our face represents our identity, and everybody wants to capture their face and upload it in the era of social media. Even in our brain, there are certain areas of the visual cortex dedicated to processing faces. So humans are extremely sensitive to anything not very realistic, or not correct, in a human face. Our brain has been trained to recognize people from their faces and to recognize their emotional state, whether they are happy or stressed. Even our facial expressions are a very fundamental means of communication between people.

So people genuinely care a lot about faces, and when you start to create synthetic faces, the bar is really high. For example, if you are generating synthetic products for shelf detection, for automating a retail store, or if you are generating things like cars, people are more unguarded and can be less sensitive. But human faces are different, and at the same time they are generally similar. A human face is one object: you can accurately train face detection because all human faces can be grouped as one object. But at the same time, we can recognize, “This is Taleb, this is Ramesh, this is Jeffrey.” As you say, our brain is trained to pick up every subtle facial movement or change in facial attributes, and that carries over automatically to any A.I. solution. For example, if you are generating faces for emotion recognition, you need to be extremely careful about how you are eliciting the emotion you are generating, and also about the faces themselves and their pigmentation.


Yeah, absolutely. This uncanny valley problem has not been easy for even special effects artists to overcome. It involves a lot of creative, talented input, and getting a pure algorithm to do that is definitely an uphill task.

Given that, as you know, there are about a dozen companies out there that sell so-called generative synthetic data products. But I assume that for AlgoFace, creating generative, representative faces is still a very unique problem. So what’s your own approach inside AlgoFace? Do you have your own modules, or are there modules or products that are pretty commonplace now that you would encourage our viewers to use?


Yeah. You know, there are very few players in this market. Around the globe, to the best of my knowledge, there are fewer than five, something around five, serious companies with serious products in the market now.

All of them emerged in the last couple of years. So there are very few players, and there are a few companies we are talking with about their ability to generate realistic synthetic faces. At the same time, we are working on our own solutions to create synthetic data that fits our needs, because at AlgoFace we are working on some unique solutions that are not very common, so a service provider may not have exactly what we need, and what we need may take a lot of customization on their end.

So we are also working internally, because we have already built a proprietary dataset, we know our problems very well, and we have a very good pool of talent in that area. Mostly, we are creating a lot of synthetic data internally at this point. This is still an early-stage initiative; it needs time to mature and maybe one day replace the need for any real data at all. The dream is to train an A.I. model on 100% synthetic data, test it on 100% synthetic data, and have it work perfectly, or as well as if you had trained it on 100% real data.


Yes, that’s fascinating. I still remember the demo video that you and the AlgoFace team posted about two years ago, showing facial landmark detection in real time, independent of race, gender, skin color, and so on. I think that remains one of the leading demos in this space. So I’m looking forward to some amazing innovations in the field of generative synthetic data coming from your team as well.

And then, just to wrap up: how do you see it? One dream, you said, is needing very little data, on the order of a hundred images being enough to create other synthetic data. What is the other dream? If I look at children, they’re able to see just a few snapshots of new people and start making inferences based on that, whether it’s faces or poses or other visual inputs. And if I look at the progress we have seen in speech, synthetic generation there has progressed dramatically.

Can you share with us how you see this field progressing over the next few years, especially with respect to how you see this inside AlgoFace?


Yeah. In general, I think one of the fundamental problems in A.I. that needs to be solved is the ability to transfer knowledge from one problem or one space to another, like our brain does. People can recognize a new object after seeing it one time. Show a six-year-old a new game or a new tool once, and they can memorize it and recognize it ten years later, even if they saw it only once in their life. That’s not possible at all today: if you build an object classification A.I. model for, say, 1,000 objects, and you show it one example of a 1,001st object or some other new object, it will simply be forgotten. You cannot do it.

But I think our brain works in a different way: we build an object-recognition ability in general, regardless of the objects themselves, and then we can start adding new objects to that library. We don’t really need 1,000 or 10,000 examples of the same object in different orientations to learn everything about it.

If we arrive at that stage of maturity, with self-supervised or unsupervised learning, it can reduce and even remove the huge need for new labeled data. All these problems we are trying to solve around data, whether synthetic data generation or real data, exist because inherently all currently available A.I. models are extremely hungry for data.

I think the next breakthrough that can happen is developing A.I. models that are not so very hungry for data. That’s really the billion-dollar question; if somebody can crack this problem, that would be amazing. Inside AlgoFace, we are not a multibillion-dollar research institute; we are an ambitious startup trying to solve a very tough problem area: the human face. We want extremely accurate and inclusive A.I.-based solutions, analyzing the human face for good: for face tracking, face animation, augmented reality for the human face, and maybe tackling some telehealth.

Telehealth is huge now, booming after COVID: digital psychiatry, digital health, digital vital-sign detection. If we can build real-time solutions that run inside the consumer device and are inclusive, meaning they can see all faces regardless of shape or color or skin condition, that can make a very big difference. And this is what we are fighting for and trying to crack at AlgoFace.


That’s fantastic. I mean, you’re absolutely right that generative synthetic data, with its transfer learning and generalization, is just the beginning, but it’s a really exciting space. And it’s always a joy to talk to you, Taleb.

I hope in the coming weeks we get to talk more about, you know, other aspects of A.I., especially as it relates to face A.I. Thanks a lot.


Thank you very much, Ramesh.


Gentlemen, I want to thank both of you guys for a fantastic conversation. I want to thank everybody for tuning in to this week’s episode. Again, this is A.I. in the Real World. We had Taleb and Ramesh talking today about A.I., synthetic data and all the other interesting things that are going into this evolving technology.

Again, our podcast will be an ongoing discussion about this emerging and advanced technology with experts like Taleb and Ramesh. We’re going to be covering topics such as the science, the legal aspects, and all the ramifications that go along with it.

If you have questions, if you have topics you want to hear about, please feel free to reach out to me. Again, my name is Jeffrey Freedman, I can be reached at

We’re looking forward to seeing you on future podcasts and thanks again to both of you. Really appreciate the time today, and I hope everybody got a lot out of it.


Thank you very much, Jeffrey.


Thank you.