Profound

S4 E6 - Dr. Khai Minh Pham - Redefining Our Understanding of AI

April 02, 2024 John Willis Season 4 Episode 6

In this episode of Profound, I talk with Dr. Khai Minh Pham, whose unique approach to artificial intelligence challenges conventional paradigms and opens new frontiers in AI research and application. Dr. Pham, with his extensive background in both medicine and artificial intelligence, shares his journey towards creating a distinctive AI framework that prioritizes knowledge over data, steering clear of the traditional data-centric methodologies that dominate the field.

Dr. Pham recounts his early realization of the limitations inherent in human cognitive processes and how this propelled him to explore AI as a means to augment human memory and decision-making capabilities. 

Central to this episode is Dr. Pham's critique of the prevailing AI models that rely heavily on data processing and pattern recognition. He introduces his concept of "macro connectionist AI," a system that mimics human reasoning more closely by forming high-level knowledge representations rather than merely processing data inputs. This approach, according to Dr. Pham, not only enhances AI's problem-solving capabilities but also significantly reduces the computational resources required, challenging the current industry trend towards increasingly complex and energy-intensive AI systems.

You can find Dr. Pham's LinkedIn below:
https://www.linkedin.com/in/khai-minh-pham/

John Willis: [00:00:00] Hey, it is John Willis. Just wanted to give you a heads-up on this next podcast. I've been really focusing a lot on Generative AI, trying to find out where the Deming hooks are. There are some interesting things there, but there's even more interesting stuff about Generative AI itself. So this is going to be the first podcast like this I do in the Profound Deming podcast series.

I may actually turn this into its own podcast. As some of you might have seen, I've got a newsletter called Attention Is All You Need. I also have the Deming newsletter on LinkedIn. But we may move this over to a new podcast series. For now, I'm shimmying it into the Deming one.

I hope you enjoy it. Hey, give me some feedback if you want me to continue to do podcasts like this on the Deming podcast. If you don't like it, let me know too; it would just be nice to get any feedback, on LinkedIn or any place you [00:01:00] can find me. Thanks again for listening. Like I said, I'm always around.

So if you want to poke me with any questions, feel free; again, LinkedIn is probably the best place. I hope you enjoy this episode with Dr. Pham, a brilliant, unbelievably interesting guy who I think is really trying to rethink how we should be doing AI. I met him at DevOps Days LA and SCaLE, you know, in Pasadena, just recently.

And he was gracious enough to come and do a podcast with me. Again, hope you enjoy it. Hey, this is John Willis. I wanted to talk about, like, you know, sometimes people ask me, at my age, why do I travel so much?

And you know, like, don't you want to settle down? You know, one of the reasons I travel is that I get to meet people like my guest; there's just no possibility of meeting this person unless I go to DevOps Days LA and SCaLE. And so I'm really, really excited [00:02:00] to introduce my next guest, Dr. Khai Pham, you know, MD, PhD, extraordinaire. Dr. Pham, could you introduce yourself?

Dr. Khai Pham: Thanks, John. First of all, I'm delighted to do this podcast, and we had a great discussion before it. So, about myself: yeah, as you mentioned, I have an MD and a PhD in AI from Sorbonne University in France. And what is interesting is the way I got into AI.

It is based on my weakness. And a lot of the time, when you have a weakness and you try to overcome it, that's how you go deeper. My weakness is, my mom is Vietnamese. It's not, it's not my weakness; you have to see the weakness. Because she's Vietnamese, I had to be a physician, or a lawyer, or a general. There is no other choice.

So I'm a physician. My sister is a physician. But in the second year of medicine, and in [00:03:00] France it's eight years of medicine, it's long, but in the second year of medicine, I realized that there is no way I can remember all that and how to combine all this knowledge. Not data: knowledge. So then I said to myself, yeah, AI should help, because I had just heard about it, and the computer should help.

Then I went to see some AI experts, and they explained to me the way they do it: if A and B, then C. And now, well, sometimes you don't have A, you're not sure about B, but you still have to decide. So this really motivated me to start my own journey. I decided not to read any AI books, because I didn't want to end up with the same conclusions.

And the approach I had was: I start with the problem and try to design a system that fits my problem, not the opposite. Most research does the opposite: design a formal system and then [00:04:00] try to fit how we think into this formal system. So then for my PhD, my thesis was about providing a unified

cognitive framework for AI. You know, you're young, you don't care if it's difficult. I was thinking to myself: yeah, in physics people work on a unified theory, so why not in cognition, in particular in AI? So I did that. It was a great thesis, and I said to the president of the university: if I'm right, I'm going to change AI, so who wants to work with me?

Of course, nobody, because I'm a student and they're all professors. But this frustration started to grow. When I started to publish, I published in IEEE peer-reviewed venues and so on, and I realized two things. If you are not in the trend of what's going on, people don't listen to you. [00:05:00] Worse, people think you are dumb.

And in the academic world, it was difficult to get enough resources. So I decided to start my first company to have money to do my research, which is completely stupid, right? Because research is a cost, it's not revenue. Yeah. So I started without money, without a computer. That can be another story we can talk about.

But then, yeah, that's how my first AI company took off in Silicon Valley. It worked very well. It was in fintech and CRM, because at that time life science was not what it is today. Great exit, 600-plus million. And I was depressed, because that was not my goal. My goal was to prove the technology works in the real world,

and how to apply it. So in 2017 I started ThinkingNode Life Science, to really dedicate the second generation [00:06:00] to this. Since then we've grown, but maybe later on I can talk about the company.

John Willis: We'll definitely get into that. You know, looking at your bio a little bit, I think people always enjoy the sort of annotated stories, but Bill Gates had called you at some point or something.

Guest: Yeah. So, you know, I started my first company in France, and as I mentioned, I didn't have money, I didn't have a computer. So I had to find a way first to get a computer. I started to pitch to different companies, and Sony at that time made workstations, and they gave me two workstations.

Then I had to pitch to some entrepreneurs to give me a desk. One of them gave me a desk. Then, since I didn't have money, I went to a bank. Imagine, you know, decades ago, going to a bank in France and explaining AI. For some [00:07:00] reason, maybe because the person had had enough of me and wanted me out of his bank, he gave me a credit line.

And then I realized that there was no way I could grow in this ecosystem, because people didn't take so much risk at that time. Things are changing right now in France, but at that time some people told me: yeah, your technology is really great, but in case it doesn't work, I get fired. But if I buy from a big company, I'm fine.

That's right: it's not just about money, it's about the whole mindset. So I decided to put all my savings into going to Silicon Valley for a conference where I could meet, you know, all the CEOs. And I met Bernard Vergnes. He was the president of Microsoft in Europe, already billions of dollars, and I just told him: Mr.

Vergnes, you have nothing to prove anymore at Microsoft, join me. We were three people; because, you know, I'm young, my [00:08:00] technology is the best, no-brainer. A few months later, he came to my office, which I found normal; I realize today how much he costs per hour, right? And then, one day, yeah, I came to my office; it was a Monday.

I had an assistant. She knew nothing about computers, and she's Portuguese. The reason I mention she's Portuguese is that she thought, okay, somebody called from Mr. Gates; it's Mr. Vergnes. So then I started to make the connection: no, it's Gates. And, she said: since I didn't know if you were available or not, I asked them to call back.

Okay, did you take the phone number? No. Oh, no. So hopefully they'd call back. In the end I had a one-on-one meeting with Gates for more than an hour, because at that time he was only interested in AI using probabilistic networks. But I told him [00:09:00] probabilistic networks were not going to fly, because of the computation and other things.

So after that, he wanted me to meet Myhrvold, the CTO of Microsoft at that time. But I asked people what they thought; you know, I'm nothing compared to Microsoft, right? People told me: this company, their time is not like your time, they have all the time in the world. The more you explain... well, at some point you just have to go.

So I decided to decline, because I had something particular in my mind. I didn't want a big corporation dictating to me where to go.

John Willis: Yeah, because you could have got buried. I agree. All right. So, one of the things: you know, we met at SCaLE, at DevOps Days LA and SCaLE, and I got stuck on a panel with you and Reza, and I felt like such an imposter. But I posted that video out there.

It was pretty good. And then you gave an hour presentation the next day. [00:10:00] And, you know, one of the things, I guess I wasn't fully in tune; I was trying to understand the last thing you said, and you gave the example, but we didn't have enough time, I had to head to the airport. The lady and the tiger example, I thought that was great, and I really want to explore how you were using it. Like earlier:

if you were listening, he almost put double quotes around knowledge versus data. So we're going to get into that pretty heavily. But you used the lady and the tiger, and so I went back and tried to Google it. Where is it? Is there any presentation about anybody thinking about it? And, you know, the closest I could come to was the trolley car problem, right?

Which, you know, is sort of overused. But could you just dumb it down and explain to me how you were using the lady and the tiger story as it applies to AI?

Guest: Yeah, so the lady or the tiger is from a book of puzzles. [00:11:00] I don't remember the name of the author, but if you Google 'lady and tiger book', you'll find a bunch of them.

Sure. The goal for me was to make a few different points. One is to show that it has nothing to do with data. Two, since it has nothing to do with data, there is no training; it's pure reasoning, logic. Now, it doesn't illustrate all of reasoning AI, because we reason in very different ways: we reason by logic, by analogy, by constraint, by probability, by case, and so on.

So this just illustrates one of those things. And the other thing I wanted to illustrate: a lot of the time people told me, you know what, we focus on data because knowledge is not complete; we as humans tried and were not able to solve it. I wanted to highlight how [00:12:00] limited we are in terms of processing knowledge: based on Miller, we can only process five to nine concepts at the same time.

So for the lady and the tiger, we have just two rooms. And don't tag me as sexist or whatever about the lady; it's just the puzzle from the book. So I'm going to state it, and I want to make some remarks about it. The problem is to infer where the lady is, in room 1 or in room 2, and you have some knowledge about it.

The knowledge is: if the lady is in room 1, then sign 1 is true; if the lady is in room 2, then sign 2 is false. It's the opposite for the tiger. And on top of each room you have sign 1 and sign 2, and both of them say exactly the same thing: both rooms contain ladies. Where's the [00:13:00] lady? So maybe, you know, progressively, I give you more knowledge about the problem.

Maybe you feel like, I have to accumulate all of that; this relates to the five to nine concepts at a time that we can hold. So there is a very big difference between the available knowledge in the world, the knowledge that you acquired by working a lot, and the knowledge that you can actually use to process the problem, because that is limited to five to nine concepts.

With the lady and the tiger, I wanted to illustrate that. And a lot of the time, yeah, people pick the wrong room. Now imagine nine rooms, right? And if we, you know, go after the cell, then it's not nine rooms, it's thousands and thousands of rooms. So the problem [00:14:00] today is that we dramatically underuse what is known.

It's great to continue to acquire more knowledge, and it's very important. But I think the most productive thing is to process what we know as a whole, rather than just what a human can process.
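
For readers who want to check the puzzle's logic, here is a minimal brute-force sketch of the version stated above; the encoding is not from the episode, just one reading of it (sign 1 true exactly when the lady is in room 1, sign 2 true exactly when the tiger is in room 2, both signs claiming "both rooms contain ladies").

```python
from itertools import permutations

# One room holds the lady, the other the tiger.
# Rule: sign 1 is true exactly when the lady is in room 1;
#       sign 2 is true exactly when the tiger is in room 2.
# Both signs carry the same statement: "both rooms contain ladies."
for room1, room2 in permutations(["lady", "tiger"]):
    statement = room1 == "lady" and room2 == "lady"  # what both signs claim
    sign1_true = room1 == "lady"                     # required truth of sign 1
    sign2_true = room2 == "tiger"                    # required truth of sign 2
    if statement == sign1_true and statement == sign2_true:
        print(f"Consistent: room 1 = {room1}, room 2 = {room2}")
# -> Consistent: room 1 = tiger, room 2 = lady
```

Only one arrangement is consistent, and it falls out of the stated knowledge alone: no data, no training, which is exactly the point being made.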

John Willis: Right. Yeah, that's really interesting. The other thing that really caught my attention in the earlier presentation you gave: you talked about neuro-symbolic networks. Up until that point, my only understanding of that was what DeepMind had published about it. Then I caught the end of your presentation and re-watched it, and there's a couple of things I'd like to unpack there.

One is, you know, what is a neuro-[00:15:00]symbolic network? And then, what DeepMind referred to, and what you referred to, is Daniel Kahneman's Thinking, Fast and Slow, so we can unpack that. Most of my listeners totally understand it; we use it in the DevOps conversation quite a bit, about cognition and bias. But then, there's a couple of times during this podcast where I'm going to say, you know, what's wrong with this?

And I think that's a loaded phrase, but for lack of a better one: what's wrong with the DeepMind version? I think that opens the door to where we're going to go with this conversation.

Guest: Yeah. Maybe instead of saying what's wrong, I would say what is incomplete.

Okay. Brilliant. Yep. This approach, right? So you were talking about neuro-symbolic networks. First of all, most neuro-[00:16:00]symbolic approaches are not even a network. What I mean by that is, people just connect two completely different systems together: you have the neural network, and then you have a symbolic AI.

And in the case of DeepMind, that's a rule-based system, which is, you know, a very primitive kind of reasoning AI. So there is no unification; it's two different systems that try to interact with each other, unlike the approach that I work on. And first of all, you know, a lot of the time people talk about neural networks, but they have nothing to do with real neurons.

When you see the implementation, it's far away from what we know today about how a neuron works; for example, there is no refractory period. So anyway, it's just a kind of metaphor. I prefer to use the term connectionism, [00:17:00] which people normally use to describe this; it's more exact than 'neural networks'.

So connectionism today is focused on pattern recognition. It's focused on data and how to extract patterns; from the patterns you can do prediction, statistical prediction. The symbolic side, which started at the same time as connectionism, was more focused on the reasoning part, the problem solving, and it works at the symbolic level.

So one says: okay, to reproduce intelligence, we have to reproduce the neuron. The other one: no, we just need to reproduce the logic level. For a long time these two schools were completely parallel, thinking that, yeah, a symbolic approach can recognize a cat, or that neural networks can solve any problem.

We have realized that's not so. And [00:18:00] Kahneman, whom you cited, by the way, he just passed away very recently, talks about System 1 and System 2. System 1 is when you think fast; with System 2, you think slow. There is a reason for that. Everything you are doing in a second is System 1.

It's pattern recognition. You just recognize something, right? Long ago, when you recognized a lion, you just ran; you didn't think. So this is pattern recognition. System 2 is more painful. It's the thinking, the reasoning, the what-ifs, and so on. It takes time, but it's why we humans are where we are today.

It's what science is about: science is about understanding causation, not just correlation. Correlation helps us, but the goal is causation. So the neuro-symbolic approach [00:19:00] is becoming more and more used today. For example, Yann LeCun: he was at the same university as me, graduated just five years before, and we have some professors in common.

John Willis: Yeah, yeah. Very good. Sorry. Yeah. Yann LeCun, right? Okay. 

Guest: Yeah. So he was focused on, and brilliantly achieved things in, machine learning. Today he himself says: yeah, we need something else. We need the internal representation of the knowledge, right? By the way, I'm trying to reconnect with him, but he's so famous today.

So the thing is, pattern recognition is one thing; reasoning is of another nature. But some people think that with pattern recognition, if we scale, scale, scale, we're going to get reasoning, and people confuse that with what large language models are [00:20:00] doing. It looks like they reason, but they still do not reason. Maybe we'll have more time to talk about that.

So neuro-symbolic AI is about seeing how we can have everything within the same system. As I mentioned, there are hybrid systems where you have two things; I worked, 30 years ago, on how to unify them, in what I call macro connectionist AI, MAI in abbreviation. The approach is: instead of using the neuron as the basic unit entity, I consider assemblies of neurons, because then I can represent something at a higher level of sophistication but still keep the connectionist benefits.

It has been applied in different industries: electromagnet design, missile defense, brewing beer, finance, [00:21:00] fraud detection, and so on. And today, in the second generation, it's used for generating human digital cell clones for drug R&D. So yeah, different levels of neuro-symbolic.

John Willis: Yeah, so let me sort of summarize what we just heard, right?

There have been these two tracks in AI for many years, right? There's the symbolic track and there's the neural network track. But in the end, even the neural network track is really not, as we tend to think, like a brain's neurons. It's what you're calling connectionism, right? It's, you know, mathematical weighting of these connections of things.

Massively deep math, basically. And DeepMind's approach was to take these two, and I think it was sort of brilliant to use Kahneman's work to say, hey, we can solve [00:22:00] better problems now than with just neural networks, neural networks being kind of System 1 thinking.

And then System 2: in their AlphaGeometry example, they figure out the symbols with the neural network, and then they do the hard math with the symbolic engine. But what you're saying is that that's all great, and it's given us a bunch of good stuff, but, sort of, where is it incomplete?

It's incomplete because it's still data processing, and it's not really focused on knowledge. So what you're talking about is creating sort of knowledge clusters of neurons, still connectionism, and stop me if I'm going off the rails here, and then we get the power of a higher abstraction over all of these things.

Is that reasonably accurate?

Guest: I would rephrase it a little bit. It's not that what DeepMind is doing is just about data; it's [00:23:00] data and all. But it's the system: you have one system processing knowledge, and then you have one system processing data, and they just talk to each other. I don't think we have that in our brain; everything is unified. So that's what I tried to get at with macro connectionist AI.

There are not two systems. But don't use it to recognize a cat, because today, with this technology, that would be too expensive. The system is capable of learning and of reasoning far beyond a rule-based system, because in macro connectionist AI, MAI, each node, called a thinking node, is a mini reasoning engine itself. So you can throw into it any kind of logic.

So you can throw into it any kind of logic. So I want to push the neural symbolic farther towards [00:24:00] unification and not just integration. Got it. That's 

John Willis: Got it. That's brilliant. The other thing you talk about a lot, and you hit on it a little bit already, I think is also fascinating, because most of the people listening to this come from DevOps or systems or, you know, organizational design, that kind of stuff.

I really love your distinction between correlation and causation. Now, like most professionals who do what I do, I know the adage: correlation is not causation. But I think you took that whole conversation somewhere really mind-blowing for me:

you know, that not all causation has correlation, and how both apply to the way you're thinking about AI. I thought I'd love you to unpack that.

Guest: Yeah, I think correlation and causation are such fundamental concepts [00:25:00] for really understanding what we are doing, and they are not easy to grasp.

So yeah, as you mentioned, statistics 101: everybody knows correlation is not causation. And the example I give is, you know, in 1949 there was a study showing a perfect correlation between ice cream sales and polio outbreaks, and some physicians even advised eating less ice cream. Well, it's just that during the summer you eat more ice cream, and during the summer people gather together more.

So the virus, you know, propagates more. So this is okay; most people know it, but most people don't apply it. We are easily, you know, swayed: oh, there is a study showing a correlation between this and that, and in our mind we don't hear correlation, we hear causation. But besides that: yes, some causations are not [00:26:00] correlated.

How is that possible? The example I take is: if you collect data about rain and how the crops are growing, then yeah, you can see that when it rains, the crops grow. So there is a correlation. And then if you have more data, you can see that rain makes the crops die, because of too much water.

So if you have the whole data set, you don't see any correlation, even though we know it's a causation, because rain provides water. So causation is very tricky. And as you can see, data is very dangerous. Some people think: the more data I have, the better. No, you have to know exactly what you are doing. And in particular, to know that, you have to have knowledge.

And in particular, to know that, you have to have knowledge. So, data alone is super, it's not even I [00:27:00] would say use sensors. So you need to have knowledge interpretation understand what is the data you have and then use knowledge. to challenge your data. And today, you know, we do learning and then we do prediction, right?

That's not the way we humans do it. We do learning and reasoning at the same time, because you challenge what you are learning right now, and by challenging it, once you find it consistent and coherent with what you already have, then you have understanding. So today, yeah, it's very far from how we humans work.
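
The rain-and-crops point is easy to verify numerically. In the sketch below (synthetic numbers, invented for illustration), crop yield depends causally but non-monotonically on rain: over the full range the Pearson correlation is near zero, while restricting to the low-rain regime recovers the strong positive correlation a partial data set would show.

```python
import numpy as np

rng = np.random.default_rng(0)
rain = rng.uniform(0, 10, 10_000)  # rainfall, arbitrary units
# Causal but non-monotonic: moderate rain helps, too much rain drowns the crop.
crop = -(rain - 5) ** 2 + 25 + rng.normal(0, 1, rain.size)

full = np.corrcoef(rain, crop)[0, 1]
low = rain < 5
partial = np.corrcoef(rain[low], crop[low])[0, 1]
print(f"correlation over full range: {full:+.3f}")   # ~ 0: causation, no correlation
print(f"correlation when rain < 5:   {partial:+.3f}") # strongly positive
```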

John Willis: So in a sense, ChatGPT is really sort of a data-prediction transaction, right? To your point, it's giving us the most probable thing. We know that at [00:28:00] the end of the day it is a statistical model, right? It's giving the most probable answer. Now, it's doing that with very complex math.

And a lot of really smart people have built systems that can get us there.

Guest: It's a masterpiece of engineering. But I would even challenge the term 'language'. It's not a language model; it's a large word system, because it doesn't understand the grammar, right? It just associates words together based on the statistics it has seen in all the texts.

It doesn't fundamentally understand what a subject is, what a verb is, what an adverb is. And this is the tricky thing. I'll add some precaution here, because I don't want people to misunderstand me: Alan Turing is out of this world [00:29:00] in terms of brain. But I think his test, and maybe it's not what he meant but the way we understand it, has completely misled AI, and it is a test that we have to completely rewrite.

Because what looks the same is not the same. Today, large language models seem to converse very well with humans, like ELIZA, right, which was done in '66 by Weizenbaum. And some people compared ELIZA with ChatGPT 3.5, and it did even better than ChatGPT 3.5 in terms of simulation. So I think it's very important for people to understand the low-level mechanism, because the output

looks great, but if you understand the underlying mechanism, then you know what's [00:30:00] going on; you understand the hallucinations and so on. So this is the danger we are in today: people judge AI achievement based on the Turing test, and they should not. It should go deeper. A lot of the time, people think AI is mathematics or algorithms.

How many times do people just focus on that? I talk with some experts in AI, and it's just: what is the math behind that? What is the algorithm? Yes, if you build a connectionist system, you have the output, and in between it's a black box, so you only have the algorithm to understand what's going on. But in knowledge and reasoning, no.

In knowledge reasoning, you have the output, and then when I ask you why, you're going to explain to me why, based on the conceptual level you have. It's not the same why [00:31:00] that you can ask of a large language model, because the large language model, when you ask why, still just makes associations of words.

The why that I'm talking about is really the cognitive mechanism that you are using to produce the outcome. This is the cognitive level that matters.
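
To make the "association of words" point concrete, here is a deliberately crude bigram toy; the corpus is invented, and a real transformer is vastly more sophisticated, but the basic move is the same: continue text with the statistically likely next word, with no notion of subject, verb, or meaning.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then generate text by
# always emitting the most likely continuation: pure word association.
corpus = ("the lady is in room two . the tiger is in room one . "
          "the lady is lovely . the tiger is dangerous .").split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

word = "the"
for _ in range(6):
    print(word, end=" ")
    word = follows[word].most_common(1)[0][0]  # most probable next word
print()
```

The output can read fluently while nothing underneath knows what a lady, a tiger, or a room is, which is the distinction between the two kinds of "why" drawn above.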

John Willis: I think that's the really interesting part, right? Let me go back to the lady and the tiger. You know, I've been reading a whole bunch of history-of-AI books, right?

For the book I'm hopefully going to finally create. And one of the things that comes up a couple of times is this dialogue between a man and a woman. The woman says, 'I'm leaving you,' and the man says, 'Who is he?' And, you know, to me, that's almost a lady and the tiger problem.

It's that whole narrative. In that story, [00:32:00] she wants to tell the prince which door to go to so that he doesn't go to the wrong door. The highest level of the story is: there's this prince whom the princess-to-be is falling in love with, but the king makes the prince guess which room.

If he picks the tiger, he dies. If he picks the other room, he gets some other bride. And so the poor princess has to determine: does she want her loved one to die, and send him to the tiger, or does she want him to be happily married to somebody she doesn't like, right? And between that and the dialogue between the man and the woman, therein lies the beauty and the complexity of this problem that we're, well, trying to solve.

Guest: You add another level here. Which is, for me: intelligence is dumb.

It's just [00:33:00] mechanics. What you are talking about is values, you know, sentiment. So I have started to try to model that. But what I mean is that intelligence, I always say, is a killer machine if it doesn't have the heart; 'I want that, I'm going to find a way to get it,' if it doesn't have ethics, if it doesn't have, you know, values and so on.

So this is another, completely higher level, which may or may not work, I don't know yet; I would have to spend quite a lot of time on it, based on the same mechanism. But it's not the cognitive level we are talking about anymore. It's the emotional level.

John Willis: A whole nother layer. There's one more thing I want to bring up, which we picked up on the panel [00:34:00] that we were on with Reza Rassool. I thought it was very interesting, and both of you had interesting comments on it. And then I want to spend a fair amount of time on the biological models you're building now, which are just absolutely fascinating.

So I want to save some time for you to tell people about that. But the thing that came up, if you remember: on the panel, Reza said that, you know, we've got this 7-trillion-dollar landmine of GPUs, and you chuckled. And everybody in the room knew what he was talking about, right? Sam Altman. But then he went into an idea I had not thought of at all.

And I thought it was really cool: this idea that we're practicing intellectual laziness. You know, he even quoted from my presentation how I tried to explain cosine similarity, and, a mathematician I'm not, but he sort of explained, and [00:35:00] now I'm totally paraphrasing, that we're taking the lazy route; there's so much more math we could apply, and maybe we don't have to have a 7-trillion-dollar landmine of GPUs, possibly even destroying the earth, you know, depleting all the energy.

And it seemed like you were right in sync with that conversation. And then, just to extend it, which is a perfect segue into what you're doing: I remember somebody asking you about some of your models, sort of trying to challenge you, and you're like, yeah, but it works, you know?

And the best answer was that you're able to create these things without, you know... Again, I'll let you finish this up with a complete explanation, but the way I took it was that you weren't having to build a 7-trillion-dollar landmine to figure out how cell biology works; there were non-intellectually-lazy ways to do it anyway.

So [00:36:00] starting from the intellectual laziness. 

Guest: Yeah, there are so many interesting points here. First of all, you know, nature is lazy, right? It comes from entropy, if I may say; we always go toward the least energy possible. So that's fine; now it depends where you want to go. The laziness here is, and sometimes people say, you know, when you have a hammer, everything looks like a nail, and this is what we are doing today.

We have something that works, and we just, you know, think: just do more of this and we will get it. But we all know that this generates so much energy consumption, besides the fact that the result will not be there, because it doesn't do reasoning the way we humans understand reasoning.

The other part of that: [00:37:00] if people wonder why we can do what we do on the cell with so few resources, it's because knowledge, for me, is the smartest and most powerful way to compress data. A piece of knowledge is the result of a lot of data that has already been processed, analyzed, curated, interpreted, and so on.

So why on earth would you want to go back and start from scratch each time? All this knowledge represents a massive amount of data that has already been processed. So if you work at the knowledge level, you need much less computation for the tasks you are after. I'm not talking about image recognition and so on;

I'm talking about the reasoning, the problem solving. We can save a lot of computational energy by working at the knowledge level. [00:38:00] And we know it's not perfect, it's not everything, but still: we used just this and we were able to go to the moon, we were able to develop antibiotics, and so on. So what if, with the same methodology, we used the whole thing?

That's why, in this case, we will not need even 1 trillion to achieve a number of things; it's not necessary. Other things? Yeah, maybe, but for some part of it, just don't take one tool and try to do everything with the same.

John Willis: No, and so, I mean, I think that plays into the thing that you're trying to follow.

And I think I saw this, you said this on the panel, or I was watching one of your videos or something: back to the whole point that in a lot of ways what we're building isn't the way a human thinks, even though we think it's doing that. [00:39:00] We accept knowledge abstractions; we literally spend, what, 12-plus years in school gathering knowledge, and we don't decompose all that data every day, we've learned enough things. So I think the point you're making is: why don't we start by accepting there are knowledge pockets, and build our macro connectionist networks that way, right?

And then that leads us perfectly into what I think is a great explanation of how you've applied this to human biology, and how you're making incredible progress. You know, when somebody says, well, how is that going to work? It works, you know. You're solving some really, really interesting problems and not spending a trillion dollars on GPUs.

Guest: Yeah, so this is very important.

Of course, [00:40:00] first of all, you know, it's reasoning based on knowledge, and that's why I'm more focused on what I call human intelligence than on 'artificial intelligence', because that term has now been used in so many ways. What I care about is how a system can reproduce, as much as it can, human intelligence.

So why not work on the knowledge right away? Well, the first thing is, you have to do knowledge acquisition, getting the knowledge into the system, and this has been a big challenge. There are two ways to do it. One is to be able to get it from the expert.

And the methodology I have with MAI, macro connectionist AI, is to represent the expertise directly in the network, without any translation in [00:41:00] between. What I mean is that in the AI age when people built expert systems, they had to interview the expert, understand what they were talking about, and transfer that into rules.

So you lost information, and anyway the rules cannot represent all the knowledge. The second way is to focus on domains where you already have the knowledge in digital form, so you can extract it. Now, let's suppose the knowledge acquisition is solved. As humans, we don't process data.

In my medicine studies, I didn't sit there observing a bunch of, you know, electronic records and then say: now I'm a physician. We learn first: the knowledge, the foundation, the reasoning model. We never have access to reality; we only access reality through our reasoning model, our mental model. [00:42:00]

So we build that, and it takes time, and we know it's not perfect. But it allows us to solve a bunch of problems, and it allows us to form new hypotheses to improve this model with data, with new knowledge, and so on. And AI today is absolutely not done that way. I try to scream that, yeah, there is another way.

Let's build the knowledge. We build a human cell reasoning foundation model first: we teach the system what the cell is, based on whatever we know today or whatever we can get, right? And then we improve it. We guide the system from the start based on this knowledge which, as I mentioned, is the best way to compress data.

Yeah.

John Willis: And you were saying, I think, how the way drug development works today, the modern way, compares to what you're finding with your model.

Guest: So today, you know, in [00:43:00] 2022 there was about 10-plus billion dollars invested in machine learning for drug discovery. And drug discovery is super important.

All these statistical models try to design the molecule that has to reach the target you are looking for. But the thing is, there is the whole biology that matters. You can have a perfect drug that fits the perfect target, but what matters is: what is the impact of this drug on the cell?

What is the cell's response? In any big industry, aviation, automobile, and so on, right, the first thing you do is build a digital model of the car or the airplane, so you can run simulations of it. We don't have that in the pharmaceutical industry. That's why it takes so much time, with only a 4 percent chance of success, and you spend about

2.6 billion dollars over those 10 years. So what [00:44:00] we do is take the gene expression data of the human cell. We inject it into our human cell reasoning foundation model, and then we let it differentiate. It's a kind of stem cell, if I may say, that gets differentiated into bone, muscle, cancer, sick, whatever, based on the gene expression.

So in one hour we can generate any human digital cell clone. Now we can have the sick cell, the healthy cell, the treated cell, the inflamed cell, and we can compare them to understand what's going on. This is what we do with reasoning, and then we improve the system, the network, every six months, based on additional knowledge or data we can get.

John Willis: Yeah. So, you know, it was a couple of years ago in Singapore: the Singapore air traffic control, whatever [00:45:00] the body is that controls it, built these digital twins, and they literally could simulate the whole thing: planes coming, diverting planes, using it for training.

Guest: This is the important point you just raised here, John. What we do is digital cell clones, not twins. And this is the difference. When you build a digital twin, most of the time it's a mathematical model; it takes a lot of work, where you redesign the car, the airplane, and so on. But the cell is another level of complexity.

We cannot afford that today. You know, some people try to build a mathematical model of the cell, but you cannot scale it, because it has to be exact in everything. So in this case it's called a clone, because we have this foundation model that just gets differentiated [00:46:00] based on the gene expression, the way that biology actually works.

So we are not building a twin, where we would have to simulate what a macrophage is, a T cell, a B cell, neurons, and so on. We just take the gene expression, and we let the gene expression, with the reasoning, differentiate the model.

John Willis: That's brilliant. So this is all great, right? And, you know, I think we're going to have more conversations about this in the future.

I look forward to it. You know, again, the kind of stuff you're doing is amazing: helping possibly cure cancer or detect cancer. I think you were saying there are some examples where some of your models are actually showing incredible results.

Guest: Yeah. In four months we identified a completely novel target for people who are resistant to certain drugs in [00:47:00] IBD, inflammatory bowel disease. It was reviewed by our customer's expert team six months after that, and we are starting to think about some IP related to the drug candidate and so on.

And there's a very simple reason why we cannot fail, if I may say. Most of the time when you go to a conference, a drug conference, people are going to explain to you what they call pathways: you know, we target this particular protein because it has an impact on these reactions. But that is a very limited view of what's going on in the cell.

The cell has a billion biochemical reactions per second. So we do exactly the same thing, because it works, but we do it at thousands and thousands of times the scale of what a human is doing. That's why there is no way we can fail: we use the same [00:48:00] methodology, the same knowledge, but at scale compared to what the scientist is doing today.

John Willis: And I guess, in a sense, if I'm going to summarize this conversation: you don't need all the data to solve a problem, right? You just need a reasonable semblance of knowledge, at least.

Guest: I would add, yeah, another very important point. As I mentioned earlier, data can be dangerous.

It can mislead you completely. With knowledge, at least you know what you have, right? You know it's not perfect, but you are able to go back and point out: oh, here I need more data, here I need to better understand the data. Otherwise you just have, you know, a prediction, and that's it, and now you have to trust a system that has an 80 [00:49:00] percent chance of being right.

Well, it depends on the case, right? In some cases, okay; but in some cases 80 percent is absolutely not acceptable. And even at 90 percent: as humans, right, we are not right all the time, but at least we can explain the rationale, and the rationale is important, because then we can see how to improve it and what kind of risk we really take,

rather than just the outcome.

John Willis: And to put another simplistic frame around it, which might be another opportunity for you: in a sense, that's how science works, right? Scientific thinking, the scientific method. It is all about the rationale, the human rationale. The greatest breakthroughs we've had in humankind have been because of the rational model.

Right. And so I think this...

Guest: It is the goal. It's the goal of science: to have knowledge to rationalize our world, so that we can compute it at the human level. [00:50:00] But yeah, why stop at the data level? Data is just a human view of noise, of signal, right? So maybe with another brain, our data would look completely different.

But we decided that this is data, so it's completely arbitrary.

John Willis: This is brilliant. So, you know, I'll definitely put links to your LinkedIn and any of your work. But if you wanted to put a message out to people, think about the people who listen to this: mostly people who run large infrastructure and operations and development and software at banks and insurance companies. What would you like to say to them, in terms of maybe learning more about what you're doing, or helping you? What's your close, if you will?

Guest: Yeah, thank you for asking this. It matters to me, because for a long time I have known I'm [00:51:00] mortal, but I only truly realized it not very long ago, and that's a big difference. The work I'm on is 30 years old, and I really want to spread it now. And yeah, I would be interested in being in contact with people from machine learning, to statistics, to AI, and of course the industry,

people to think together about how we can approach this, maybe in a different way, with less computational need. And yeah, I'm looking forward to developing the reasoning processing unit network, so I'm trying to be in contact with some chip makers as well, to rethink how to interconnect all the computational resources, rather than just having a network between machines, for example.

So, yeah, of course, ThinkingNode Life Science is [00:52:00] dedicated to drug R&D. We will have an event on June 2nd in San Diego, just before BIO International. So please, yeah, contact me on LinkedIn, Khai Minh Pham, and we'll be in touch.

John Willis: Well, I know we're going to stay in touch.

I've been fascinated; I felt like it was really awesome to meet you, and there's even more in this. You know, I can always tell when I'm having a great podcast: it's as if I'm listening to it in my car, I'm having so much fun. I know I say it all the time, but this was one of my favorite podcasts I've done in a while.

Guest: I really appreciate it, John, that you gave me a chance to express my passion.

John Willis: Yeah, I guess that's the thing that probably stands out more than anything. That's why I probably gravitated to you. It's not only that you're [00:53:00] intellectually brilliant; you know, I always say that the most interesting people are the people who don't have egos but should have egos, right?

You have no ego, and your passion is, you know, sort of bleeding through; you can see it. I saw it in both of your presentations, and I'm sure everybody saw it here too. So it was my pleasure, and I look forward to having more conversations with you.

Guest: Sure, I really appreciate it. Talk to you soon, John. Bye.