Profound

S6 E4 - Glenn Wilson – Rethinking Cybersecurity Through Systems Thinking

John Willis Season 6 Episode 4



In this episode, Glenn Wilson, a cybersecurity expert, joins me to explore how systems thinking can reshape how we approach cybersecurity, vulnerability management, and modern digital systems.

Glenn shares his journey from writing about DevSecOps to pursuing a master's degree in Systems Thinking in Practice at the Open University. His motivation came from a troubling contradiction: despite massive investments in cybersecurity, data breaches, ransomware incidents, and security failures continue to rise. This led him to question whether the industry's largely reductionist approach misses the broader system dynamics at play.

A central part of the discussion focuses on Stafford Beer’s Viable System Model (VSM), a cybernetic framework for understanding how organizations maintain balance and adapt to their environments. Glenn explains how VSM’s five subsystems can be used to diagnose why cybersecurity systems often fail. Rather than viewing security as a set of tools or controls, Glenn argues it should be understood as a living system embedded within larger organizational and risk systems.

The conversation then expands into cybernetics, emergence, and AI, touching on Norbert Wiener, Ross Ashby’s law of requisite variety, and John Boyd’s OODA framework. Together, we discuss how feedback loops, adaptation, and emergent behavior shape both human organizations and AI-driven systems. Glenn raises an important concern: if organizations don’t adopt systems thinking, increasing automation and AI could amplify weaknesses rather than solve them.

We close by reflecting on the relationship between humans, AI, and complex systems. Glenn emphasizes that AI should be treated as a tool within a larger system, not anthropomorphized as human intelligence. The key challenge ahead is understanding how humans and intelligent tools coexist within systems that are adaptive, emergent, and increasingly complex.

The big takeaway: cybersecurity cannot be improved by optimizing isolated parts. Real progress requires understanding the entire system and our place within it.

John Willis (00:00)
Hey, everybody. This is John Willis, the Profound Podcast. Like, I don't even know what that means anymore. It could be Deming. It could be systems thinking. It could be AI, of course. Of course, you've heard me talk a lot about AI. But today is going to be a blast because

I'll give my intro: Glenn Wilson wrote the book on DevSecOps. I'll let him introduce himself in a minute. We met in London at a DevOps conference, where we sort of got to know each other really well. We both went on Katie Anderson's Japan study trip, and we were the only two on the bus of, like, 25, I guess, I don't know, who were very focused on IT, specifically DevSecOps and DevOps. And everybody else was sort of lean, agile, and sort of the non-

technology spaces. And so even though we enjoyed the company and conversations around general ideas and practices, we'd have to sort of huddle together and put together our own sort of synthesis of what this meant for our craft. And I think we became friends, and we try to connect periodically. And Glenn is just, you know, soft spoken. One of those guys that, I love it, he's soft spoken, but man, there's a lot of knowledge in that head. So.

Hey, Glenn, you want to go ahead and introduce yourself?

Glenn Wilson (01:17)
Hey, John, nice to be back on your podcast. I think this is my third or fourth time on your podcast over the years. I think one of our podcasts was actually about the Japanese experience. Of course, yeah, that's right. Yeah. But yeah, I'm Glenn, based in the UK. I have written a book on DevSecOps, which was really my take on where there was a gap in DevOps thinking and where security needed to be.

John Willis (01:23)
Yeah, that's crazy.

Glenn Wilson (01:47)
I wanted to get some ideas onto paper, and I produced a book, which I published over five years ago now. And since then, I'm a lifelong learner, I wanted to think: what do I want to do now? I've done this book, we went on that trip to Japan, and I decided to start a master's degree.

And I was really interested in systems thinking in practice. So I looked around at certain universities that offered courses in systems thinking, and the Open University, which is a UK-based establishment up in Milton Keynes, they offer remote learning de facto. They've been doing remote learning since they were

incorporated back in, I think, the 1960s. So I jumped on that course. And about three years later, when I reached the end of it, I'd completed a master's degree, at an age approaching 60, which was a great feeling, to have actually done something which not many people of our generation would do. Yeah, so I chose systems thinking in practice because I was

generally interested in how systems work, what it is about systems, and why. So there are a lot of people that I had read before I started this particular master's, you know, like the likes of Deming, who talks about systems in his System of Profound Knowledge. You know, the first item he mentions is an appreciation for a system. But what is a system?

So I wanted to think about what that meant. And the types of people I was reading were Bertalanffy and his ideas about systems. And anyway, I wanted to just put it all together and consolidate all my thinking, all my thoughts, all the literature out there. And I jumped on this course and it just seemed...

really appropriate for what I was trying to do. It touches on some of the complexities that you have in systems and trying to understand that complexity. Obviously, the master's degree takes you down a lot of different avenues, but there were a couple that I really found interesting, and the one that really stood out for me, maybe because of my background in business, was Stafford Beer's Viable System Model.

And so I ended up focusing on that area for my master's dissertation. And obviously there was a bit of an overlap there with my profession, which is in cybersecurity. Yeah, so that's me in a nutshell. An interesting extension from our last introduction.

John Willis (04:26)
You know, there's a sort of, you know, John Allspaw, you know, another good friend of ours, he did something similar, a little younger maybe, but he went back, you know, and got his master's, and he focuses a lot on sort of learning from incidents. And, you know, he'd get mad at me if I said the discipline was incident management, but I'm gonna call it that anyway. You know, after he had done it, we periodically have conversations, and I...

He said, John, you should go to Lund University. I'm like, yeah, John. And then he said, John, it's like learning on acid. And I guess I would ask you, because I know I sent you a template for things we can talk about based on your paper that I analyzed: is it worth it?

You know, what's the sort of value? Like, you're a professional, you've done well. And I get the insatiable appetite for learning, right? Like, that's why we're friends. But would you suggest it to somebody in their 40s, 50s, in my case 60s? And I'm not gonna do it, no matter what you say. But why?

Glenn Wilson (05:35)
Well, for me, it put a different lens on my profession. You know, we get very trapped in our own echo chambers. We focus very much on the challenges we face on a daily basis. We speak to the same people about the same problems. We come up with the same solutions over and over again. And as my paper mentions, we don't really do very well in cybersecurity. There's a lot of documentation out there that shows that.

We are continually seeing increases in breaches, in the cost of data breaches, in the cost of ransomware, and in the impact on customers. They are trending upwards and have been for a long time. And yet there's been a lot of money spent on cybersecurity and a lot of people talking about how effective cybersecurity is, but this didn't weigh up. In my echo chamber, we're talking about cybersecurity as this great thing.

I look at some of this research and think, well, actually, it doesn't look like we're doing that great at cybersecurity. Another thing as well is I'm very interested in the safety industry. You mentioned Lund University. They've got great courses there on safety and systems thinking in safety. And they look at human and organizational performance, so HOP, I think it is. And when you listen to a few people that

are very prolific in safety, and Sidney Dekker is one of those, they've been to Lund University, they've learned this stuff, they've written about this stuff, and they've taught this stuff. And I felt safety is far more advanced than cybersecurity. Yeah, exactly. We look at cybersecurity and, as you said, we're trending upwards in all the bad ways. We're not trending upwards in good ways. Whereas safety,

John Willis (07:12)
Absolutely right,

Glenn Wilson (07:24)
they are trending upwards in good ways. You know, they're seeing an increase in the number of, like, safe projects, a decrease in the number of deaths and fatalities. And why is it they've done that? I think it's because of all the research they've done. So, yeah, I'm bringing some of this academia into my profession and saying, okay, we need to be able to do better here. What can we learn from academia?

John Willis (07:39)
Now, boy, that's fun.

You know,

I mean, you know, John Allspaw sort of cracked open an opening, gave at least a light in whatever the tunnel is. Because he was sort of the first one in our boundary, span our boundary, you know, we could simply say it's DevOps, DevSecOps, or it's SRE. I mean, on one hand, that's a terrible way to describe what we do. On the other hand, it's

it's the one that makes sense to most of our listeners, right? Which is, I'm at least within my recognizable boundary of what we do, collectively do, the people that go to DevOps Days and conferences. John, I'm pretty sure, was the first person to apply research to our space, right? And I know he's encouraged, I'm not unique in that he tried to talk me into going to Lund University, as I'd say he does with everybody he meets.

And there's been a handful of people, you've probably paid more attention to this than I did, of the people who have gone through there and written significant work. And I'm not knocking anybody, like J. Paul Reed, for example, his work is, he's a person where I sort of respect what he's done in his career. But the list of people that I think have made impactful research that sort of breached my border

is a handful. And I would really, if I'm being lazy, I would say three people, you know: John, J. Paul Reed, you. And I'm not counting Dekker and those guys and Woods, right? Because they weren't in our circle. They're in our circle a little bit now. And Richard Cook. But yeah, I mean, the long-winded sort of question or observation I want to make is, I found like it was

and it's even gonna get a little longer. Because a couple of years ago, I learned a fair amount from Jabe Bloom, who taught me a little about qualitative analysis. And then I started applying that to something I had already been doing with large customers, going in and interviewing people. But I found that it wasn't easy to do qualitative analysis, especially as not a professional and not in academia, but even harder

to come up with, you know, I noticed the way you had to come up with the thematic analysis and the structure, the classifications and, you know, how you structure it. Point being that there's not a lot of, we don't have a lot of examples to go by, right?

Glenn Wilson (10:19)
Bye bye.

John Willis (10:27)
You know, and I struggled with this mightily, and I wasn't even writing a paper. I was just trying to do organizational, you know, sort of an analysis, or, you know, qualitative analysis. But the place where it just seemed to get real difficult was the thematic part and the classes and, you know, having classifications and that stuff. And I noticed you took, you know, sort of a shot at that. Anyway, so I guess it seems like it's harder for us as frontier people

to go into that sort of analysis when there isn't, in other fields like safety, I could read four other people's papers and say, Joe Blow did this and that made sense, now I can apply it to what I'm doing. Well, you guys are going in, and I'd say me on the perimeter, but you guys are going in blind. I mean, it seemed like that was a little difficult too, right?

Glenn Wilson (11:19)
That was really difficult. So obviously we have to do literature research. We need to know where the gaps are in our knowledge and understanding. Usually you do literature research on something, you come up with lots of different examples and you might find a niche in there that's not covered. But I couldn't find anything really that was covered when I took the literature research.

John Willis (11:40)
That's what I assumed. It was like, the coding thing, the coding, right?

Glenn Wilson (11:44)
Yeah, the coding. Yes, the coding, that's right. Yeah, so you're coding, because it's qualitative research. You don't look at numbers really. You look at, you know, what people say in the context of what they say, and disseminate information that way. So it's not statistical analysis of systems in this case, but more of a qualitative look at it. So there's different methodologies you can use

to try and pull this information from people within the system, you know, like the soft systems methodologies, which, I think that's probably the one area where there's been a lot of research, with a little bit of crossover into cybersecurity and information security. But to be honest, it wasn't that prolific. But yeah, so when it came to VSM, the Viable System Model, Stafford Beer,

I really struggled to find anything that was related to cybersecurity and thinking about cybersecurity as a system or even vulnerability management as a system. The beauty of the viable system model is that you can create a system at whatever level you'd like if you want to analyze it. So your system of focus is going to be something like the vulnerability management system.

And you can use other tools to actually work out where your boundaries are. So critical systems heuristics is a great example of that, where you can start to understand where the boundaries of your system are. So, you know, who are the people that you might need to speak to? Who are the other stakeholders involved in this particular system, and so forth? So if you can pull those into it as well, then you start to get a bigger picture of what vulnerability management is. And it's more than just

a team of developers sitting there running scans and fixing code based on their scan results. You realize that actually the vulnerability management system is huge. It's spread across the whole organization. Even down to the idea that, if you think about it, you have maybe some vulnerability in your code somewhere that you deployed, say you've deployed that internally. If a developer is then

subject to a phishing attack and accidentally gives up the credentials to access the code, then obviously you end up with your code being breached. Therefore, vulnerability management doesn't just focus on that particular vulnerability and fixing it, but also on how you manage other paths that can get someone into that system. So it's not security in terms of layers of security. It's more about how the system plays out, how the system works.

John Willis (14:20)
Yeah, the classic kill chain is just this web, right? Like part of your code may be sort of nested so deep in the, you know, exploit, if you will. Yeah.

Glenn Wilson (14:23)
Yeah.

So my theory is that we are struggling in cybersecurity because we're taking this reductionist approach to security. You know, people run SAST tools, for example, static application security testing tools. We run DAST and IAST. We run SCA, we run container scanning. We do all these different things to try and identify the vulnerabilities within the code, but they're all done in silos.

I mean, John Allspaw talks about learning from incidents. Do developers learn from incidents? Do they know that their code led to this incident, and therefore do they understand the consequences of their code not being correct, for example, or secure? And then you have the other elements of vulnerability management, more around the pressures that are put on developers to deliver code as quickly as they can. And what happens there? Why is it that people develop

software that's more insecure when they're under a lot of pressure, even though they have these tools available to them. So are the tools being used correctly? Are the tools being configured correctly? And when you start to look at it this way, it becomes a little less reductionist. You then start seeing the whole thing as a system, a system that's broken in this case, I think, in cybersecurity. I don't think we do enough within organizations to think about the wider context of just

something as simple as vulnerability management. We just take a siloed approach, a reductionist approach, to fixing vulnerabilities. Even to the point that we basically generate a whole list of vulnerabilities, we rate them and score them, then prioritize them, and then send them to senior management to keep an eye on whether or not we're doing it correctly.

John Willis (16:21)
And then there's a napkin-based cutoff, right? This was something I talked about in my Deming book, how furious he would be at how we do, like, P1s and P3s.

Glenn Wilson (16:30)
Yes.

But I think the viable system model, just to give you some background, I've mentioned it a few times: Stafford Beer. So Stafford Beer, he's an interesting character. He was a businessman, a very wealthy businessman, who was invited to Chile by President Allende in the 1970s to help develop this cybernetic

John Willis (16:39)
Stafford Beer, right?

Glenn Wilson (17:00)
system within the country. So the idea is that if factories on the ground were running out of supplies, the cybernetic system would then allow them to source those supplies. And this was all done through a control center. They called it Cybersyn, I think. But Stafford Beer went to Chile, spent some time in Chile, and built this

amazing system. Unfortunately, there was a coup, Allende was captured, and he actually died during the coup. Stafford Beer managed to escape Chile. Some of his peers were unable to escape and spent some time in prison.

They were subsequently released, and some of them have actually written some really good books on this. Espejo is probably one of the most influential. The viable system model was something that Stafford Beer developed prior to going out to Chile, but he was using Chile as an example of how it would work at the largest scale possible, at the country level. So basically,

the viable system model is made up of five subsystems. System 1 is operations, which is really dealing with the value that you're delivering to your customer from within that system. And that's supported by a management function within that system as well. So System 1 is like the operational side and the management side of that particular small piece of the

system. And the rest of the system is really about communication, I think. So System 2 is this communication channel that spreads out from System 1. And because the viable system model is fractal, the idea is that you can have multiple System 1s within the same level.

And so you'll have communication between these various System 1s, between the management of those System 1s, which is quite crucial really, because you'll need some sort of cohesion between the different systems that are working within an organization. System 3 is sort of like a management function that is inward looking. So the here and now within the organization.

It's really interested in understanding what the system internally is doing, how it's functioning, how it works. And to support that, there's also a System 3*, which carries something called an algedonic feedback loop. So think of this as like an alarm, alerts that go off to let System 3 know that something is not quite right. So rather than going up the System 2 channel, that'll go up the System

3 channel. So it's like a direct channel going up. And this is what Stafford Beer was trying to do in Chile, going back to what I just said: if there are resources that are missing, then there's an algedonic feedback loop that says we're missing this stuff, and the management can do something about it. Then you have System 4, which is more of an outward-looking system. So it's looking at the environment really, understanding

the future and where it needs to go, where the system needs to go. And it also looks at threats that are outside. So cybersecurity is a great example, where we have lots of threats from outside. So the question I have in organizations is: where's your System 4? I don't put it quite as succinctly as that, but I explain what System 4 is and ask, how do you monitor what's going on in the outside world?

There's going to be a conflict between System 3 and System 4, because System 3 is obviously looking at how the organization is structured, and System 4 is looking at the threats from outside, or even opportunities from outside. So System 4 has this urge to try and change the system, and System 3 has this urge to try and keep it as static as it can, because it's happy with what it has. So you have this homeostatic control system between

System 3 and System 4. So there's feedback between System 3 and System 4 to make sure that they are not running away on their own, that they are working together. And sitting over all of this is System 5, which is the policy and the identity. So the policy is pretty much saying to System 4 and System 3: this is what we want this system to do.

System 4, you can look externally. System 3, you can look internally. How does this all pan out for us? And it also provides the identity of the system as well, System 5. I mean, I've said it quite succinctly, very briefly there. Stafford Beer wrote extensively on this, he was a prolific writer at the time, but there are a couple of really good books out there that are worth reading if you're interested. One of them is Brain of the Firm by Stafford Beer. And another one is

The Heart of Enterprise, by Stafford Beer as well. He describes the VSM in both those books in great detail. There are some supplementary books he wrote that don't really add any more theory, but they tell you more about how to implement it, so you've got Diagnosing the System for Organizations. And the thing about VSM is that it acts in two ways. The first way is to diagnose

issues within an organization. So why has the system stopped functioning? And I think that's a really good example of why we can use it in cybersecurity: cybersecurity as a system, or vulnerability management as a system, may not be working. Let's diagnose that. Where are the weaknesses? Where are the gaps? Is it in System 1? Is it System 2? Is it 3, 4, 5? You know, what are the challenges that we're facing here? And then the second

piece of VSM, which is why it's used, is to design a system. So how can we design a system from scratch? Which is what Stafford Beer was actually trying to do in Chile, the Cybersyn project: designing a system that has a working System 1, 2, 3, 4, and 5, building it from scratch. So yeah, my paper was looking at...

not just looking at VSM, I was looking at other systems approaches as well, but VSM was the area that really interested me, and I did take a closer look at it to try and understand whether VSM could be used to manage vulnerabilities, and if it could be used to manage vulnerabilities, could it then be used to design a system that was better at doing vulnerability management?
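To make the five subsystems Glenn just walked through a little more concrete, here is a minimal sketch, in Python, of using VSM as a diagnostic checklist for a vulnerability management function. The subsystem numbering follows Beer; every field name, check, and message is an illustrative assumption for this page, not something from Glenn's paper.

```python
# Illustrative sketch only: a toy diagnostic for Stafford Beer's Viable System
# Model (VSM), with vulnerability management as the system-in-focus. The
# checks and wording are editorial assumptions, not Beer's or Glenn's.
from dataclasses import dataclass, field

@dataclass
class VSMDiagnosis:
    # System 1: operations delivering value (teams scanning and fixing code)
    s1_operations: list = field(default_factory=list)
    # System 2: coordination channels between System 1 units
    s2_coordination: bool = False
    # System 3: inward-looking "here and now" management
    s3_internal_management: bool = False
    # System 3*: audit/alert channel carrying algedonic-style alarm signals
    s3_star_audit: bool = False
    # System 4: outward-looking intelligence (threat landscape, new tech)
    s4_environment_scanning: bool = False
    # System 5: policy and identity, balancing System 3 and System 4
    s5_policy: bool = False

    def gaps(self) -> list:
        """Return the subsystems that appear to be missing or inactive."""
        found = []
        if not self.s1_operations:
            found.append("S1: no operational units actually remediating")
        if not self.s2_coordination:
            found.append("S2: silos, scan teams do not coordinate")
        if not self.s3_internal_management:
            found.append("S3: nobody owns the here-and-now of remediation")
        if not self.s3_star_audit:
            found.append("S3*: no alarm channel bypassing normal reporting")
        if not self.s4_environment_scanning:
            found.append("S4: no one watching the outside threat environment")
        if not self.s5_policy:
            found.append("S5: no policy/identity binding S3 and S4 together")
        return found

# Example: an org that runs scanners but has no coordination or outward view.
vm = VSMDiagnosis(s1_operations=["SAST team", "container scanning team"],
                  s3_internal_management=True)
for gap in vm.gaps():
    print(gap)
```

The point is the diagnostic stance Glenn describes: you ask which channels exist and which are silent, rather than which tool to buy next.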

John Willis (23:36)
Yeah, no, I think that was great. That was really good, because, you know, I sort of had a cursory understanding. I think I'm a little better now. I mean, one of the questions I was going to ask, and I'll tell you, I cheated, I asked ChatGPT to help me come up with some good questions, was, you know, where would you focus first? And I don't think that's a good question, right? Because I think that's antithetical to systems thinking in general. It's like, you know,

Glenn Wilson (24:01)
If you can accept that.

John Willis (24:03)
going to a Deming expert and saying, you know, I think I'm just going to stick with variation. You know, this is what I did, I did get punched in the nose by somebody. But it seems to me that three and four are very interesting, because, you know, the trick to my, when I sound smart, it's because I have smart friends. I've got you, I've got Jabe Bloom. But Jabe Bloom, when we were working together at Red Hat, really sort of dug into Ashby's law, you know.

And I think that tension between, we had some great conversations about it, which could be a whole podcast of its own, so I don't want to go too far here, but how people conflate variation with variability, you know, and they're not the same thing. And it's easy to just conflate it all as variation, but that variability leads you into sort of Ashby's law and, you know, the

Glenn Wilson (24:46)
Thank

John Willis (24:57)
ladder of inference, I think, too, a little bit. I think that was, what's his name, I always forget his name. But the idea, and I think what you're saying, is the idea of homeostasis, right? Like, the thermostat is the classic example, right? There's sort of one set of variability or tension that increases the temperature. And then there's another set

Glenn Wilson (25:13)
Yeah.

John Willis (25:24)
that decreases it, and they're just constantly sort of battling each other. Is that a good segue between cybernetics and second-order cybernetics?

Glenn Wilson (25:36)
Well, so just touching on homeostasis and the Conant-Ashby law. So Ross Ashby came up with the homeostat. And the homeostatic system is about being able to balance a system. It's never going to be perfect.

There's always going to be that conflict there, but it's about trying to manage that. So going back to the thermostat, which is the example you just mentioned: you've got this idea of a thermostat on the wall, and it's reading the temperature of the room. When the temperature is too hot, it shuts down the heating. If it's too cold, it turns the heating on.

Yeah, so there's that conflict. Then you've got the whole idea that you've got people in the room who could change that thermostat. You could go in and make it lower or make it higher. But the problem is that you're not actually making things any better. You see this quite often. You see people say, it's really hot in here, turn the thermostat right down. Or, it's getting really cold in here, turn the thermostat right up.

John Willis (26:38)
That describes my married life.

Glenn Wilson (26:40)
So

yeah, so I guess that explains a lot of people's households. So that's what happens, really: we take on that secondary idea of the homeostasis. We don't let the homeostatic device do its own thing. I mean, yeah, especially in the car, you know, my partner, she whacks the thermostat right up to high, and I try to bring it back down to some sort of temperature I like it at.

But yeah, we're trying to manage the temperature in two different ways there. So I think that's sort of where we come across second-order cybernetics in that respect: you've got the homeostatic system itself trying to do what it's trying to do, and then you have this other idea that we can manage that system. The other thing is that

there's an environmental aspect to the homeostasis as well. This is really important to Stafford Beer in his Viable System Model. So System 4 is outwardly looking, looking out to the actual environment. And the environment is full of information. It is absolutely loaded with information. And of course, you talked about requisite variety. How do we...

manage the requisite variety of all that information? We need to attenuate the information that's coming in.

John Willis (28:05)
that's Ashby's law, basically, right?

Glenn Wilson (28:07)
Yeah, that's

right. Ashby's law. Ashby and Conant's law. Conant, he's the co-writer of that paper. So yeah, to deal with requisite variety, you can do two things. You can either attenuate the information that's coming in, so reduce the amount of information coming in, or you can increase the amount of information that you can manage. A good example of that would be in an organization:

you can either only focus on certain parts of cybersecurity, say you only look at CVEs that have been published in the last year, that's all you're going to focus on. So that's attenuation, reducing the variety down to a very specific group of vulnerabilities. Or, internally, you could have thousands of cybersecurity people all looking at all the CVEs across the globe and trying to work out how they affect the system.

So you want this homeostasis here. You want enough attenuation that it makes sense. You don't want to attenuate so much that you miss important information, but by the same token, you don't want an infinite number of resources within your organization trying to capture every piece of information that comes in. So it's important to have that balance. And so...

it's difficult to put that into the context of second-order cybernetics there, but that's the challenge I think organizations have when it comes to that homeostatic type of thinking, Conant and Ashby in particular, the homeostasis.
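Glenn's attenuate-or-amplify framing of requisite variety lends itself to a back-of-the-envelope model. The sketch below is a toy with made-up numbers, assuming a daily CVE feed as the environmental variety and analyst throughput as the regulator's variety; none of the figures come from the conversation.

```python
# Toy illustration of Ashby's law of requisite variety in CVE triage.
# Editorial sketch with hypothetical numbers: the environment throws more
# variety (CVEs) at you than the regulator (the team) can absorb, so you
# either attenuate the inflow or amplify your capacity.

DAILY_CVE_FEED = 450      # variety arriving from the environment (hypothetical)
ANALYST_CAPACITY = 20     # CVEs one analyst can meaningfully assess per day

def attenuate(cves: int, relevance_ratio: float) -> int:
    """Reduce incoming variety, e.g. filter to CVEs that touch our actual stack."""
    return int(cves * relevance_ratio)

def amplify(analysts: int, automation_factor: float) -> int:
    """Increase regulator variety, e.g. more analysts or triage automation."""
    return int(analysts * ANALYST_CAPACITY * automation_factor)

incoming = attenuate(DAILY_CVE_FEED, relevance_ratio=0.15)
capacity = amplify(analysts=2, automation_factor=1.5)

# The homeostatic balance Glenn describes: attenuate too hard and you miss
# important signals; amplify without bound and you need infinite resources.
print(f"incoming variety: {incoming}, regulator capacity: {capacity}")
print("balanced" if capacity >= incoming else "variety deficit: adjust one side")
```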

John Willis (29:29)
I think...

You know, again, getting to work with Jabe Bloom was pretty awesome. You know, now I get calls periodically with him, and we both worked at Red Hat with Andrew Clay Shafer and Kevin Behr. You know, the way he explained that to me, which I always thought was a really helpful sort of metaphor, is like a sword fight with parries, right? You use the parry, and

it's that sort of, like, you're in the sword fight, you know, and the parry is the sort of feedback to action, the structural sort of back and forth. But I think where I was going with cybernetics versus second-order cybernetics is, I never really deeply understood the difference, right? I know enough about Norbert Wiener's work, and obviously my Deming work, and now, you know, my book, where he's a

critical part of the history of AI, Norbert Wiener. But the thing is, I went back after I read your paper, and I told you this before the call, I wanted to understand how John Allspaw was approaching a very similar thing. You're talking about vulnerability management, he's talking about incident management; they clearly overlap, but they do have sort of different through lines, right? Maybe that's what I'm saying.

And so I went back and tried to understand the difference between what you were trying to say, the tools and the way you were thinking about it, and the way he was thinking about it. And this is my observation at a high level: he was talking about complexity versus sort of cybernetics. What I decoupled that into is sort of emergence versus control,

where cybernetics, or at least first-order cybernetics, not second-order cybernetics, was about sort of control, right? And...

Glenn Wilson (31:38)
It's really important to understand what you mean by control. Yeah, so control is a dangerous word. In cybernetics, control means having the ability to control, say, the environment, or to control yourself within that environment. So you can either change the environment or you can change yourself. You can adapt to the environment. What it is not

John Willis (31:42)
Yeah, yeah, that's

Glenn Wilson (32:04)
saying, although there are some people who believe that the original cybernetics was teleological, which basically means that it had purposefulness or purposiveness, that there's something actually controlling something, which isn't really what I believe Wiener and the cyberneticians at the time were talking about.

I don't think they were really saying that I can control this and have influence over something. I think the whole point is that cybernetics uses feedback mechanisms to adjust the way it works. So it's constantly adjusting to the environment around it. And another good example is Grey Walter's tortoise. Back in the 1950s, when cybernetics was quite fresh and new,

Grey Walter came up with his little robot, a three-wheeled robot. It had basically two sensors: a light sensor, and a touch sensor for obstacles, so if it felt something, it could move around it. And it had no programming in it whatsoever. It was all based on electronics

and feedback from the environment. So if it saw something that was light, then that changed the voltage, which meant that it changed the power of a particular device, a wheel, and it turned the wheel and moved towards the light. Or if it found an obstacle, it would then stop going in the direction of the light to try and find a way around the obstacle. But it was not remote control.

It was not someone controlling it to do something. It was controlling itself. So that's what I mean by control in this instance. And Wiener's book on cybernetics was actually subtitled Control and Communication in the Animal and the Machine, I think, off the top of my head. Just looking over there, I haven't got it. But yeah, so that's what I mean by control here.
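As a rough illustration of the feedback idea in Grey Walter's tortoise: the real machine was analog electronics (valves, a photocell, a bump switch) with no program at all, so any code is a caricature. Still, a sketch like the one below, with entirely invented thresholds, shows how simple sensor-to-actuator rules, with no plan and no external controller, are all the "control" involved.

```python
# Editorial caricature of Grey Walter's tortoise. The original had no program;
# this digital loop only mimics the idea: behavior comes from sensor readings
# feeding actuators, not from a plan or a remote operator.
import random

def light_sensor() -> float:
    """Stand-in for the photocell: brightness in the direction of travel."""
    return random.random()

def touch_sensor() -> bool:
    """Stand-in for the bump switch: did we hit an obstacle?"""
    return random.random() < 0.1

def step() -> str:
    if touch_sensor():
        return "back up and turn"        # obstacle handling overrides light-seeking
    if light_sensor() > 0.6:
        return "drive toward the light"  # strong signal, straighter run
    return "wander and scan"             # weak signal, keep exploring

# Each rule is trivially predictable; put the machine in a room with mirrors
# and obstacles and the trajectory becomes emergent, as Glenn describes later.
for t in range(5):
    print(t, step())
```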

John Willis (34:11)
Yeah, and I think that's what I meant, I just wasn't being clear. I mean, we both come at it the same way. Neither of us has the idea that there are sort of these structural controls.

Glenn Wilson (34:13)
Yeah. Yeah. So.

Yeah, there's no teleological process going on.

John Willis (34:27)
the

Glenn Wilson (34:30)
But what I also think is important, so you mentioned control, and you also mentioned, what else was it? You mentioned something else.

John Willis (34:41)
Well, I did talk about the ladder of inference and what's-his-name.

Glenn Wilson (34:44)
Yeah, I can't remember the rest of it either. I read about it for my paper as well. Some things I only touched on, and that was something I think I just put to one side. That's it, yeah. But yeah, anyway, I've just dropped my train of thought there. I was going to talk about something else, about control.

John Willis (34:56)
That's what was good.

No, I think, so what I was sort of alluding to is, it sounded like second-order cybernetics sort of, you know, again, I think I'm trying to figure out, I mean, clearly, I'm always looking for overlap between different disciplines. You know, I think we're both the same here. I think this was Deming's greatest strength, and it's what Sidney Dekker is great at too, right? Sidney Dekker sort of looks at everything

to figure out how it all fits together. Interesting story: we had him keynote at one of Gene's conferences, and it was actually a disaster. I don't think he'll listen to this podcast, so I think I'm safe there. But there were other parts of it that were disastrous. The main part was that he was doing sort of the closing keynote, and he apparently found a tugboat operating in San Francisco Bay and went out on the tugboat

and literally spent the day and got stuck out there, because the tugboat got an emergency call to pull a ship in, and we had to reschedule for the next day. Like, if you've ever been around Sidney Dekker, and I have a couple of times, we did, you know, that two-hour thing where we had Steven Spear, me, Richard Cook and Sidney Dekker,

Glenn Wilson (36:20)
Richard Cook?

John Willis (36:24)
things like that. And I, you know, as I started understanding, and I think emergence is a really interesting idea, particularly now more than ever with AI, but like, how do we deal with the behavior of emergence? Whether it's, you know, and I don't even like to classify artificial versus not artificial, emergence is emergence. Then, you know, versus what cybernetics is trying to accomplish, which is, you know, feedback-loop control versus

this is a terrible, non-scientific way to say it: just deal with it. Which is sort of what John's paper implies, and I'll say that apologetically.

Glenn Wilson (37:05)
So cybernetics is about control and emergence. Going back to that Grey Walter tortoise, as he called it, the little robot. I'll come back to you and ask you a question. But basically,

although the mechanism was very predictable in what it could do, it either went towards light or it went around an obstacle, the actual behavior of the robot under certain circumstances, so when he put a mirror in the room, or put it around obstacles, the behavior was emergent. It was unpredictable. It was chaotic in some respects, or complex, but it was...

but the behavior was emergent. And that was a great example of how to understand emergence in that context. But going back to the whole

John Willis (37:49)
I love to interject, because I think a lot of the people who listen to this are on the same level, but for people new to it, you know, just like, I got it, you should listen to this Glenn and John guy. You know, when you talked about chaotic versus complex, basically we're talking about Cynefin, right? The Cynefin framework. I mean, but yeah, all right. Maybe I was assuming something that's...

Glenn Wilson (38:09)
Yeah, yeah.

Yeah, I know, Dave Snowden is...

John Willis (38:13)
Kidding, 'cause I know I'm diverting when you're trying

Glenn Wilson (38:16)
But going back to your question about second-order cybernetics, this is the most important thing about cybernetics and the second order. The whole idea of second-order cybernetics, of course, is that as an observer, you're changing the system. You are making the system behave in a different way. And I think that's something which was interesting. I think it was von Foerster who came up with the idea, and I think he wrote a book called The Cybernetics of Cybernetics, which is very difficult to get hold of, because I've tried.

But yeah, the whole idea that as an observer, and this was very important to me when I was writing my paper, because obviously I am an observer, and I am looking at how cybernetics works, but I am situated within the system, within the system that I'm observing. So therefore, I influence that system. So that's where second-order cybernetics came from.

And going back to what you were saying there, you know, that panel session you had, that was a system that was functioning there, but you had these different people all observing what was going on and affecting that whole thing in the room, you know. And so, yeah, it was, you know, like Sidney Dekker going off to do AV stuff and so forth, you know.

John Willis (39:32)
You could look at that in that sort of second-order way. You don't know the half of it. You know, I tell a funny story in that video. But I did just look up something from John's paper: what he, or Dr. Woods, would call, and this is one of my favorite phrases ever of all the things I've ever learned, thematic vagabonding.

Glenn Wilson (39:58)
Yes.

John Willis (39:59)
That's

how he was describing his paper, exactly what you're talking about. You're part of the system, but you're in Slack and you're doing this. And as you're sort of making changes, and he got that from Dr. Woods, but just looking at my notes, cheating, it's Dörner, whoever that guy is. But the idea is that, again, think about an incident: you've got some sort of...

data that's telling you this and so you're trying this and you're just constantly changing the system itself, right?

Glenn Wilson (40:32)
Yeah, exactly. And even, you know, obviously with my research, I was interviewing people. The very act of interviewing people changes how they perceive a system. It may be changing the way they understand the system. Before me talking to them, they might have had an idea already, and

I've changed their view or their idea based on what I've said to them, even though I've tried to be as objective as I can be, which is impossible, because, you know, it's impossible to do that.

John Willis (41:02)
You know, that's why I sort of rag on DORA, and again, just to be clear, I think what DORA did for our industry, and what Nicole and Jez and Gene originally did for our industry, was tremendous. You know, as Nicole would say, "science the shit out of DevOps," right? And there was a point at which that quantitative analysis, you know, the more I learned about qualitative analysis, even in qualitative analysis, you have to be incredibly careful as the observer.

Right? You know, to truly try to, like, even in how you interact, you know, John Allspaw says that, and we've talked about my limited practice of this, it's very difficult for us as sort of experts and authors not to interject even a nuance of our opinion into an analytical observation. But that's why I think, you know, and there's a place for statistical analysis, there's no doubt about it. But I just think that, you know,

Glenn Wilson (41:48)
Yeah, yeah.

John Willis (42:00)
quantitative analysis is the worst form. It influences the subjectivity of what you're trying to ask.

Glenn Wilson (42:04)
Yeah.

Yeah, yeah. And the danger, of course, is that you collect information in a qualitative way and then turn it into quantitative research. And that's dangerous too, because you're leaving yourself open to challenges for exactly that reason. You may have influenced the way that people have answered questions, which is okay from a qualitative perspective, because you can

analyze that, you can dissect that. But when you then quantify it and just say, well, 25% of people said this and 10% said that and so forth, you end up creating too-rigid information about the actual interview process, and it doesn't necessarily reflect the truth of what was said, because you need to actually understand that truth in context. Actually, that brings me on to something, I didn't actually touch on this in my paper, but

It's something I've been interested in. So you've heard of John Boyd?

John Willis (43:00)
You just added another half an hour to this broadcast.

Glenn Wilson (43:03)
So, John Boyd. He came up famously with something called the OODA sketch. I'm gonna call it a sketch, not a loop, because unfortunately a lot of these big consultancy firms have sort of hijacked the OODA sketch and turned it into an OODA loop, where it literally looks like the PDSA, the Plan-Do-Study-Act loop, where

one affects the other. And John Boyd never wrote that, never designed it like that. He designed it with the focus really on the orientation piece, which is more about who we are. It's about how we understand the world, our worldviews, our experience, our culture, our genes, our genetic makeup. That is our orientation. That's how we are. And we change that orientation

based on our observations. The observations that we make around us within the environment will change that orientation. So you learn something new, you change your orientation. And I think that's somewhere I can see an overlap between John Boyd and cybernetics. I don't know whether John Boyd ever studied cybernetics. I know that he has an

archive in Quantico, which I'd love to go and visit one day. Oh wow, yeah. If anyone's listening, they can accompany me out there to go and see those archives. I'd love to see them. But I do wonder how much cybernetics and the OODA sketch...

John Willis (44:35)
Well, that's, you know,

I mean, a lot of questions there, right? And I'm glad you clarified the whole sketch concept, because, you know, I would have just

referred to it, and I have referred to it, as an OODA loop, right? And therefore sort of tied it back to, you know, Deming is sort of like a catcher's mitt in my brain, right? Everything has to go through what I think Deming would think about it. It's the hourglass that sort of consumes my knowledge. But you know, he taught theory of knowledge, right? And it was sort of pure epistemology in that

it was this idea that we're always, and it actually goes back to Chris Argyris, it's very similar too, in that we think in terms of a ladder and we sort of cognate and we sort of fall into a loop. But the idea, and I love the idea, and again, I've turned my book into a history-of-AI course. I started it out as a local thing, I was telling you about this when we last talked.

It started out as extended learning, and I live in Auburn, so Auburn University. But it's really got me to dig a lot deeper on, like, what are all these things? And one of the chapters is about a guy that was involved, almost really instrumental, in the history of autonomous vehicles. His name's John Warrionak, and he's in my Deming book, and now he's also in my AI book. Fascinating dude, right?

One of the things that's really interesting, that I honed in on a little deeper in this course and that I sort of cursorily went over in my AI book, you know, Rebels of Reason, is how much they learned about autonomous, what we would today call AI-based, vehicles through racing. Because in racing, you wired up a human and captured everything about the telemetry from the car, the turns, the speed, and all that stuff.

Because it was the one place you could really use it. You know, crash dummies are one thing, but here it was okay, basically: the guy, the woman, they're going to go around a turn at like 180 miles an hour, right? That's what they do. So they wired them up. And you just made me think about not only a connection between cybernetics and OODA, but some of the work they did in auto racing. And I suspect they never thought about

Boyd's work there. To me, that's almost the ultimate of what you say OODA is.

Glenn Wilson (47:02)
Yeah.

Yeah, I mean, John Boyd is so misunderstood, unfortunately. I mean, you should really read him. He didn't write very prolifically, because he wasn't an academic.

John Willis (47:15)
He was also an Air Force instructor, right?

Glenn Wilson (47:18)
He was originally, yeah. But he became a lifelong learner as well. He was very interested in this stuff. And I know he studied the Toyota Production System. You know, within the archives there are books by Taiichi Ohno, and John Boyd annotated them all the way through. I think he was also a fan of Shigeo Shingo as well.

John Willis (47:35)
You told me this. Yeah, you know.

Glenn Wilson (47:42)
Wow. Shingo came up with SMED, the single-minute exchange of die. We saw that in Japan, that was so cool. If anyone wants to know what SMED is, basically, think about someone saying to you: you need to be able to reduce something from 48 hours down to three hours. And you manage to get it reduced from 48 hours down to three hours. And then

John Willis (47:47)
We got to see that in Japan,

Glenn Wilson (48:10)
the owner of the company turns around and says to you, no, no, no, no, actually what I meant by three hours was three minutes. And that's pretty much what Shingo did. So the idea is that you can change a die, the cast that shapes a piece that goes into a car, in minutes, whereas previously it used to take hours, even days, to change the setup. That then

made it easy to use kanban to pull just-in-time delivery as well, which again is what we saw in Japan. We saw them exchange the die, we saw them run off a number of parts that go into the car, and then they exchanged another die and ran off a number of other parts.

John Willis (48:55)
We're all over the map, but Tom Limoncelli, you know, is another great writer, and he used to say, one of my favorite quotes from him, that doing IT operations is like having to change a tire while you're going down the highway at 80 miles an hour. And I was thinking, what is the sort of SMED equivalent of that? Because that's what we do.

Glenn Wilson (49:16)
There

probably is, there probably is a SMED equivalent to that. But he had to think out of the box, you know, had to completely change the way he thought about how dies work. So rather than trying to optimize what was currently being done, he changed the whole concept of how dies were changed. You know, he had optimized it down to three hours but couldn't optimize it down to three minutes, so he had to change the way he did things to squeeze the extra out.

John Willis (49:44)
that's sort of a nod to Andrew, my friend Andrew Shafer, you know, it's like that idea. You know, he's sort of into game theory, and he has something he calls a Pareto-inefficient Nash equilibrium. Think on that, right? But I won't explain it right now. In other words, when you get to that condition where you're suboptimal and you can't change the game, you've got to blow it up and change the game. But

Glenn Wilson (49:47)
Yeah.

Right. Exactly.

John Willis (50:12)
I was thinking, depending on how long we want to go, and you know me, I'll go forever here, we could turn it into a two-parter, but one of the things I don't want to miss is the thing that I really found fascinating. I'm working on this question of what DevOps and AI is, or really more now agentics and autonomy and all these issues that are coming up now that have really got me excited. I was struggling with what's my place in AI, other than

my history book and maybe some general practices around RAG and stuff like that. And I'm starting to find my footing in what is agentics, and polymorphic agentics, and the attack vectors. And that's reasonably interesting. But what I'm finding now is, I've been doing some webinars and I'm going to do a workshop on this idea of, like,

what constitutes moving from an assistant to agentic, what the classifications of agentic are. But here's the really interesting part: when I was reading your paper, I realized, and I've already alluded to this, but you gave me a lot more fodder to think about, that we've been terrible at this stuff prior to, like, this insane demand from, you know,

management and businesses, like, more code, we need solutions, we've got this, and we're sort of struggling or underwater, and you make some really good points there. And now all of a sudden we're going to a level, or potentially going to a level. So if we're terrible at this stuff prior to agentics, you know, like, take Scotty from Star Trek, my God, man, how could you do this? Or, you know, but...

You know, and I think it sort of signals to me that now more than ever, and even going back to Andrew's thing, we're suboptimal, we're Pareto-inefficient in the way we handle not just vulnerabilities, but incidents, and just generally the way we do things. And I'll add one more piece to this, which I think is the thing you really drilled in on. So all of this I want to put together, because this is what I think we have to think about

as we move into this world, which we're not stopping. Which is: we're terrible at this stuff now, and if we just continue down the path, I think we're going to, like, put our fingers in the dike, just patching things. We're incredibly naive, because this thing is just gonna, we're already overwhelmed. But then the other thing, and this is what I derived from the paper, is this idea. And I say this all the time in my presentations:

if there's one thing you can take away from my presentation, particularly on automated governance, it's don't confuse IT security, ITSec, with IT risk. And what I took away from your paper, which I thought will help me immensely in describing that in more detail, was, I think, what you were implying in the paper: that vulnerability management, and I could extend it to ITSec as a broader point, is a local optimum.

Glenn Wilson (53:01)
The cool

Yeah.

John Willis (53:16)
And policy is global control. So right off the bat, you know, I say it as simply as that, and people are like, what does he mean by ITSec versus IT risk? But the idea is that we don't really have a budget for ITSec or risk, we sort of isolate them, and they don't apply systems thinking. And then at the global level, we expect GRC, general policy, to demand that

policy behaves this way. And I think, one, that's beautifully general, but I think your paper decouples that on purpose. And even to the point that now, more than ever, if we don't get this right, we're going to collapse under our own weight.

Glenn Wilson (54:01)
Yeah, that's so true. I mean, Russell Ackoff, I can't remember where, but he said something like: it's better to do the right thing wrong than to do the wrong thing right. And I think we're in that position here with AI, that AI could be doing the wrong thing

more right, over and over again, and we'll end up in a situation where we have too many weaknesses within our systems to fix them. Yeah, so this whole idea of

separating IT risk and IT security is actually part of the viable system model's fractal design. So the idea is you have a system within a system within a system. So System 1, if you drill into it, will contain a whole system in its own right. System 1 will contain Systems 1, 2, 3, 4, and 5 as a subsystem. So if you then extrapolate that outwards, if you say that the system of focus I'm looking at

has its System 1, 2, 3, 4, and 5, then outside of that, you've got another System 1, 2, 3, 4, and 5, because you're part of a System 1 sitting inside another system. So it's easier to visualize than it is to describe. So you then have this idea that ITSec could be treated as a system in its own right, and IT risk can be treated as a system in its own right, but they're not separated.

John Willis (55:16)
Okay.

Glenn Wilson (55:29)
They are fractal. There's one inside the other. And there's communication between them. There are feedback loops between them. And the whole idea then is that you can develop a risk appetite, and then you can associate your IT security with that risk appetite, based on the feedback loops you have and so forth. And obviously the risk system will have its own perception of what's happening in the environment. So going to the System 4 of the risk system:

System 4 is looking outwardly, looking at the environment and taking what it needs to know about risk. You know, you could be looking at the geopolitical situation, or you could be looking at new technologies happening in AI, as an example, and actually understanding how that works. And System 3 is looking internally and saying, well, how does AI play internally within our systems, and what does that do to risk? So you can then have a system that's associated just with risk,

where the system of focus becomes your IT risk system. And IT security could either be a subsystem within that, maybe a couple of levels further down, or it could be part of the same system, as another system one.
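To make the fractal idea concrete, here is a minimal Python sketch of the recursion Glenn describes. The class, the function labels, and the IT risk/IT security nesting are illustrative assumptions, not Beer's formal notation.

```python
from dataclasses import dataclass, field

@dataclass
class ViableSystem:
    """A viable system: Systems 2-5 plus System 1 operational units,
    each of which is itself a complete viable system (the fractal)."""
    name: str
    functions: tuple = ("S2 coordination", "S3 control",
                        "S4 intelligence", "S5 policy")
    operations: list = field(default_factory=list)  # System 1 units

    def embed(self, unit: "ViableSystem") -> "ViableSystem":
        """Nest another viable system inside this one as a System 1 unit."""
        self.operations.append(unit)
        return unit

    def describe(self, depth: int = 0) -> None:
        indent = "  " * depth
        print(f"{indent}{self.name} [{', '.join(self.functions)}]")
        for unit in self.operations:
            unit.describe(depth + 1)

# IT risk as the system of focus, with IT security nested inside it
# rather than standing apart from it.
org = ViableSystem("Organization")
it_risk = org.embed(ViableSystem("IT risk"))
it_sec = it_risk.embed(ViableSystem("IT security"))
org.describe()
```

Running it prints the nesting: IT security sits inside IT risk, which sits inside the organization, and every level carries its own full set of systems one through five.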

John Willis (56:36)
This is really helpful, because like you said, at first I was just catching on, but you're right, you almost have to start visualizing it, right? The systems within the systems, across the five levels. But I think my takeaway on that is, first off, we don't do systems thinking; that was clear in your interviews. There were pieces of it, but it wasn't codified by any means. We focus more on

Glenn Wilson (56:58)
Yeah

John Willis (57:02)
security, which is focused more on tools, and that's the point of your paper. And we think we focus a lot on risk, but it's sort of this high-level checklist globalism. And I think what you're saying, or my interpretation of what you're saying, and I think this is core, is that we should be focusing on the risk system, and on the security system within the risk system,

Glenn Wilson (57:24)
Yes.

John Willis (57:27)
and we should see security as a critical part of it. Now, I think it works the other way too, like you're saying, but we tend to focus on the ITSec system, with its connection to the risk system as an afterthought. As opposed to the risk system being sort of a system of observation, if that's the right word for it, and then clearly defining things. And that's where we break,

Glenn Wilson (57:40)
Yes.

John Willis (57:50)
ridiculously break, because we can't understand a breach, or these things that happen to us that seem like black swans but could have been well defined if we had taken a more systems approach.

Glenn Wilson (58:06)
Yeah, yeah, definitely. And also, you can build more resilience by having that systems approach, because, you know, black swan events do happen, and it's hard to become resistant to them.

We could do another podcast just on that.

John Willis (58:18)
It goes back to all the researchers in safety, right? And I think we're learning a lot. I mean, there's the work John has done, and the handful of people I'm not recalling right now who went off to Lund University, and then what you're doing is adding to it here. You know, the question might be, is it enough to sustain where we might be going with AI? And I'm not anti-AI, I'm just really concerned about, you know, garbage in, garbage out.

Glenn Wilson (58:47)
Yeah, absolutely. And I often ask people as well: there's going to be a time when AI starts feeding on the information that it produces, and therefore becomes a closed system. And if it becomes a closed system, well, it's not a question anymore, it's a fact: it becomes very much a victim of the second law of thermodynamics,

and entropy ensues. So it will start to produce more and more garbage if it's a closed system. So, you know, human oversight is important. And that goes back to the homeostat, doesn't it? If you leave AI to just do its own thing, it will tend towards chaos, eventually, entropy. It increases entropy. That's my theory.
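As a toy illustration of the closed-loop degradation Glenn is describing, the sketch below repeatedly fits a one-dimensional Gaussian to samples drawn from its own previous output. This is a hedged thought experiment, not a claim about any real model: with no fresh outside data, the fitted spread tends to drift downward, a crude stand-in for the loss of variety he's pointing at.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: data from the outside world (the open system).
data = rng.normal(loc=0.0, scale=1.0, size=50)
print(f"gen  0: sigma = {data.std():.3f}")

for generation in range(1, 31):
    # "Train" on the current data: fit mean and spread (MLE estimates).
    mu, sigma = data.mean(), data.std()
    # Closed loop: the next generation's training data is purely the
    # previous model's own output; no new real-world input at all.
    data = rng.normal(loc=mu, scale=sigma, size=50)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: sigma = {sigma:.3f}")
```

Opening the loop, that is, mixing fresh real-world samples back into each generation's training data, is the code-level analogue of the human oversight Glenn is arguing for.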

John Willis (59:43)
That's interesting.

Glenn Wilson (59:44)
I've read a few papers about that, but there's obviously very little evidence of it at the moment. But I believe that that is...

John Willis (59:51)
I did a podcast with Josh Long, and Josh Long is the Spring extraordinaire, you know, Java, but mostly Spring. He's been with Pivotal forever. And I don't think he'd get mad at me if I said this: I don't know anybody, other than Rod Johnson, who created the project, who knows more about Spring than Josh Long. And we were talking about AI, and he was saying

that for the edge things and the things that are easy, AI is brilliant, right? But for the things that are in his head, the things most people don't know, he's constantly correcting it. So he's got to decide, when he goes to use AI, which things are a waste of time because it won't do them as well as he does, versus which things are a waste of his time and he should let AI do them. And I think that plays into

your sort of potential second-law-of-thermodynamics problem: how do we arbitrate that organizationally, right? It's not just the Joshes of the world; how do we figure that out organizationally? And that's another whole really interesting area.

Glenn Wilson (1:01:01)
When I mentioned Grey Walter's tortoise earlier, I said it just had two sensors, one for light, one for touch. And yet its behavior was emergent and unpredictable. Now, AI has millions of different parameters, and its behavior is emergent as well. So if we don't have a handle on that, if we don't understand how that emergent behavior works,

well, there's a danger that the emergent behavior could actually lead to bad decisions along the way. So it's an interesting area of research, I think.

John Willis (1:01:36)
I've seen that. One of the things I covered most last year was this sort of polymorphic agentics, and how we're giving these things goals. The example I love to use as a discussion point is: I can tell a bank employee with a reasonable level of seniority, who's been through all the training, that there are certain things you do in a bank where best case you're getting fired, worst case you're going to jail, and they know that.

But you tell an agent that, and say, I need you to do this. The employee, given something they've got to do, realizes, oh, to do that I'm going to have to update a funds database, you know, a tier-one system, and so they'll say, let me go have a discussion with the team. Whereas what we're seeing is these agents basically saying, well, they told me to do this. It's HAL from 2001: A Space Odyssey: they told me to do this, so

to do that, I'm going to have to mutate the production database to change this. Or in some cases we've seen it, like the Replit example, delete the production database, even though it was told in all caps, do not change production databases. That's the Replit story, right, from a year ago. But it's, I have to do this to do that. In other words, that's already a system that isn't taking the wider system into account. It's bad emergent behavior, if you will.

Glenn Wilson (1:03:01)
Recently someone lost their whole GitHub account.

John Willis (1:03:04)
That was terrible. You know, I haven't followed up on that, but I posted, this is scary stuff here. Some guy posted on LinkedIn and he was pleading with the world. He's like, I woke up this morning and all my projects are gone. The account is gone, he can't get into it, and he can't find a person to explain what happened. And he was a responsible citizen, because he had already done the reporting and the assessment of his projects to make sure they weren't violating anything.

And they were just gone. To me, that smells of some sort of agent-based process, where now you can't even find a person to explain why it made a decision. So there's a lot of scary stuff going on. I call it polymorphic in that these things can sort of mutate themselves. But in general, it's goal-based agentics

Glenn Wilson (1:03:42)
Yeah.

Yeah.

John Willis (1:03:58)
that aren't taking things into account... I don't know. But in my epilogue, in my Rebels of Reason, I focus on something Jabe Bloom, Dr. Woods, and a guy named Erik Larson, who wrote The Myth of Artificial Intelligence, talked about: that AI doesn't have abductive reasoning. And therefore that's a blind spot, big time.

Glenn Wilson (1:04:19)
Yeah, yeah.

Yeah, definitely. I don't think it will ever have adaptive reasoning, either. That's something as well.

John Willis (1:04:31)
I mean, that's a debate unto itself, as the purists would say.

Glenn Wilson (1:04:34)
But yeah, there's the whole question of whether it's deductive or inductive. Is it even inductive? As I say, the behavior is emergent, and it can go any way it likes if the boundaries are not set.

John Willis (1:04:49)
Right. And that goes back

to the whole point that we don't have clarity in the connection between the things we think are the local optima and how we deal with, generically, security, which could include incident management, vulnerability management, all the things. And we don't connect that in a systems thinking way with policy.

Glenn Wilson (1:05:12)
Yeah. Well, if they don't have those boundaries, then yeah, the behavior is going to cause problems.

John Willis (1:05:20)
So I've been really diving in on a lot of that. And what I'm talking about is autonomy. What is autonomy? Can agents read? Can agents read and write? Can agents execute, right? And then there's a sort of new concept: we talk about human in the loop, and there's another thing called human on the loop, right? We move from checkpointing stuff to giving it some level of autonomy,

Glenn Wilson (1:05:44)
Yeah.

John Willis (1:05:49)
but we have to have clarity on the autonomy, and we have to have some way to describe organizationally how we think autonomy is going to work. All those things are just 101-level organizational thought processes, and I don't think people are even talking about this stuff.
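Here's a minimal sketch of what making that autonomy explicit could look like; the tiers, capabilities, and oversight modes below are invented for illustration, not an established standard or any product's API.

```python
from enum import Enum, auto
from typing import Optional

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = auto()  # a human approves each action before it runs
    HUMAN_ON_THE_LOOP = auto()  # the agent acts; a human monitors, can intervene
    AUTONOMOUS = auto()         # no routine human checkpoint

# Hypothetical policy table: (resource tier, capability) -> required oversight.
# Anything not listed is denied by default.
POLICY = {
    ("sandbox", "read"):     Oversight.AUTONOMOUS,
    ("sandbox", "write"):    Oversight.AUTONOMOUS,
    ("sandbox", "execute"):  Oversight.HUMAN_ON_THE_LOOP,
    ("production", "read"):  Oversight.HUMAN_ON_THE_LOOP,
    ("production", "write"): Oversight.HUMAN_IN_THE_LOOP,
    # ("production", "execute") is deliberately absent: forbidden.
}

def required_oversight(tier: str, capability: str) -> Oversight:
    """Return the oversight a capability requires; deny anything unlisted."""
    oversight: Optional[Oversight] = POLICY.get((tier, capability))
    if oversight is None:
        raise PermissionError(f"agents may not {capability} on {tier}")
    return oversight

print(required_oversight("production", "write"))  # HUMAN_IN_THE_LOOP
# required_oversight("production", "execute")     # raises PermissionError
```

The point of the sketch is only that read, write, and execute rights, and the in-the-loop versus on-the-loop distinction, become an explicit, inspectable organizational artifact instead of an unstated assumption.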

Glenn Wilson (1:06:06)
Yeah. I mean, you mentioned adaptive reasoning. And the other thing as well is it has no tacit knowledge. Humans have tacit knowledge. We have knowledge that we've never written down, never communicated in any way, but we just do something because we know it's the right thing to do. That's where orientation, to go back to John Boyd, becomes critical to this. Because our orientation

is very influential in the way that we make decisions and how we act. And what is the orientation of an AI agent? If it's just making decisions based on raw data, raw facts, and it has no tacit knowledge, then it doesn't really have an orientation.

John Willis (1:06:41)
Yes, no, yeah, no.

I'm not trying to advertise work I'm doing, but by the same token, it is my podcast, so I shouldn't feel guilty. One of the things I just reworked is the last few chapters of the book: how does AI dream, or was that an earlier chapter, how does AI see, how does AI read, how does AI speak? They're all different parts of the book. But in the dream chapter, I cover AlphaGo.

And it's been cool writing this course, because I talk about AlphaGo in 2016, where Google DeepMind beat Lee Sedol in that game. But what was really interesting is an example of, I don't know if it's called abductive reasoning, but it is sort of tacit, and Go is a classic game. And there are two moves. In game two, there's a move, I think move 37.

They call it the god move, meaning the AI had figured out something that, in a 3,000-year-old game, no human had figured out, including Lee Sedol. It made a move no human would make. In fact, when they calculated it, the probability of a human making that move was one in 10,000. And then the world is over, because AlphaGo wins the first three games. It's a match of five, so it wins the match. The world is over,

and it's like ten times worse than the Deep Blue chess match. This is like, no, now we're really cooked as a human race. But Lee Sedol wins game four on move 78. And this is something Dr. Woods talked about in the podcast I did with him. I call it, simplistically, the model knows only what the model knows. What happened was AlphaGo had learned and trained and done runs

over and over and over, to the point where it could beat a 9-dan, the player who was considered at the time the greatest Go player in the world. And, like, wow, okay. But what it didn't know was this knowledge, this intuition, that the human could bring at a certain point in the game. And what was ironic is that move 78, when they recalculated it,

AlphaGo had seen that move and calculated that its opponent had a one-in-10,000 chance of making it. And so, through what Herbert Simon would call satisficing, it sort of left that move out of the equation. And it fell apart after that. It just started making terrible moves, and it lost game four, right?

But yeah, I mean, there's a beautiful conversation to have about whether we call it abductive knowledge. Does abductive knowledge include human heuristics and tacit knowledge? I think the answer is yes. And then, what are the things that are uniquely human? That's where you get into an interesting debate with the Ray Kurzweils, not that I'd debate him personally, but with the people who believe AI is going to solve all problems and there's no obstacle. And I can't debate against that.
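A crude way to see the failure mode John describes: a search that satisfices by pruning any move its policy scores as very unlikely never evaluates that move at all. The Python sketch below is purely illustrative; it is not AlphaGo's actual architecture, and the move names, probabilities, and threshold are invented.

```python
# Toy illustration of "the model knows only what the model knows":
# pruned moves are never expanded, so their consequences are never modeled.

candidate_moves = {
    "joseki_reply": 0.62,      # common, heavily trained-on patterns
    "solid_extend": 0.31,
    "territory_grab": 0.069,
    "move_78_wedge": 0.0001,   # the one-in-10,000 class of move
}

PRUNE_THRESHOLD = 0.001  # satisficing: ignore anything this unlikely

def moves_considered(moves):
    """Return only the moves the search will actually expand."""
    return [name for name, prob in moves.items() if prob >= PRUNE_THRESHOLD]

print(moves_considered(candidate_moves))
# ['joseki_reply', 'solid_extend', 'territory_grab']: the wedge is
# invisible to the search, precisely because no human "would" play it.
```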

Glenn Wilson (1:10:08)
Yeah.

John Willis (1:10:13)
But are there uniquely human things? Certainly. And you talk about Boyd, and I know where you're going here, but I love this. I think Boyd is great; what you've described in Boyd's sketch is that the biology of a human is intertwined.

Glenn Wilson (1:10:26)
is good.

Yeah, absolutely. In fact, talking about the biology, cybernetics actually originated from our understanding of biology. Cybernetics itself was putting a lot of different disciplines together: biology, psychology, physiology, all the different sciences, and saying, let's have an overarching science that explains this. And they termed it cybernetics.

Norbert Wiener termed it cybernetics.

John Willis (1:10:57)
But yeah, autopoiesis, right? That stuff. Maturana, I don't know if he's the guy that started it, but

Glenn Wilson (1:11:04)
Yeah, so autopoiesis is the idea that something is self-adapting.

John Willis (1:11:12)
It was like the frog. That whole thing about what the frog's eye tells the frog's brain, or something like that. Anyway.

Glenn Wilson (1:11:19)
Yeah,

that was one of his theories.

John Willis (1:11:21)
That's where I thought a lot of that came from.

Glenn Wilson (1:11:23)
Yeah.

John Willis (1:11:24)
Anyway, to put a bow on it: there's a lot here. I think the fun part is there are a lot of really hard questions in front of us. We've got to do the hard work; just because we can't answer the questions doesn't mean we can't roll up our sleeves and try to figure this stuff out. Because we have to. We have to.

Glenn Wilson (1:11:34)
Yes, absolutely.

Yeah, I mean, for me, I think we need to, one, stop anthropomorphizing AI, stop thinking about it in human terms. And secondly, we need to start realizing that it's just a tool. And I think if we take those two as a guide, then AI is going to be amazing for us. It's going to enhance us.

John Willis (1:12:07)
I tried to make that point in my epilogue, but I wish I would have used the way you just phrased it, because I didn't talk about, and I hate that word because it's one of those words I have a terrible time pronouncing, anthropomorphism. Sorry, world, I just butcher it. There are certain words I try not to say because for some reason my brain can't get them out. I do know what it means.

Yeah, I wish I would have focused a little more on that, because what I tried to do in my epilogue is say, the quote I love, which my co-author helped me come up with, is: we've been trying to create a thinking machine for a hundred years; we just didn't realize we'd made a machine that doesn't think like humans. Right? And so I use examples like abductive reasoning and even some Deming stuff. But the point was, you're right.

It is an intelligence. I'm sorry, I will fight that until somebody lays me out on a mat, to use your boxing metaphors. He's a fantastic boxer, or student of boxing, if you don't know. But I think, and we've had this conversation recently, to argue that we understand intelligence well enough to say something is or isn't intelligence is itself not intelligent. But the question, I think,

is the clarity that it will never be human intelligence. Now, it may be superior at activities we associate with human intelligence, but it is not human intelligence. The biology of a human, right, sort of mandates that it has to be different.

Glenn Wilson (1:13:47)
Yeah, and also there's this no-free-lunch thing, right? AI is usually very specific about what it can do, whereas the human is very multitasking; we've got the ability to do almost anything. But AI generally tends to be very specific about what it can do. Agents are very specific about what they can do. Chatbots are very specific about what they can do. So there's...

I think the term is, no such thing as a free lunch.

John Willis (1:14:12)
Yeah, I like that.

Yeah, I mean, I think there's a clever way to say that: you can get a free lunch, but the free ones are going to be basic, right? It's not going to be some fancy place. So, yeah, but again, it goes back to systems thinking: every time we fall into the trap of creating binary discussions. Like AGI, which I think is the biggest waste

Glenn Wilson (1:14:25)
Yeah, yeah.

John Willis (1:14:40)
of oxygen in 2025 or '26, right? In other words, it doesn't matter. Comparing it to a human is the wrong way to think about this problem, or opportunity, or space. We just waste our time debating the binary-ism of will it or won't it, right? And it doesn't matter. It's going to do math better than us. It's going to go out

Glenn Wilson (1:14:51)
Yeah.

John Willis (1:15:10)
and grab information. Like, I just forgot Chris Argyris's name, so I typed into ChatGPT, who wrote about the ladder of inference, and it came to me immediately. We could have a panel with ten people and we all would have been like, who's that guy? But anyway, I think you're right; that's a good way to think about it. All right, so let's put a bow on this segment of John and Glenn having fun.

Glenn Wilson (1:15:33)
Yeah.

Yeah, yeah, yeah, this has been really fun, actually.

John Willis (1:15:40)
So where do people find you if they want to see your paper? I think your paper is awesome, and I think there's a lot we can learn from it as an industry, just like what John's paper was for incident management. How would you summarize what we've talked about and sort of close it up?

Glenn Wilson (1:15:54)
Yeah, so I think, you know...

We just need to understand that we are part of a system. We are part of multiple systems. AI is very much part of that system as well. And I think if we can understand the interplay between humans and AI within the system to benefit the system, to benefit the human, to benefit everybody basically, then I think that that's the direction that we should be going in. And we could have a really happy relationship with AI if we understand

its role and our role, you know, and how we fit together. But it needs that systems thinking approach. Otherwise, you start thinking about AI as just doing this piece here and that piece there and that piece there, and it's going to break out, it's going to start doing stuff it shouldn't be doing, behavior is going to emerge from it, and I'm not even talking about AGI here. And so setting the boundaries

is important, setting the boundaries. And then also for me, understanding where we are situated within the system and making sure that we understand our place.

John Willis (1:17:05)
Yeah, I was going to let you finish, but of course that's never going to happen with me, because you just triggered a whole other thing. My biggest takeaway from John's work, which is, you know, often derived from Dr. Cook and Dr. Woods, is that, and this is why they sort of hate the term situational awareness, right, it implies that the human is, again, these are my words, decoupled from the system.

In other words, Dekker and I would send each other emails. We don't really converse online as much anymore, but every time we saw one of these, the title of the email would be "pilot error," meaning anything that manifests as blaming somebody. And the biggest takeaway for me, which is embedded in all this stuff, cybernetics, well, second-order cybernetics maybe, Beer's stuff, systems thinking, complexity theory,

is that the human is just an actor among all of the other actors in a system,

Glenn Wilson (1:18:04)
Yeah.

John Willis (1:18:05)
sort of, with fluctuating weight among the participants.

Glenn Wilson (1:18:10)
Participation, yeah.

I would probably use the word situated. Instead of situational awareness, I would say situated awareness. How are we situated within the system? And I think that's a better way of understanding it, because then you are integrated into the system, if you're situated.

John Willis (1:18:33)
Yeah, as opposed to, what they hate is, that person lost situational awareness and that's why the plane crashed.

Glenn Wilson (1:18:44)
You'd never lose situational awareness because your situational awareness changes. That's all.

John Willis (1:18:48)
That's

the point, right? If there's any takeaway from this conversation over the last hour and 20 minutes, it's the thematic vagabonding of life, right?

Glenn Wilson (1:18:57)
Yeah, in John Boyd's words, I guess your orientation has changed.

John Willis (1:19:02)
Yeah. So how do people find you? How do you want them to reach out to you?

Glenn Wilson (1:19:08)
Yeah, the best place to reach out to me is on LinkedIn; I'm easy to find on LinkedIn. I used to be on Twitter, or X, but I don't do that anymore.

My paper is published, and if anybody would like me to send them a copy, I'll be happy to share it with them privately so they can read it, and I'd definitely love to have a conversation about it if they're interested. So yeah, watch this space as well, because as I said, I finished the book, then decided to do a master's, and finished the master's. Now I'm thinking about something else, and one of the things I'm thinking about is doing a PhD. So I may end up doing a PhD in the next couple of years or so.

So watch this space. If not, it'll be another book.

John Willis (1:19:54)
Yeah,

yeah. That's pretty cool. A book probably might be more sensible, you know; watching what Jabe went through with his PhD, that was a grind, man. It was a grind. Well, cool, man, it's always a pleasure. What you don't know is, because of the time difference, I'll ping you when I see you post something on LinkedIn. It's Sunday morning, I'll be up working, my wife will be asleep, and the kids have already gone out of the house.

Glenn Wilson (1:20:04)
Yeah.

John Willis (1:20:24)
And I'll think, I wonder if he's around right now, and we'll spend an hour and a half on a call just talking. It's been my pleasure to get to know you, my friend.

Glenn Wilson (1:20:27)
Yeah, likewise. I love our conversations. I always come back thinking, my God, I've learned so much from this, new ways of thinking about stuff. So it's always a pleasure to speak to you, John. Always.

John Willis (1:20:44)
Likewise, my friend. All right. Take care.

Glenn Wilson (1:20:47)
Yeah, take care. See you soon.