Episode 04: Artificial Intelligence & Leadership
An interview with tech consultant and theorist Michael Hemenway
Show Notes
05:00 Description of machine learning and artificial intelligence.
08:21 Machine learning as a colleague rather than merely a tool.
15:54 Artificial intelligence requires that we learn new competencies.
24:53 Advice to leaders on how to navigate this new world of AI and machine learning.
Full Transcript
David: 00:01 Hi everybody. Welcome to the Groler Podcast; this is David Worley. You've probably heard a lot about artificial intelligence, and perhaps you wonder, what's the relevance for my leadership and organization? On this episode we tackle that question.
David: 00:36 On this episode of the Groler Podcast we welcome Dr. Michael Hemenway. Michael is the Chief Technology Officer at the Iliff School of Theology in Denver, and he also does a wide variety of independent consulting work. More importantly for this episode, Michael is one of the best thinkers I've ever encountered on the topic of how humans relate to machines. Thanks for being with us, Michael.
Michael: 01:00 My pleasure, David.
David: 01:01 Michael, there's a lot of talk these days about machines. It's usually posed in terms of artificial intelligence and machine learning. I've noticed that several recent issues of the Harvard Business Review and the MIT Sloan Management Review have devoted a lot of attention to this. I think one of the things that is interesting about this budding discussion is that very rarely have I seen people go back and discuss the question of how humans and machines interface. By way of introduction, would you tell us a little bit about yourself and about your own interests in this question?
Michael: 01:45 Sure. Yeah. Well, so I've been interested in interfaces for a very long time, beginning back in my undergraduate days in chemistry. And then as I developed a career in technology, interface always struck me as a very helpful way to talk about relationship. And since relationship is so constitutive of what I do as a technologist, both relationship with machines and with other technologists and other people in businesses, and since it's also just a big part of what we do in organizations, it's become a very useful framework for me to talk about and think about a lot of things. I think what has most recently piqued my curiosity around this notion of interface is that we've been talking about human-machine interface for a very long time. Very smart people have done a lot of work on this, in the direction of things like keyboards and mice and screens, all these sorts of tangible interfaces, as well as machine-to-machine interfaces, wires, cabling. And now we've got this movement over the last few years where people are building application programming interfaces, "APIs." These are another interesting kind of interface, one that developers can use with certain kinds of software to build other kinds of software. APIs are a part of what has made mobile development so accessible to so many people. So now there's just a lot of talk about interface, and that's become a very useful way for me to raise a question with the organizations I'm working with.
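As a concrete aside for listeners, here is a minimal sketch of what an API looks like from a developer's side: one program asking another program for structured data over a well-defined interface. GitHub's public API root is used here purely as a familiar example; it is not something discussed in the episode.

```python
# An API as an interface: this program asks another program for data over
# HTTP and receives JSON in an agreed-upon shape -- the interface contract.
import json
import urllib.request

with urllib.request.urlopen("https://api.github.com") as response:
    data = json.load(response)

# Other software can now be built on top of this interface.
print(data["current_user_url"])
```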
Michael: 03:45 What is this thing we're talking about when we talk about interface? In particular with machines, we often have an assumption that machines are tools, and often machines are used as tools. But as we move into this newer era of our relationships with machines, as machines develop a kind of intelligence, we're being asked to think about new kinds of interfaces with machines. These are certainly not the same kinds of interfaces we have with humans. But there seems to be some overlap in how we're thinking about the way we relate to machines. Popular culture suggests, in movies, right, that robots might behave like humans, so we need to learn how to relate to them sort of like we do humans. I'm not so sure that's the thing that's most important right now. But we are certainly moving into an era where we need to think carefully about how we interface with technologies, machines, and machine intelligence in particular, in different ways.
David: 04:49 So Michael, for listeners who are less familiar with the technical elements of machine learning or artificial intelligence, is there a way that you can describe just the fundamentals of how machine learning and artificial intelligence function?
Michael: 05:11 Well, that would be a subject for probably several podcasts. In broad swaths, machine learning is a form of artificial intelligence. Artificial intelligence is the larger category, basically denoting the ability for decisions to be made, or processes to be run toward a goal, in a way that is non-natural. A natural one would be a human or some sort of animal process that happens biologically; artificial intelligence is an intelligence that we have constructed. There are lots of layers to that. Machine learning, which for my work is the most interesting form of artificial intelligence, is really an area of research and development that looks at how machines can take data inputs and learn from those inputs to produce some sort of output. So it's really a pretty basic process. Instead of us hard coding all of the decision structures into a machine's quote unquote brain or processor, which is how a lot of programming has happened in the past, where we have to come up with all the possible situations that machine might encounter and tell it what to do in all those situations, with machine learning we give the machine a set of data and usually a few parameters, though not always. There's also supervised and unsupervised learning: supervised being where we give it more guidance, unsupervised being where we basically let the machine learn on its own and see what comes out. But again, the basic process is that the machine looks at the data, takes all this input, processes it a bunch of times, and then provides some sort of output. The output may be a prediction on the weather or a prediction on stock prices. It may be a classification of whether a movie is liked or not liked by reviewers. Facial recognition is a really common use of machine learning right now. With facial recognition, you put in a whole bunch of images, and the machine uses some algorithms, some processes and some models to learn from all those images. Then when it sees another image, it can say, oh, that's a cat, or that's a dog, or that's my friend David. So really it's about inputs, learning from those inputs, and then providing some outputs.
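To make the inputs-learning-outputs loop Michael describes concrete, here is a minimal sketch of supervised learning using the open-source scikit-learn library. The bundled digits dataset and the choice of logistic regression are illustrative assumptions, not anything specified in the episode.

```python
# Supervised learning in miniature: labeled data in, a learned decision
# structure, predictions out. Dataset and model are illustrative choices.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Input: labeled examples (images of handwritten digits plus the digit shown).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learning: rather than hand-coding a rule for every situation, the model
# infers its decision structure from the training examples.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

# Output: predictions on images the machine has never seen before.
print("accuracy on unseen images:", model.score(X_test, y_test))
```

Swapping the dataset for faces, movie reviews, or stock prices changes the inputs and outputs, but the loop itself stays the same.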
David: 08:12 So that's really interesting. Back to your prior comment about how people tend to look at machines as tools: would it be fair then to think about machine learning as moving machines, specifically computational machines, computers and information technology devices, from being merely tools to actually being colleagues in the work that we're doing?
Michael: 08:41 Yeah, that's a great question, David. My first response would be that I think we have overestimated the tool-ness of computational technologies for a long time. So I'm not sure that has been the best way for us to think about machines, or at least it's not the only way to think about machines and how we relate to them. But certainly now machines are developing a kind of intelligence. When I talk to organizations or my own employees or people learning about machine learning, it's very interesting to see people's response when you talk about machines having intelligence. Our notion of humanness, and particularly our notion of humans being superior to other things, tends to be related to the way we conceptualize human intelligence. We think we're smarter than all the other things. So as machines get smarter, they develop some forms of intelligence. I don't think they have developed general intelligence. I don't think they've developed all the kinds of intelligence that humans have. But certainly with computational intelligence, they're actually superhuman. And that makes humans nervous, because we have positioned ourselves in our ecosystem as the most intelligent, and that is how we have evolved past other things. So yes, as machines develop some kinds of intelligences, particularly superhuman intelligences, I think it is important for us to think through: how will I relate to machines as partners in my work? I'm going to learn from this machine things that I may not have been able to learn on my own. And as this machine contributes its intelligence to our team, how do we embrace that intelligence, learn from it, and cultivate it without feeling threatened? Or at least without feeling more threatened than we might feel from some other smart intelligence in the room. So yeah, I think it's important for us to start considering the notion of machines as partners in our work, as colleagues.
David: 11:10 That's a really insightful answer there, Michael. The thing that occurs to me is that in the literature about leadership, management, and organizational development, there's a lot of work that has been done on things like groupthink, or on how you get a diverse set of perspectives in a room to inform decision making. And my short answer is that in any typical group of people, particularly the longer they work together, groupthink and predictable responses become more normal. So what I'm hearing you say about machine learning, and about the potential for machines to be colleagues, is that perhaps they could be the kind of colleague that actually helps break groupthink, expands the discussion, and identifies potential issues that none of us are thinking about. But all of that requires our ability to learn from them, and, I think, to neither overvalue nor fear machine learning. So I'm just curious what advice you have for people psychologically about how they relate to machines. What would you say to someone who is beginning to use artificial intelligence and machine learning, but either has a lot of anxiety about it or is way overconfident that it's going to be, you know, a completely new world? What would you say to them about how to approach machine learning?
Michael: 13:06 Yeah, I think you raise a good point early on there, David, that in one sense we have been negotiating the value of difference of perspective in teams for a long, long time. We've been trying to learn to create spaces in teams for difference: for different perspectives, different approaches, different methodologies, and different views, all in the direction of making a team better at what they're trying to do and creating new possibilities and innovation. So in one sense, I think a way for teams and organizations to frame this move toward embracing machines as partners is to consider machines as another form of difference in their team, and then to create enough space through humility and honesty. Those would probably be, as you've sort of named it, the two principles I would approach with: well, I don't know all the things, and the machine probably doesn't know all the things either, so how do we find a way to work together toward whatever aim we're after? And again, machines are really good right now at very explicit tasks. If listeners are hearing this a year from now, or five years from now, it may be a very different landscape. But at this point, machinic intelligence is very effective at defined, discrete tasks in a single domain.
David: 15:06 Like recognizing a face.
Michael: 15:08 Yes. Like recognizing a face, like identifying particular kinds of patterns in speech, things like that. It's very good at those tasks in a defined domain. Knowing that, knowing the talents and intelligences and the limits that machines bring to your team, can help a team embrace that partner, that part of the team, in ways that will be effective. So in one sense, I think it's not all that different from how we've talked about embracing other kinds of difference in a team. But the other piece I might add is that we have some competencies to learn in order to do this well. I mean, we've got years and years and years of learning how to relate to humans, and we're still not very good at that all the time. So I think we have some work to do to unsettle the notion of "well, I want this machine to just be an efficient tool to do this thing." We have some work to do to ask, "how do I learn to communicate with this machine so that I can learn from it?" How do we change our development process, our product lifecycle, or just our team lifecycle to give us a chance to either slow down at some points and reflect on what we might be learning, or to listen and learn "machinic" language, whatever that might look like in your situation? So I think we have some skills to learn, but I don't think we're lost without any previous experience of how to negotiate these sorts of different relationships in teams.
David: 17:06 The competency piece is, I think, really critically important for folks to think about. I was just jotting some notes here as you were talking; these are the things I think were implied by your comments. Number one, competency in learning from machines: a sense of individual and group humility toward the fact that the machine has something to teach us, not just information to dispense to us, but actual learning for us to undertake. A second one is advisement, which I think is a little bit different than simple learning. I think advisement has to do both with the advice that's coming to us from an individual or a machine, and also with our orientation toward that advisor. If Barack Obama and you were talking face to face and you asked him a question and he gave you a very specific answer, I think most people, given the stature of Barack Obama, would take that advice very, very seriously. Whereas with someone you might not respect, you might kind of blow that off even if it's outstanding advice. And so it's about having proper respect for the machine. But then you said something that I'd like you to engage more, and that is this notion of "machinic" language. I think a lot of people, when they hear that, are going to go immediately to something like a programming language, and perhaps you mean that in part, but can you say more about what you mean by developing the competency of machinic language?
Michael: 18:50 Yeah, that's a good question. I think what I have in mind is not actually programming per se, although I can see how that would be a natural conclusion. Part of the point I'm trying to make there is, again, a bit of a translation from the work we already need to do human to human: not assuming that a member of my team entirely shares the same language set that I share. Certainly most teams speak the same linguistic language, and if not, they have people who can translate, that sort of thing. But there are all kinds of different cultural dispositions at work in team members that can make the same words mean very different things. And so it's work for team members or organizations to learn those sorts of differences, whether in cultural dispositions or in what I would broadly call spoken language. I think machines think differently than humans do. Machines process data differently. They reach conclusions differently. They have different methodologies than we do. Many machine learning projects are built on neural networks, and people have probably heard the term neural networks floating around out there. Yes, there's a way in which neural networks were imagined as analogous to the human brain, but they don't function in the same way. There are lots of technical differences that we don't need to go into; it's enough to say that machines don't think like humans, at least methodologically. And so there's a lot of work right now being invested in teaching machines how to speak human better: how to speak my language, how to respond to me, how to communicate in ways that let human agents or human users get what they need.
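As a rough picture of why the brain analogy only goes so far, here is a minimal sketch of a single artificial "neuron": a weighted sum of inputs pushed through a nonlinearity. The input values and weights below are made up purely for illustration; a real network stacks and trains thousands or millions of these units.

```python
# One artificial "neuron" is just arithmetic: a weighted sum squashed
# into the range (0, 1) by a sigmoid. Nothing biological is happening.
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # input signals (illustrative values)
w = np.array([0.4, 0.1, -0.6])   # "learned" weights (made up here)
print(neuron(x, w, bias=0.2))    # a number between 0 and 1, not a thought
```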
Michael: 21:07 But I think there's a subtler thing that could be really useful for organizations, and just for people, as machines become more a part of our lives and our teams: how do I learn how machines think and communicate? What are the tools I need to learn the dispositions of a machine, so that I may understand a little bit better, "oh, so when a machine tells me this, it's in this sort of frame of reference"? You know, a machine may offer a prediction and it may not have any feelings about that prediction. It may, but it may not. Whereas a human might offer me a prediction that is loaded with a sense of desire for something to happen. That's just a small example. But what I'm trying to communicate to myself and to the teams that I work with is: what's our role in learning the way in which machines think and communicate? Some of that does mean learning a bit of the technology that's at work in machine learning platforms or products. I don't think we can just show up and take everything at face value from the end product of a machine learning platform or a machine that's a part of our team. I think it's worth learning some of what's going on underneath. I don't think people need to be mathematicians or even programmers, but I do think developing a broader literacy in organizations around what is going on in machine learning could be a very useful way for us to develop the kinds of humility and honesty needed to relate to machines in an effective way.
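One small, concrete version of "looking underneath": many classifiers report not only an answer but the probability behind it, which is part of the frame of reference Michael describes. This sketch reuses the illustrative scikit-learn digits setup from the earlier example; it is an assumption for demonstration, not a platform discussed in the episode.

```python
# A machine's "answer" is often just the largest entry in a probability
# vector -- a prediction with a confidence attached, and no feelings.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
model = LogisticRegression(max_iter=2000).fit(X[:-1], y[:-1])  # hold one image back

probs = model.predict_proba(X[-1:])[0]  # the held-back image
print("prediction:", probs.argmax())
print("probability behind it:", round(probs.max(), 3))
```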
David: 23:08 That's a profound and fantastic point, Michael. You know, back to the Harvard Business Review and the MIT Sloan Management Review: it's not uncommon to be reading an issue and find one article about something like cultural competency, or dealing well with diversity in your organization, and clearly those articles are implying human-to-human competency, human-to-human diversity. And then the very next article will be something like, "Hey, What's Your AI Strategy?" It's interesting, and we've covered a lot of ground in this interview, because the second of those, "What's Your AI Strategy," implies AI as a tool, a very wooden, mechanical sense of what that technology is and does. The first article addresses, you know, really one of the core questions facing the world. What I'm hearing you say is that those two perspectives are not independent. They are actually related, both in kind and in terms of the project itself. So thank you; I think any listener to this particular interview is going to come away with a really new way of thinking about how we interface with machines.
David: 24:40 By way of closing, let's imagine there's an executive or an organizational leader for whom engaging machine learning is important. You're in an elevator together and you have 30 seconds to give this person advice, and they ask you, "Michael, what should I be doing to move toward this new world?" What would you say?
Michael: 25:03 Well, you know, I'm not very good with 30 seconds. I think what I would say is, build a team that has both the technical capacity and the psychological or cultural capacity to engage difference at every stage of the development lifecycle. From the very beginning of whatever sort of product you're developing or project you're embarking on, begin to think of the AI or machine learning component of your team as just that, a part of your team. And if you begin from there, rather than just thinking of it as a tool for efficiency, I think you'll have a better chance for success at each phase of that product or project lifecycle.
David: 26:07 That's great advice. Thank you. That is very actionable. I think it would be easy to overlook that perspective of starting with AI as a colleague, as a partner in what you're doing. Hey Michael, thank you so much for being with us today. It's been a real pleasure to talk with you and I know I learned a ton. I hope listeners did as well.
Michael: 26:30 It was my pleasure, David. It's always fun to talk.
David: 26:34 That concludes our interview with Dr. Michael Hemenway. Thank you so much, Michael, for bringing your brilliance and insight on this important topic. If you enjoyed this episode, please visit us at the Groler website. That's G R O L E R dot com. There you will find a full transcript of the interview, show notes, and a few other goodies that you may enjoy. You may also want to consider subscribing. You can do so through the iTunes store. And feel free to reach out to us on the website to ask questions or to suggest further topics for future shows. Groler exists to help you continue to learn and grow as a leader. So keep learning, keep growing, keep leading. Until next time, I'm David Worley.