Good afternoon, everybody. My name is Sean McClellan, and I am a Senior Training Specialist for DonorPerfect. Welcome to Tim Lockley's session, "Agency in the Age of Agents: Navigating the Human Stack When AI Takes the Wheel." A little bit about Tim: he has over 20 years of experience in nonprofits and tech. As the founder of Now It Matters and The Human Stack, Tim is passionate about helping nonprofits succeed with technology. A former Salesforce MVP, a Salesforce.org quality partner, and a Microsoft partner, he's seen how system deficiencies can hinder organizations. In 2021, he developed Digital Guidance, a methodology designed to move nonprofits from tech-resistant to tech-resilient. Tim's commitment to innovative solutions and thought leadership continues to inspire nonprofits to embrace human-centered digital strategies that streamline operations, foster alliances, and amplify their impact.

Just a few quick housekeeping items before I turn it over. You can download today's presentation from the Details section to the right of the presenter's window. We ask that you please submit questions in the Q&A tab so we can address them during the session. All sessions are being recorded and will be available on the DonorPerfect website after the conference. Tim, take it away.
Anyway, all that is to say that even as good as I am at AI, humans matter more than AI. Humans matter more than everything, and especially AI. And I want to talk about how they matter by comparing it to a trampoline. You have to know how to set up a trampoline for this to work, so if you've never done that before, there are three parts to it. First, you've got to put a frame together; it's made out of metal. Then you've got to stretch out the part that you jump on, called the mat, and then you put all the rest of the springs in. That's how you have a trampoline that's ready to use.

The way that compares to AI is this: using AI is like a trampoline. There is the frame, which is the six areas of AI; there is the mat, which is our agency; and then there are the springs, which are your skills related to all of them. I'm just going to talk about the mat today. I'll tell you what the frame is, and I'll explain the springs and skills, but we're really going to focus on agency. The six areas in the frame start with the nature and the reality of AI: very simply put, it is very efficient, it's really good, and it's here to stay. It is going to create huge, huge changes in society and in our world. The benefits of AI and the risks of AI are two more areas, and then the final area is the beneficial and responsible use of AI. That's where we have the most control or power. Your agency is like the mat that goes around all of this, and you stretch it out using the springs. There are six competencies; you stretch the springs out and they form a whole trampoline, with your agency being pulled all the way to the outside. If you don't add enough springs, or don't put them in the right places, then your agency doesn't have as much fullness as it could. And what I find is that most people talk about AI and about skills so much that we've left agency out of the conversation.

So I want to pose the question: what happens to agency when we feel powerless? I'm framing the question that way because so many people feel powerless about the changes that are happening: the nature of AI and the impact it's having on our world, the huge societal risks, and also the benefits. People feel like this is moving really fast, and they're not wrong to feel that way. I think we're moving too fast; as much as I love AI, I wish we would slow it down and take our time with it. But that's not going to happen, and that's what it means to be powerless: to know I can't make any changes around that. So what does it look like for us to have agency amid powerlessness? Part of what it looks like is knowing that there are parts of AI that you can change and parts of AI that you can't, and that's what we really want to explore with our agency. Because even when you don't have power, you still have agency.

Nobody knows this better than 12-step programs, and their Serenity Prayer is one of the most profound sources of wisdom I've found. It's short, it's easy, and if you don't know it, it goes: grant me the serenity to accept the things I cannot change, courage to change the things I can, and the wisdom to know the difference. What that means for us is that there are things we cannot change, there are things that we can, and then there are the things that we don't know if we can change or not, and we have to figure that out as it applies to AI. We really can't change AI, but we can change ourselves.
We can decide how we react, how we respond, and how we use it. This gets kind of complicated in the middle, on the wisdom part of things, right? There are two ways that we really fall out of alignment with our agency, and I want to talk about both of them. Mallory Erickson talks about these a lot. If you don't follow Mallory, do; she's one of my favorite people in the world. She's really wise, and she got me thinking a lot about the nature of limiting beliefs.

Limiting beliefs affect one side of that wisdom, right? You're not sure if you can change something or not, and limiting beliefs are where there are things that you could change, but you don't believe you can. It would be possible for you to change these things, except you don't think you can, and where there are limiting beliefs in our model, that means we're unable to do things we would otherwise be able to do. Because of that belief, it falls outside of our abilities. A good example of this is people I've seen saying, "I'm too old for AI." There are people saying, "It doesn't actually work for me; it's not a really good writer." There are all of these beliefs about the way AI can work that limit our ability to use it. So one side of wisdom is making sure that we're not limiting our power or control because of limiting beliefs.

On the other side of wisdom, there is the illusion of control. That is where there are things that you cannot change, but you believe that you can. I don't know if it's just because I'm from Montana and we've got a very can-do attitude about life, but I struggle way more with the illusion of control than I do with limiting beliefs. I think there are all sorts of things I can change that I really can't. I want to change the way that tech is used in nonprofits. I want to change the way that we roll out tech. I want to change change management. I want to change all of these things. And it has taken me years to understand that there's only so much I can do. That is the illusion of control.

The illusion of control pops up in tech in a couple of ways. One of them is where I see people saying, "I'm not going to use AI because of environmental harm." Now, there are massive environmental implications with AI; I'm not trying to say there aren't. But to think that our individual use of AI, or abstention from it, will change AI's effect on the environment is not realistic. That's like saying, "I'm not going to use money because money can do harm." Your non-participation will not change that outcome. And I know that's hard, because a lot of us in the nonprofit sector are very values-aligned, and we want our actions to follow our values. But I think it's really important to think through: is that the best way to engage with AI and the environment? I would say one way of doing that is to use the tools to advocate more effectively for environmental change. And that's just one example. There are all sorts of other issues with AI where people are thinking, "Okay, I don't want to use it for this reason," or "I'm not going to participate in it for this reason," and that can be the illusion of control. I'm not telling you that you should use AI even if you've got environmental concerns. What I am telling you is to be clear about why you're choosing that, and be clear about where you have power and where you don't, because it is really important to understand what's going on there.
And ultimately, my main goal with this is to help people understand where there are limits on your agency and where there are limits on the things you can control, so that you make informed decisions about what parts of AI you're going to engage with. The last thing I'll say is that courage is a really powerful word. You can't be afraid and have courage at the same time; that's something I learned from Mallory. If you have courage, you are not existing in fear, and fear is not the same thing as risk. Courage is when you have the ability to respond to risk rather than just reacting out of fear. So if you stay courageous, you actually mitigate risk in the best way possible. In order to engage AI with the highest amount of agency, we need to have courage, wisdom, and acceptance. I don't know if we've got time for any questions. I rolled through that even faster than I was going to, but if there is a question that I could answer, I would love to.
All right, can you hear me? Yeah. All right. I'm just keeping an eye on the Q&A, and I'll take a moment to scroll through the chat here. Somebody accused AI of hijacking your presentation; that was funny. But I don't see any questions yet. Anybody who is typing, take a moment. But yeah, otherwise, we might be good to go.
Thank you, everybody, for coming, and I'm so sorry for the tech issues that I had no ability to control. So, a good example of it.
All good, all good. Yeah, folks are saying thank you, so I want to extend that thank-you back to our audience. Thank you for attending Tim's session. Our next power session, up on Stage One, is Vanessa Chase with "Here's the Thing to Know About Email: How to Connect with Readers Right from the Start," and we will see everybody in a few minutes. All right, take care. Bye.