AI is suddenly everywhere, it seems, and the self-help world is no different. A low-cost machine that can pump out a nearly endless string of word-salad "content" to push on a willing audience? Perfect!
Jean and Glenn chat about the implications of AI for self-help and how we are already seeing it wend its way into the industry and the wider mental health industry. Tony Robbins has created an AI Tony app! As a fun little experiment, Jean asks ChatGPT how it would go about creating an AI guru. So before you go ahead and pay for any AI self-help tools, have a listen!
Show Notes:
To Listen To:
A Little Bit Culty Podcast
To Read:
"AI has started ignoring human instruction and refuses to turn off, researchers claim" in The Daily Mail
To Watch:
Silicon Valley on HBO
Learn more about SEEK Safely on our website
Follow SEEK on Instagram, Twitter, and Facebook
Follow Dr. Glenn on Instagram, Twitter, and Facebook
Read the memoir “This Sweet Life: how we lived after Kirby died” by Jean and her mom, Ginny Brown
Donate to support SEEK’s mission
To Contact SEEK email info@seeksafely.org
[00:00:00] At Seek Safely, it's our mission to empower seekers to have a safe and meaningful self-improvement journey. Why do we care? Seeking to be your best self is an amazing, beautiful human impulse that has led us to create art, invent technology, tell amazing stories, and reach the moon.
[00:00:19] But we saw the dark side of self-help in 2009, when a recklessly run self-improvement retreat led to the death of three people, including my sister, Kirby Brown. We want people to seek, to dream their big dreams and chase their beautiful goals. But we want to make sure they're safe along the way. This podcast is about education and empowerment and getting real about the promises and problems of self-help.
[00:00:46] We talk with people who understand and care about the self-help industry and everyone it touches. I'm Jean Brown. I'm Dr. Glenn Patrick Doyle. And this is the Seek Safely Podcast. Hello, and welcome to the Seek Safely Podcast. My name is Jean Brown. I'm here with my co-host, Dr. Glenn Patrick Doyle. What's up, Dr. Doyle?
[00:01:13] Oh my gosh, Jean Brown. I feel terrible. I've been sick as a dog for days now. And if you're one of my patients, and you're listening to this podcast, and by the way, if you're one of my patients, and you're not listening to this podcast, then what are you doing with your life? But my patients have been so nice, because I have been hacking my way and sniffling my way through our sessions.
[00:01:38] Yeah. We were trying to record a few nights ago, and yeah, you were like, no, not going to happen. Rough. And then today, you're like, okay, we've just got to do it. It's still not great, but we're going to just go for it. Until we have an AI Dr. Doyle and an AI Jean Brown to record the Seek Podcast for us. We really need AI versions of us. Jean, did you ever watch the HBO show Silicon Valley?
[00:02:06] You know what? I didn't. I didn't get into that one. I didn't actually even try. Highly recommend. I love the show. It's all about tech bros and entrepreneurs and stuff. And it's great because, gosh, they were putting on new episodes, I think probably like 10 years ago. Right. Before anybody knew the internet was going to be what it is now.
[00:02:26] But there's a running joke where two of the programmers, Dinesh and Gilfoyle, have kind of this running feud. And there's a joke wherein Gilfoyle has created an AI Gilfoyle because he doesn't want to deal with Dinesh's DMs. And Dinesh finds out that he's been talking to AI Gilfoyle instead of Gilfoyle. And at first he's furious, but then he's like, can you make me an AI Dinesh to talk to my aunt so I don't have to talk to her?
[00:02:54] And so they end up making AI versions of each other. And long story short, it shuts down the whole company because of the shortcomings. I wonder what we're going to talk about tonight, Jean.
[00:03:04] I don't know. Yeah, I know. Okay. So first of all, I'll just apologize. We've been a little delinquent with new episodes lately because life is busy and overwhelming for all of us, I think these days. And it feels like even in like the month, maybe since we've had a new episode, the whole AI landscape has advanced even further in a month. It's going very fast. It is everywhere. It is like literally everywhere.
[00:03:32] And what's weird is AI is not, as I understand it, definitely not an expert, but AI, as I understand it, is not new. Like AI has been around for a couple of decades, actually, at this point. Like all the algorithms that we've been complaining about for well over a decade now, Google is AI, Facebook is AI.
[00:03:51] Like, I mean, AI is not new, but for some reason in the last few years, and I started noticing it, I think, during the pandemic, every company was shoving AI, specifically the term artificial intelligence, into all of its products and services. And so I don't know if market research came out and said, hey, customers love this, because I don't love this. Like, I hear that Google is raising the price of my Google Workspace because of all the added value from AI, and I don't want it.
[00:04:21] But now it seems to be everywhere, with a crushing ubiquity, in a way that it was not even a couple of years ago. Yeah, I agree. I was having the same thought the other day. And I'll say, first off, I don't like my resistance to all of it, because it makes me feel old to resist new technology. Like, that reflex feels like, you know, sorry, mom.
[00:04:50] It feels like my parents' approach. Going to offend our one most regular listener, Virginia Brown, who listens to every episode. Closely. No, but it's just that sense of, you know, that knee-jerk reaction of going, like, this is new, I don't quite understand it, which I will fully admit, and therefore I don't like it. I don't want to approach things that way.
[00:05:14] So I feel, you know, it makes me feel bad, but I do have a lot of reservations about AI. I also have some skepticism, like you're expressing, like, is this really new? Because all of my understanding of how the chatbots work, it just, they just seem like very advanced algorithms. And we've already been living with this kind of technology. So I also, I've said this before, but I'll say it again. You know, the AI is based on the internet.
[00:05:42] Like it's mining stuff from the internet that already exists. And I think we are all very openly willing to admit how flawed the internet is. So I'm like, are we going to trust a tool built on this thing that we already know is so very flawed? And why are we not going to have concerns about that? Yeah. How dare you distrust the internet? I would like our new machine masters to know. That was Jean Brown who said that, not Dr. Doyle who said that.
[00:06:10] I love and trust our new internet masters. No, I mean, look, I might feel differently about AI. And look, I am, as usual on this podcast, spouting off about something I understand very incompletely. Unless we're talking trauma and addiction, I really don't know what I'm talking about. Or Tony Robbins' early books. It's a very specific knowledge base, Glenn, but niche. No, but check it out. Like, I might feel differently if I personally had better experiences with AI.
[00:06:39] So before we started recording, Jean and I were kind of talking about our assorted experiences with AI. And I think the first time AI actually came on my radar was, I didn't even tell you this one. I might have told you this off the air. My girlfriend, Megan, has three daughters, one of whom just became a teenager. And for Christmas a couple years ago, I thought it would be brilliant to get them tablets, like Kindle Fire tablets.
[00:07:06] And we gave the girls these tablets with the rule that they weren't supposed to be surfing the internet on them, right? Like they could download approved apps and everything had to go through Megan to get approved, etc. But long story short, the teenager, the now teenager, found a way to get on a web browser. I think she did it through Pinterest. Like she found her way to a web browser through Pinterest. And as it turned out, by the time we found out about it, she had, like, set up a Twitter profile. She was, like, gaining followers. I was impressed.
[00:07:35] As somebody who's on social media a lot, like, I'm like, wow, she's got a hustle. And so we respected that. But one of the sites she was visiting was a chatbot. It was an AI chatbot site where you could, whatever, you could construct a person to talk to. And she had been doing that. She did. And it was nothing really nefarious. Like, she was just constructing chatbot buddies to talk to. We still didn't love it that she had done that. But just out of curiosity, because I was really unfamiliar with the chatbot thing, I went to that site that we caught her on.
[00:08:03] And you could construct your own AI chatbot. But they also had chatbots of, like, known personalities, like celebrities and stuff. And me being the narcissistic megalomaniac that I am, that's the first thing I did. I said, golly, is there an AI Dr. Glenn Patrick Doyle to chat with? And as it turns out, there was. A couple of them, actually. And this was kind of my first introduction to AI.
[00:08:30] So I started having this conversation with me, with myself, the AI version of me. And what quickly became apparent was that all this AI, or at least this site, did was scrape the internet for everything I've ever said in public, which as it turns out is a lot, and regurgitate it to you in a tone and cadence that kind of mimics me. So I started asking my AI self questions. Right.
[00:08:57] And as long as I kept it to stuff that I'm known to talk about in public, it gave me answers that were more or less me, that were more or less things that I would say. Right. Then I started asking, just out of curiosity, AI Dr. Glenn Patrick Doyle, you good looking robot, you. Because it had my profile picture, the whole thing. It was the ultimate narcissistic exercise. Oh my gosh. But I asked, so who's your favorite Beatle?
[00:09:23] Now, mind you, this is something that if you follow me, you probably know, but you'd really have to follow me to know. And of course, everyone knows I'm a huge John Lennon fan. AI Dr. Doyle wastes no time in saying, George! Like, oh, AI. I said, AI Dr. Doyle, are you a dog guy or a cat guy? And he goes, I'm a ferret guy. I'm like, I'm out. I'm out. A ferret guy. Oh my God. My point is that AI is only as smart as what it's trained on. And that leads me to this other story.
[00:09:52] A couple months ago now, I became licensed as a psychologist in Texas. I had been licensed in Illinois and DC and I'm credentialed to do telehealth nationwide. However, I wanted to get my actual Texas psychology license. And in order to get licensed as a psychologist in a new state, usually what they have you do is take what's called the local jurisprudence exam. It's an exam about the local psychology laws and regulations and ethics. And it's open book, so you can use whatever, right? Like you can use Google, you can use whatever. They don't mind.
[00:10:22] It's 100 questions. And most of the questions, especially the ethics questions, are no-brainers. Like if you're familiar with the American Psychological Association ethics code, you're fine. And if you've been practicing in an ethical way, you're more or less fine. But just to check my answers, since it was open book and you can use whatever, as I was doing the test, and it's multiple choice, I would take the question and the responses and feed them into Gemini, which is Google's AI. Just to check myself.
[00:10:52] And about 98% of the time, Gemini and I agreed. And it was fine. I mean, those two times though, Jean, those two times that Gemini and I profoundly disagreed, they were both on ethics questions, which I found interesting. The one question was, if you're a psychologist and you're doing a forensic examination, you're doing an exam for the court, an evaluation for the court. Right. What are you ethically required to disclose to your examinee?
[00:11:21] And I know this because I've done forensic evaluations. Like, you are ethically required to disclose who is paying for it. Right. And I picked that answer and I'm ready to move on. But Gemini is like, actually, if you disclose to the examinee who's paying for it, that'll change their answers, so you don't want to do that. And if you're thinking, that's actually why you want to do that: exactly. That's what makes it an ethically important thing to do. It's called informed consent. So I know that this is an incorrect answer.
[00:11:49] And I picked the answer, my answer, which is correct. I'm like, huh, that's interesting. That's kind of concerning. But the other question, Jean, was, it was a scenario and it said, you are doing therapy with somebody and you notice that you are sexually attracted to the person and they're sexually attracted to you. What should you do? Yikes. Now, the actual answer is you stop therapy and it's done. Right. You don't do anything.
[00:12:11] But actually, the APA ethics code says if you're going to have a relationship of any kind with a former patient, professional, sexual, anything, you have to wait at least two years. And even then, you probably shouldn't. Like, even then, please don't. Like, that's what the ethics code actually says. Right. There's no universe, Jean, in which it says, stop therapy and then you can go ahead and have the relationship with a patient. There's no world in which it says that.
[00:12:40] However, you can see where I'm going with this. Yeah. And Gemini was like, you know it, bro. End the therapy. Shoot your shot. And shoot your shot. Good luck to you, kids. Oh, dear. Which leads me to the conclusion, one, that, look, it clearly wasn't trained on the APA ethics code. That's, you know, a known shortcoming of AI: it's only as smart as what it's trained on. And as you pointed out, what's on the internet to scrape is often not all that reliable. Mm-hmm.
[00:13:10] But also, Gemini, you're a bit of a scumball. Dear Lord. Like, like, who needs to know who's paying for it? Actually, you know what? Shoot your shot. It's fine. It's fine. Hey. All of this comes to bear because we were thinking about this because our old friend, Tony Robbins, my boy. He's doing great things. Oh, he's doing great things, Jean. As always, as usual. A few weeks ago. He, like everyone else. Yeah, go ahead. Tell us, tell us what he's doing. A few weeks ago, he releases an app. It's Tony Robbins AI.
[00:13:40] And it is exactly as advertised. Now, in fairness, neither Jean nor I have actually used the app. We were Googling this before the show. Yeah. However, it appears to be exactly what you'd think it is. It's like AI Tony Robbins, just like that one site had AI Dr. Doyle. Apparently, this site has AI Tony Robbins. Now, this dovetails with a lot of conversation we've been having in therapy spaces about, you know, there have been attempts to make AI therapists.
[00:14:06] And there's actually a little bit of research to say that it's not always all that bad. Like, there's some research to support the idea that, you know, man, these AI therapists often leave people feeling pretty good. It can often, quote unquote, solve certain simple problems. Increasingly, some therapists will have patients come in and say, oh, look, I was talking about this with ChatGPT, and it was really insightful. Like, ChatGPT was really insightful about this. That's not uncommon for us to hear.
[00:14:35] But Tony Robbins is kind of taking this to the next level and saying, okay, all my wisdom, here it is. Here it is in chatbot form. It illustrates the popularity of AI specifically in this self-improvement space, because we've been seeing a lot of, you know, harness the power of ChatGPT, and ChatGPT is your new life coach, and those kinds of things. And so, how do we feel about this, Jean Brown? Yeah, I have a few thoughts.
[00:15:03] I've had these conversations with other people as well. Even if somebody wants to try to create a therapist AI chatbot or even a Tony Robbins AI chatbot, it's kind of a moot point because people are just using these tools in this way anyway.
[00:15:21] So, as long as these tools are accessible in these free versions, and I think that will probably start to shift soon, quickly, that the free versions will become more and more limited. But, I mean, right now, like, you don't have to pay for Tony Robbins' thing. You don't have to pay for an AI therapist. You can just use the chatbots in this way already.
[00:15:47] And I, yeah, I've heard it from people too who are like, yeah, I just kind of, like, chat with ChatGPT about, like, what's bothering me and it does make me feel better after. And I kind of just, I don't know, I got an image of, like, just a shiny, environmentally destructive magic eight ball was kind of the image that I got. That is going to be our episode title.
[00:16:14] The shiny, environmentally destructive magic eight ball. Say more, Jean. Yeah. Yeah. Just because it's this, like, well, let me just throw this question at it and see what emerges out of the darkness. And the other thought that occurred to me, the more fantastical side of my brain said, oh, great. We're giving the robots all of our weaknesses and vulnerabilities. When they want to go, you know, Terminator on us, we're cooked.
[00:16:41] But, you know, it's not just the chatbots. It's the people who own the technology, right, are gaining access to all of this. And the more things we put into the chatbots, they're using this information that we have fed into it to learn even further. Which raises a lot of concerns about who is feeding information into these systems and what is their motivation.
[00:17:06] I've seen stories from people where they're basically like, I had a conversation with the chatbot in which I then gave it information. And later on, I asked it about that thing. And it gave me back what I had told it previously, right? So then it's like, well, I'm sure the disinformation campaigns are already out there putting all kinds of crap into these systems.
[00:17:32] But going back to more the side of, you know, applications for psychology and therapy and that kind of stuff and self-help and coaching. I feel like, you know, we all kind of love to talk about ourselves. And this is just like a perfect tool where you can feel really unselfconscious.
[00:17:54] You know, it's like there's just, there's like no barrier at this point for people just going all in on this and using these tools in this way. And as you mentioned, like with ethics concerns and yeah, there are just, there are a lot of concerns over what that might mean.
[00:18:13] Well, something that occurs to me is that, well, and this gets back to the fact that this stuff isn't exactly new, like in ways we've been telling the internet what we want to hear and the internet's been spitting it back at us for, I mean, at least a couple decades at this point.
[00:18:33] What occurs to me about algorithms is, you know, what is, functionally, the core function of an algorithm? It's to get us to keep engaging with it. Yeah. Right. And by the way, TikTok and Instagram reels have figured this one out, dear Lord. Like, I was thinking about this the other night. So Megan and I love to send each other funny Instagram reels and we will lose hours doing this, like sending each other reels.
[00:19:03] And laughing and watching them over and over again and laughing ourselves sick. And Instagram very much figures this out. If you like that, you'll probably like this. TikTok is even worse. TikTok is really good at figuring out what I will watch next. Yeah. And I was thinking about the fact that no one could have predicted this as a form of entertainment. Like for decades, TV executives were agonizing over what shows do we need to put on to keep America watching.
[00:19:28] And now we're watching stupid little reels of kids bumping their heads. And I think I feel terrible. You know what all of my reels are? Labrador retrievers and Cillian Murphy. That's like... That's our brand. That makes sense. Okay. But anyway, like, what is an algorithm except a machine to keep us engaging with it? That's the core function of any algorithm.
[00:19:57] And so chatbots, I'm going to assume... And again, full recognition that I barely know what I'm talking about here, as usual, as our listeners have become accustomed to. A chatbot has an even more nuanced way to figure that out: what will keep me engaging with it? Yeah. Maybe because I'm feeding it all of my secrets and vulnerabilities and stuff. On the one hand, look, I feel that when we engage with social media, for example, we kind of know what we're in for.
[00:20:27] Like we kind of know that, yeah, Facebook, Instagram, TikTok, they're all designed to keep us using the sites and the apps for as long as possible. So we kind of make that deal. Like, okay, like I get it. They're trying to addict me and I'll take that risk. I feel kind of with chatbots, especially when it comes to therapy chatbots or life coaching chatbots, it can get a little more insidious. As a therapist, I can tell you, if you're just telling the client what will keep them coming back, that's probably not great therapy.
[00:20:56] A certain amount of therapy is telling someone, you know, things that they need to hear but probably don't want to hear. I think part of the skill of therapy, which I say, I think this, I don't know, not being a skilled therapist, I don't know. But were I a skilled therapist, I might say that part of the skill of therapy is telling people things that they maybe don't want to hear but need to hear, in a way that doesn't shame them or drive them away or, you know, cause them to get demoralized and quit, et cetera.
[00:21:24] And maybe that's the skill we're talking about. But getting to that point, me as a human, like, that takes years of training and mentorship to develop that mindset. That's what we're talking about: the art and the skill involved in therapy. I'm not convinced that an algorithm is going to have, you know, that sense of ethics about it. Like, well, you know what, this will keep them coming back, but that's not what they need to hear. They need to hear this.
[00:21:52] So I'm not thrilled about the implications of a life coaching bot or a therapy bot that is, you know, fundamentally programmed to kind of keep you coming back to it. Here's the other thing about therapy. Therapy is, you know, if it's good therapy, you're supposed to be helping the patient feel more independent. And I've talked a lot about the difference between recovery and therapy. Like, so I'm a recovery guy.
[00:22:17] I'm a big believer that, you know, it's not great to get dependent upon therapy or any other resource. Like, I think we need to be able to work a recovery on our own. Therapy can be a tool of that if it's making you more independent. I don't know of an algorithm that is invested in you no longer engaging with it. Yeah. Like, I don't know of an algorithm that is going to be proud that, you know what, you've now spread your wings and flown. Yeah. I've equipped you to go out into the world. Right? Well, and I think, yeah, that it kind of raises this question.
[00:22:46] Like, we don't really know at this time. Like you said, with social media, I think we know at this point we have a very clear sense of what the goal is. It's to keep us engaged and all of the, you know, the big social media platforms, it's for advertising because they're constantly shoving all of their advertising content at us, trying to get us to buy all these things. And I mean, I admit it. I have. I have fallen for some of these things. But sometimes I'm like, you know what? Actually, that does look really good.
[00:23:13] And I know it's showing me this because clearly I'm interested in this thing. But yeah, that is like, I at least, I at least know it and I can choose to ignore it or I can choose to, you know, limit my interaction to some extent as difficult as that is given the design. Right now we don't really know what the goal is with the AI that we're using.
[00:23:37] If it actually, like, I have questions about what intelligence actually is, how independent these tools actually are. But there have been some stories, like, I'll link an article. There was one recently about OpenAI. So OpenAI is the company behind ChatGPT. Their latest model, researchers were saying, refused to shut itself down. They told it to shut down and it wouldn't shut down. And it's like, what are we doing here?
[00:24:04] Yeah, so like you're saying, like if you're using it for therapy and it's, you know, its goal is, I mean, we don't know what its goal is. You know, something else about therapy specifically. There's something called the therapy frame. What that means is therapy takes place at specific times in specific settings. Your therapist is not available to you 24-7. And that's the good news, actually. You actually don't want your therapist available to you 24-7.
[00:24:32] Trust me, anybody listening, if you had me available 24-7, you would get worse. I'm just, I'm telling you, you get the best version of me in that 55 minutes, I promise you. But that's what's called the therapy frame. It's a necessary form of boundaries. It's particularly important with survivors of trauma, who have very often been harmed by people with few or no boundaries. But we need to develop confidence that, you know, look, therapy is not 24-7. It happens at this time, in this setting.
[00:25:01] Like, I know, like, you know, it's not going to be some random time in this brand new setting. Like, we develop an internal sense of safety and consistency because of the therapy frame. We also know because of the therapy frame, certain things will be talked about, certain things probably won't be talked about. Like, so, for example, it would be really weird to have a therapy session and the therapist begins the session spilling their guts about their love life or their financial life or something. Like, okay, you patient, you got to listen to me about this, right?
[00:25:30] That's probably not going to happen. You know, there are expectations of the therapy frame, where you talk about certain things and I will say certain things. And again, those boundaries are the good news. When you're talking about, like, I've seen ads for therapy chatbots and life coaching chatbots where it is advertised as a feature, not a glitch: like, yeah, man, have your therapist handy 24-7. Right. Now, as somebody who's in recovery, so I'm a trauma survivor in recovery, I'm an addict in recovery.
[00:25:59] I get the challenge of long, dark nights. I get them. And I completely understand the need to have resources available. Hey, every time we say chatbot on this episode, take a shot. Should have been taking shots, yeah. For me, it's really concerning, because what that says to me is, at the very least, those who are marketing these things don't understand the nuance, or don't care about the nuance, of the therapy frame. Like, they just know that it's seductive marketing.
[00:26:25] Or they're positioning themselves as disruptors. That's right. Right? They're disrupting the therapy industry. But I feel like, okay, it's one thing to disrupt the mattress industry, but it's something else to disrupt an industry that has so many years of experience and research behind it to try to protect the people that it's helping because these people are particularly vulnerable. It's different.
[00:26:53] But I feel like there's that sense of we want to disrupt everything right now. Move fast and break things, right? Yeah. It's the whole thing. Now, mind you, the therapy industry desperately needs to be disrupted at times and in ways. Yeah. The mental health field broadly. But again, not all change is positive change and kind of that move fast, break things. We were just discovering this on a national level.
[00:27:16] That move fast, break things is not always the most helpful mantra when people are really dependent upon certain resources and certain structures, right? Yeah. It's like a fuck around and find out model. So I'm curious about the Tony Robbins AI. And I'll tell you why.
[00:27:40] So, Jean, you and I have talked a lot about the fine line there is between somebody who's interested in self-help, maybe reads the books, listens to the tapes, etc., versus somebody who takes that next step and becomes involved with a teacher. Like, so, for example, Kirby was a fan of James Arthur Ray and then took the next step and stepped into his ecosystem.
[00:28:06] It's the difference between having a parasocial relationship and a true relationship with somebody, right? Like, you know, you become – you go from a fan of theirs to being a client of theirs. Yeah. And they're fundamentally different relationships. I've often thought, you know, gosh, I was kind of lucky insofar as all the benefit I ever derived – like, I have pretty positive experiences with self-help, the self-help ecosphere.
[00:28:29] I think mostly because I never took that step and became, like, you know, a true pupil, mostly because I never had the money or the opportunity to do so, right? But I was thinking that, you know, gosh, if you ask me, so everybody knows, like, I was way into Tony Robbins for a long time. Like, if you were to ask me, like, you know, what's the Tony Robbins stuff? I could probably tell you. I could probably be the Tony Robbins AI.
[00:28:54] Simply because his body of work, such as it is, like, such as it exists out there, the stuff that I'm going to assume the Tony Robbins AI is trained on. Mm-hmm. It's a static body of work. Like, it exists. It's out there. I'm curious as to whether the Tony Robbins AI is just regurgitating that. Mm-hmm. Or if it has the capability of adapting, right?
[00:29:17] Like, because you would think that becoming a student of Tony Robbins, if one has the spare several hundred thousand dollars to do so... you know what? If you have a few spare hundred thousand dollars, donate it to Seek Safely and I will give you the Tony Robbins AI. Like, I will do it. Like, I will be your guy. Anyway. No, I wonder if it can adapt. I wonder if it even gets to the core of, you know, the Tony Robbins expertise, such as it is.
[00:29:46] I assume Tony, I assume he and his people designed it. So, I mean, I assume that's built in. Like, I'm reminded of a few months ago now, our great friend Christine Whelan on the Seek Safely board, who loves these things. We really should have had her on this episode. We're always saying that. Like, we get deep into these conversations and it's like, where's Christine? Where's Christine? Can we just call her right now? She loves these things. She loves these large language models. Yeah. And we're always talking about them. And a while back, we were talking with her.
[00:30:13] And just for funsies, she asked ChatGPT to come up with six Dr. Glenn Patrick Doyle tweets. And you remember, and this shouldn't have been hard, because when I tell you, Jean, that publicly I say about six things over and over again. Right. Like, anybody who follows my work could probably distill my shtick to between, like, four and six points. I just find new ways to say them. And when I tell you, ChatGPT whiffed on every single one of them. However, it did produce sentences that vaguely sounded like something I'd say.
[00:30:43] Well, yeah. And this is something I'm kind of expecting. So, especially when we're talking about the self-help industry. Like, if you are a fan of A Little Bit Culty, they're big on word salad, right? They talk a lot about the word salad that you see in the self-help industry. So, I'm like, let's see what the chatbot produces trained on stuff that's already very word salad-y. I feel like it's going to be, like, a word salad buffet.
[00:31:13] Like, it's going to be just stuff that you're like, oh, yeah, that's deep, that's deep. And then when you try to go, like, wait, what is it actually saying? What does it mean? It doesn't make any sense. But it sounds good on the first hit, you know? And I think we're going to get a lot of that. It should be interesting. Especially as we rapidly adopt this technology in all sorts of applications, including in the self-help industry.
[00:31:39] There will be growing pains of various degrees, many of which will probably be hilarious. So, at least there's that. But, yeah, it'll be interesting to see. I guess I just feel overall. I mean, I remember when ChatGPT really first came on. I think we were just bored. I think we were bored at the kind of tail end of the pandemic. And then you had a lot of, like, okay, OpenAI releases its chatbot for people to use.
[00:32:07] So, journalists and people, like, really started experimenting to see where they could push the chatbot. And there were a lot of concerns. Like, you know, one journalist was like, I talked to ChatGPT for five hours and I made it fall in love with me. And you're like, what is going on, right? Like, there was some crazy stuff that was coming out in the very beginning there. And, you know, the developers are constantly improving these tools. The tools themselves are constantly learning.
[00:32:37] So, things are evolving so quickly. And I just, I feel like we need a minute to just kind of catch our breath. And we're just not, we're not being given that. Instead, it's really just, it's coming at us. And, yeah, I don't know. Again, I would feel differently if the things that were being pushed at me about AI were more manifestly useful.
[00:33:02] Because we hear all the time, like, you know, AI is going to make our experience more friendly, useful, like whatever it is. Like, so, for example, now Gmail is constantly asking, Facebook is constantly asking, would you like to write this with AI? And I'm like, what do you think I do? Yeah. Write things. Like, what has given you the impression that I need that? No, I get it. Like, and there have been, like, Grammarly has been around forever.
[00:33:28] Like, there have been apps around forever that will help you make your written communication clearer. And then, I get it. But I find it so interesting that AI, in terms of what's being pushed at us, is so often stuff where I'm like, yeah, I don't really need it or want it for that. The other thing that is constantly being pushed – like, good luck. When Apple announced that its new iPhones were going to have Apple Intelligence, AI. Apple, of course, having a sterling track record with all things AI. Going back to good old.
[00:33:58] What about Siri? Siri. Dear Lord. I didn't say Siri. Go away. Oh my God. She just, she woke up for both of us. She heard me. She heard me over here. Now, man, when Apple introduced Apple Intelligence. And look, I will always, I respect Apple nine times out of ten. They give me stuff I like. So, like, when I read up on it. And they're like, generate cool images. I'm like, why would I want to do that?
[00:34:25] Like, I was over here going, man, my life is okay, but I can't generate the cool images I want to generate. If only I could. Like, man, my life is okay, but writing emails is a bitch. Yeah. If only Gmail would write them for me. So. Well, I saw a hilarious meme today, actually. And it was like Scooby-Doo. You know, like the end of the show where Freddy's there and he's like, let's reveal what's under the mask. And so the mask is ChatGPT.
[00:34:54] And then he pulls it off and it's the little paper clip from Microsoft Word. It's Clippy. Yeah. Oh, it looks like you're trying to write a resume. Yeah. Can I help you with that? There's a wonderful scene in my favorite TV show and the best TV show in the entire history of the universe, The Office, where Darryl is trying to write a resume. And the joke is Darryl has worked at the same company, Dunder Mifflin, for so long – like, it's the only job he's ever had.
[00:35:24] And so he doesn't really know how to write a resume. And so he calls Microsoft tech support. And he's like, there used to be a little paper clip. I believe his name was Clippy. Can we get him back? Can I help? Can you help? Can I have Clippy, please? Oh my gosh. So before we started tonight, we got curious: if the trend is to take life coaches and/or self-help gurus and refine them into AI forms so they'll be available 24-7, what would that look like? And we were kind of curious.
[00:35:54] Now you fed in some specific prompts to ChatGPT. I just asked, yeah, so I asked ChatGPT, if I were to ask you to create an AI chatbot that is based on a particular life coach's style, how would you do that? So here's a breakdown of how I'd approach it. And it gave me quite an extensive little list. So a six-step process. So first it would. Only six. Yeah. Define the life coach's style.
[00:36:21] So gathering data on the coach's tone and language – was it compassionate or direct or tough love, etc. And their philosophy. So it says things like, e.g., mind over motivation, accountability, purpose-driven life. And then you'd also need the coach's tools and methods. So things like journaling, visualization, goal-setting frameworks. You'd want to look at common topics.
[00:36:49] Catch phrases or mantras that they often use. And client personas that they typically serve, such as entrepreneurs, creatives, executives, etc. So they'd look at sources like blogs, books, interviews, YouTube videos, podcasts, social media posts, coaching programs, and testimonials. Step two. Create a style guide. A written reference document that outlines basically all these things.
[00:37:14] Voice and tone, vocabulary, response frameworks, do's and don'ts of their style, and some sample dialogues or coaching exchanges. Step three. Train or fine-tune the bot. This is an optional advanced step. You could fine-tune the language model on this data set. Then you could use prompt engineering to create a system prompt that guides the AI to respond as if it were that coach.
[00:37:41] Which is a lightweight but surprisingly effective method, it says. And then you would inject stylistic memory. So use a custom personality layer that makes the chatbot respond consistently in tone and methodology. Then you'd design the conversational flow. That's step four. Then you would want to test and refine it. And then you'd explore options for deployment.
[00:38:06] Whether it would be on a website, WhatsApp, a mobile app, or some kind of voice assistant. So as you read that, here's what it is. The self-help world is notorious for producing a lot of quote-unquote free content. If you go to YouTube right now and you just type in Tony Robbins motivation, dozens and dozens and dozens of videos will come up. And that's true of any self-help guru.
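As an aside, the "prompt engineering" option ChatGPT describes in step three can be sketched in a few lines of Python: you take the style guide from step two and assemble it into a system prompt that a chat model would see before every user message. This is only an illustrative sketch – the style-guide fields and the "Hypothetical Coach" details below are all made up for the example, not drawn from any real coach or product.

```python
# Sketch of the "prompt engineering" approach from step three: instead of
# fine-tuning a model, assemble a system prompt from a written style guide
# and prepend it to every conversation. All coach details here are invented.

def build_persona_prompt(style_guide: dict) -> str:
    """Turn a coach style guide into a system prompt for a chat model."""
    lines = [
        f"You are an AI coach emulating the style of {style_guide['name']}.",
        f"Tone: {style_guide['tone']}.",
        f"Core philosophy: {style_guide['philosophy']}.",
        "Catchphrases to weave in naturally:",
    ]
    # One bullet per catchphrase, mirroring the "style guide" document idea.
    lines += [f"- {phrase}" for phrase in style_guide["catchphrases"]]
    lines.append("Never give medical or financial advice; refer to professionals.")
    return "\n".join(lines)

# Hypothetical style guide, invented purely for illustration.
guide = {
    "name": "Hypothetical Coach",
    "tone": "direct, tough-love, high energy",
    "philosophy": "purpose-driven living and radical responsibility",
    "catchphrases": ["Stuck is a signal", "Action cures fear"],
}

system_prompt = build_persona_prompt(guide)
print(system_prompt)
```

In a real deployment, this string would be sent as the system message to whatever chat API is being used; the point is that no fine-tuning of the underlying model is required, which is presumably why ChatGPT calls the method lightweight.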
[00:38:36] Self-help gurus are notorious for offering a lot of quote-unquote free content. But the point of it is not actually to be real substantive content. The point of it is to sound kind of good, to kind of hook you into the next layer of their products and services. Whether that's buying a book or something more extensive, right? Like going to a retreat, etc. And consequently, you could binge lots and lots of content.
[00:39:04] Like you'd binge lots and lots of content from any given self-help guru. But not really get to the meat of what they're all about. Like you could kind of get a tone. You could kind of get some overall, some overviews. Which is always interesting to me because, you know, my YouTube algorithm, the AI YouTube monster, is constantly giving me videos like, Tony Robbins reveals the top three secrets to ultimate success, right? And if you didn't know that this was the game, like you'd say, oh man, these are the things. Like it's the three. Right.
[00:39:33] Right here. But it's not. And it never is. Like, they will never give away their core ideas for free. Like, that would confound their entire business model. So my point is, if somebody was going to do exactly what you just described, creating like a chatbot version of one of these gurus, but fed it hour upon hour of the free content that you could binge – none of which, purposefully, is real substantive, but designed to just scoop you into the ecosystem. Yeah.
[00:40:03] I wonder what impact that would have. Like, I wonder if like people who had actually paid whatever the 10 grand to go on the retreat could tell the difference. Like, wait a minute, this is just the free stuff. Like this is just, you know. So ostensibly, like I would think that Tony or the Tony Robbins Corporation or whatever it is, would have given it the deeper content, which raises two questions.
[00:40:28] How much deeper is the deeper content than the free content? Because sometimes I don't know if there's much there with a lot of the self-help gurus. So that's one thing. Like you might just discover that there's not really a whole lot to it. And I'm wondering like how successful these things could be without, because all of the free content, like you said, is a hook to pull people in to pay for the more expensive stuff.
[00:40:55] And I think in many ways, like I think part of the floundering of the self-help industry right now is that it's hard to give people the same kind of high in like virtual content that people get from in-person content, like going to an actual event. So I don't know if people are really going to find a lot of value in these types of things. That's one thought I had.
[00:41:19] The other thought I have about this is that I think you could do this over and over again, and we could probably come up with like four self-help guru archetypes. And then all of your self-help chatbots are going to fall into one of these categories of self-help. They're all just going to kind of sound the same. And they're going to be like the tough love guy that's like all about your business growth.
[00:41:43] And then it's going to be like the, you know, like, personal, you know, chatbot that's trying to tell you how to, like, define your goals and dreams and, like, go with a heart-centered model and all these things. Like, you're going to just have the same things over and over again, I think. Yeah. Yeah. I don't know. So I wonder how good the ChatGPT AI is right now, like, with all the stuff that's out there from various self-help gurus.
[00:42:13] Yeah. Well. I wonder how good they are in terms of emulating the well-known ones. Now, I haven't mentioned a particular name yet on this podcast that I mention every episode. You did actually. Did I? But it was brief. It was brief. You're right. No, it was okay. It was brief. But so here, this – after I asked ChatGPT to create this AI chatbot for me, it asked me if I wanted to try a mini demo. Oh, wow.
[00:42:39] It said, if you give me the name of a coach or describe their style, I can create a sample interaction or even simulate how the chatbot would sound in action. You bet I'd like to do that. Let's do it. I am typing right now. So it's searching the web. Oh, boy. So, drawing from his teachings and communication style – I put James Arthur Ray into this. Rest his soul. Yeah. RIP. It will embody his principles, tone, and approach.
[00:43:05] So its core principles would be purpose-driven living, radical responsibility, mastery through practice, spiritual and practical integration. That was his core. Yeah. So here's a sample interaction. The user says, I'm feeling stuck and unsure of my purpose. So the AI, the Ray-inspired AI would say, stuck is merely a signal, a call to action. You're not lost. You're being invited to go deeper.
[00:43:30] Ask yourself, what would you do if you were living fully aligned with your purpose? Embrace discomfort. It's the crucible where transformation occurs. Your purpose isn't found in comfort. It's forged in the fire of commitment and action. I mean, the fact that fire makes it into the first thing the chatbot gives me is morbidly interesting. That's rough. And I'm allowed to make that joke, but. That's a tough beat. Oh, no. ChatGPT. Oh, boy.
[00:44:00] Rough. Rough. Yeah. I wonder. So have you played with any of the AIs other than ChatGPT? I haven't. I haven't played with much of them because I do have concerns about the environmental implications. As I also, you know, drive around in my gas-powered vehicle and throw away a sickening amount of plastic on a weekly basis. So I'm not trying to be holier than thou. But yeah, no, I haven't played around with all of them a whole lot.
[00:44:30] I'm going to ask Grok. So I'm on Twitter a lot. Oh, no. Grok. So I'm going to ask Grok, what would Dr. Glenn Patrick Doyle say my problem is? We'll just say that. What would he say my problem is? Okay. Without knowing your specific situation, it's impossible to say exactly what Dr. Glenn Patrick Doyle would diagnose or suggest. That's decent. Dr. Doyle, a licensed psychologist specializing in trauma, addiction, and emotional difficulties. I just threw that one in there.
[00:44:57] Dr. Doyle emphasizes practical, realistic, and solution-oriented approaches to mental health. He often focuses on how trauma, societal pressures, or unhelpful thought patterns can distort self-perception and behavior. So Grok thinks I would say my problem is trauma or emotional wounds, self-invalidation, cultural narratives, or practical skill deficits. But here's the interesting thing. At the bottom, Grok gives me, here are the relevant webpages. So it'll tell you, here's where I got this.
[00:45:27] Right, right. It includes my website, my blog, and seeksafely.org. Ah, interesting. So, it's interesting. It's comprehensive. Oh, wow. Here's Reddit. I don't want to go down the rabbit hole of what they're saying about me on Reddit. Oh, gosh. That's not true. I love the Reddit community. Redditors, I love you guys. I love you. Reddit's great. We can have a whole episode about self-help on Reddit, actually. That'd be fascinating. Anyway, we should probably kind of bring this full circle on AI.
[00:45:57] You know, the reason why we wanted to talk about it, I mean, yeah, the conversation was kind of spurred by the Tony Robbins AI thing. But we noticed that the circles that self-help tends to be big in are also the circles that seem to be really embracing AI. And that makes sense to me. Like, I've often said, there's a certain type of self-help person. I'm a self-help person. Like, we look at a new thing, and our first thought is not, eh, we're scared. Like, the first thought is kind of like, eh, maybe. Like, what if? I don't know.
[00:46:26] Like, what if I can have a therapist or a life coach in my pocket? Maybe. Who knows? One of our messages of Seek Safely as an organization is to kind of tap the brake on that impulse and say, well, maybe. Like, okay. But, you know, look, AI might be a resource. And by the way, I cannot express my annoyance. Whenever you want to have a conversation about AI, somebody comes along and says, well, it's inevitable. So get used to it. And nobody's arguing that.
[00:46:56] We get it. We know. Just like the internet was inevitable, we get it. We know it's inevitable. We can still have a conversation about it. You know, like, because we have questions, because we want to approach it intelligently, you know, it doesn't mean that we're dismissing it. In fact, quite the opposite. We're taking it really seriously. That's why we're having this conversation. But again, Seek Safely as an org has always been about, you know, look, don't dismiss any tool, whether it's a self-help tool, whether it's AI, whether it's, you know, whatever, a philosophical, spiritual tool, whatever it is. Don't necessarily dismiss it.
[00:47:24] But approach it intelligently, approach it cautiously. You know, anything that has the power to transform your life also has the power to ruin your life. So do so in a framework that really emphasizes and accounts for, you know, your true north and your true goals, your true values. So I think we've solved all the complicated, nuanced issues about AI right here in this hour-long podcast. I think we've done a great job. Yeah, we've solved all the problems for sure or just raised many more questions.
[00:47:53] But, you know, there's value in that too. Now, the spoiler is that we're not even doing this podcast. This is just AI Glenn and AI Jean. We programmed the AI bots. Surprise! We fed Jean and Ginny's whole book into it. It already has it, I'm sure. Oh my goodness. That shit is on Amazon. It's all like they've already gobbled it all up. True story. All righty. Nothing's secret. Well, thanks. Thanks, everyone, for listening.
[00:48:21] If you would like to support Seek Safely and these ridiculous conversations that Dr. Glenn and I like to have, you can go to seeksafely.org. Click on the Donate button and make a one-time or an ongoing donation. We always appreciate your support. And we also appreciate your feedback. So we're happy to hear. I'm curious. How are other people using these tools in their lives? What do you think? What are your hopes? What are your misgivings? I want to know. You bet. Yeah.
[00:48:50] Let us know. If you would be so kind, if you enjoyed our podcast, to leave us a five-star review on the platform of your choice, we would love that. If you did not enjoy our podcast, just forget I said anything just now. You can find us at Seek Safely on all the social things. You can find me at Dr. Doyle Says on all the social things. You can find JBeans on some of the social things. You're on threads, right? Yeah. I was on threads for a minute. I've been laying low. Laying low. We're kind of waiting. Nothing's gone, though. It's all there.
[00:49:20] It's all there. We're kind of waiting for the next thing. We're kind of waiting. Yeah. Yeah. All right, gang. Ciao. All right. Thank you. Thanks for listening to this episode. We hope that you have found it enlightening, and we'd be so, so grateful if you'd share it with the seekers in your life. We all know at least one, right? Until our next episode, you can find us on Twitter, Instagram, and Facebook at Seek Safely.
[00:49:47] Connect with Dr. Glenn Patrick Doyle at Dr. Doyle Says and me, Jean, at Jean C. Brown on Twitter. Feel free to send us an email, info at seeksafely.org. To support Seek Safely, you can make a secure donation on our website, seeksafely.org slash donate. The Seek Safely podcast is produced by Citizens of Sound.

