Sarah Sloat 

AI as a life coach: experts share what works, what doesn’t and what to look out for

It’s becoming more common for people to use AI chatbots for personal guidance – but this doesn’t come without risks
  
  

Can ChatGPT really help you change your life – or just flatter you? Composite: Rita Liu/The Guardian/Getty Images/Unsplash

If you’re like a lot of people, you’ve probably ditched your new year resolutions by now. Setting goals is hard; keeping them is harder – and failing can leave you with icky feelings about yourself.

This year, in an effort to tilt the scales toward success, some people used AI to shape their 2026 resolutions. It’s the latest step in an ongoing trend: in September 2025, OpenAI, the company behind ChatGPT, released findings showing that people commonly use the chatbot for personal guidance.

The company’s interpretation of this was that “people value ChatGPT most as an adviser rather than only for task completion”.

But just because you can ask AI for life advice, should you? And is there an art to it? Here’s what experts say are the dos and don’ts.

The pros and cons of chatbot guidance

AI-driven goal-setting isn’t inherently good or bad, explains Zainab Iftikhar, a Brown University PhD candidate whose research examines artificial intelligence and users’ wellbeing. For some people, AI can lower the barrier to self-reflection and be genuinely empowering. For those who feel stuck, overwhelmed or unsure of where to begin, prompts “can act as a scaffold” for expressing and understanding their ideas, says Iftikhar.

If the AI has access to information you’ve shared or asked it to generate, it’s also an efficient tool for synthesizing that information, explains Ziang Xiao, an assistant professor of computer science at Johns Hopkins University. Compiling and interpreting your previous data could help you quickly organize the thinking behind your goals.

But there are also drawbacks to using AI for goal-setting, says Iftikhar. Avoiding the potential harms can come down to how well you know yourself – and how well you can spot bad AI advice.

The risks of using AI for personal growth

Because large language models (LLMs), the type of AI that drives these systems, are trained on large-scale human-generated data, they can reproduce assumptions about success, self-improvement and relationships, Iftikhar explains. LLMs are also predominantly trained on English text and tend to exhibit a bias toward western values.

AI-suggested goals risk being over-generic, reinforcing “dominant cultural narratives, rather than what is meaningful for a specific individual”, says Iftikhar.

This bias can be very difficult to detect. AI chatbots can be persuasive in ways that make it hard for people to tell when they are being nudged toward mismatched goals, says Xiao. These tools may “inappropriately affirm goals that may not actually be a great fit for you”, he says.

Even if you use a chatbot frequently and specifically ask it to base its responses on previous conversations, there’s still a chance its replies will incorporate insights that have nothing to do with the information you’ve already shared, he explains.

In her research, Iftikhar has noticed that people who routinely correct or ignore bad AI responses are at an advantage when using these tools. Those who don’t – for a variety of reasons, including a lack of technical expertise – are “more likely to suffer from incorrect or harmful responses”, she explains.

AI can also reflect the bias of the user asking it for guidance. In a 2024 study, Xiao and colleagues observed that LLM users were more likely to become trapped in an echo chamber, compared with those who use traditional web searches.

AI chatbots are designed to make us happy, explains Xiao. In a 2025 paper published in the journal npj Digital Medicine, researchers showed that LLMs often prioritize agreement over accuracy. These tools are typically optimized with human feedback that rewards agreeableness and flattery.

In turn, chatbots engage in sycophancy, or excessive agreement, with users. (In May 2025, OpenAI announced it was rolling back an update that made ChatGPT too sycophantic.)

How to be better at goal-setting with AI

Iftikhar says it’s worth being wary of tools that skip self-reflection or emotional processing in favor of tidy action plans.

That said, AI can help brainstorm the actionable goals we want to set for ourselves, says Emily Balcetis, an associate professor of psychology at New York University. She recommends prompting AI to consider what obstacles you might face as you attempt to accomplish these goals, as well as back-up plans you might need.

“Have it be a collaborator in how you’ll track your progress and monitor performance along the way,” says Balcetis.

Xiao recommends critically analyzing the chatbot’s responses – and then giving it feedback. Does this plan actually fit with your life? Is it aligned with your priorities and hopes?

“Try to give informative, quality feedback to the AI just as you would give feedback to another person,” says Xiao. “This process will help AI generate a more personable, realistic goal and help you consider the things that you really want.”

Good goal-setting also includes a review of why you haven’t pursued these goals already, explains EJ Masicampo, an associate professor of psychology at Wake Forest University.

“When it feels like we’re failing at a goal, it’s often that we’ve just prioritized the other things we’re trying to do,” says Masicampo. Multiple goals are difficult to juggle, he explains. It can be more productive to examine one ambition and what’s obstructing your motivation to achieve it.

Ultimately, chatbots may work best as reflective partners, albeit partners that don’t actually care about your success.

“These tools sound very human-like, but by design, they can’t take responsibility for your actions,” says Xiao.

For better or for worse, that is up to you.

 
