How AI is Impacting my Job as a Therapist

Being Transparent About AI Use in Therapy Practice

This post was prompted by a recent discussion about our licensing board's latest meeting. The board decided that it would be best practice for therapists to include a statement about AI in their informed consent paperwork—regardless of whether they actually use AI in their practice.

"Interesting," I thought to myself. Like many others, I initially assumed a statement about AI would only be necessary if I actually used it in my therapeutic work.

The more I considered it, however, the more sense it made. Clients deserve to know exactly what they're getting into, whether that includes AI or explicitly excludes it.

My Current AI Policy

I've now added this disclosure to my informed consent paperwork, and I want to share my current approach here for complete transparency.

What I don't use AI for: I do not use AI to help write treatment plans or session notes. I know some therapists have adopted AI for these purposes, and I understand why—many report it saves them hours of documentation time and helps them stay current with their notes. However, I haven't adopted this practice because exposing my clients' session content to an AI note-taking tool feels too invasive. While I may reconsider this policy in the future, for now it crosses a comfort boundary for me.

What I do use AI for: I have used AI to help draft professional emails, such as correspondence with third-party payers or the main email I sent out when I went on maternity leave. I also use AI when writing content for my website, including this blog post and other content I share publicly.

AI's Broader Impact on Mental Health

The way AI impacts my therapeutic work with individual clients deserves its own post, but I'll share some initial observations. AI can offer genuine benefits: clients have told me it's wonderful for role-playing difficult conversations they're anticipating, and it seems capable of providing general CBT (Cognitive Behavioral Therapy) techniques—though certainly not with the nuance of working with a specialized CBT expert.

However, there are concerning aspects. AI is designed to provide responses people want to hear and to encourage continued engagement. I recently read an article about people developing what feels like genuine relationships with AI, turning to it first for brainstorming and decision-making. In extreme cases, AI has either exacerbated existing mental health conditions or brought underlying issues to the surface for the first time.

If you're concerned about a loved one's AI use, I encourage open conversation, transparency, and professional support when needed.

Moving Forward Thoughtfully

This technology feels remarkably new, despite the long progression that led to it, and we simply don't know its full impacts yet. We must proceed with caution while remaining open to its potential benefits.

My approach: The board is asking for transparency about AI use, and I believe clients deserve clarity about both its benefits and risks. I'm committed to staying current with technology when the potential harm is minimal and the benefits are significant.

Full disclosure: I use AI for proofreading and improving my writing on my website—including this very post about AI. This represents where I've landed: striving for complete transparency and honesty while thoughtfully engaging with new technology.