(2023-01-13) Shipper Gpt3 Is The Best Journal Ive Ever Used
Dan Shipper: GPT-3 Is the Best Journal I've Ever Used. I wanted to see if it could help me understand issues in my life better, pull out patterns in my thinking, help me bring more gratitude into my life, and clarify my values... I’ve been journaling for 10 years, and I can attest that using AI is journaling on steroids.
If you know how to use it correctly and you want to use it for this purpose, GPT-3 is pretty close, in a lot of ways, to being at the level of an empathic friend.
Why chatbots are great for journaling
Journaling is already an effective personal development practice.
But journaling has a few problems. For one, it’s sometimes hard to sit down and do it.
For another, sometimes it feels a little silly—is summarizing my day really worth something?
Once you get over those hurdles, the practice tends to get stale.
You want your journal to feel like an intimate friend you can confide in—someone who's seen you in different situations and can reflect back to you what's important in crucial moments.
Journaling in GPT-3 feels more like a conversation, so you don’t have to stare at a blank page or feel silly because you don’t know what to say.
How I started with GPT-3 journaling
I had a bunch of ideas to start. I tried one from a Facebook PM, Mina Fahmi, whom I met at the AI hackathon I wrote about a few weeks ago. He suggested telling GPT-3 to take on a persona, and told me that he'd had great results asking it to be Socrates.
I tried Socrates, the Buddha, Jesus, and a few others, and found I liked Socrates the best.
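The persona technique above amounts to prefacing each journal entry with instructions asking the model to answer in a chosen voice. A minimal sketch of that prompt assembly, in Python; the wording is my own guess, not the author's actual prompt:

```python
def build_persona_prompt(persona: str, journal_entry: str) -> str:
    """Wrap a journal entry in instructions asking the model to
    respond in the voice of a chosen persona (e.g. Socrates).
    The phrasing here is illustrative, not the article's prompt."""
    return (
        f"You are {persona}. Respond to the journal entry below as "
        f"{persona} would: ask probing questions, be compassionate, "
        "and help the writer examine their own thinking.\n\n"
        f"Journal entry:\n{journal_entry}\n\n"
        f"{persona}:"
    )

prompt = build_persona_prompt("Socrates", "I felt anxious about work today.")
print(prompt)
```

The resulting string would be sent as the prompt to a completions-style API; swapping the `persona` argument is all it takes to try the Buddha or a therapist instead.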
There’s a long tradition in various religions of visualizing and interacting with a divine, compassionate figure as a way of getting support—and this was a surprisingly successful alternative route to a similar experience. (Alien Intelligence, avatar)
After a while, though, I became a little bored of Socrates. I’m a verified therapy nerd, so the obvious next step was to try asking GPT-3 to do interactions based on various therapy modalities.
I tried asking GPT-3 to become a bot that’s well-versed in Internal Family Systems (IFS).
GPT-3 isn't bad at that.
I also tried asking it to be a psychoanalyst and a cognitive behavioral therapist, both of which were interesting and useful. I even asked it to do Jungian dream interpretation
Another thing I tried is asking GPT-3 to help me increase my sense of gratitude and joy—like a better gratitude journal
It starts by acting like a normal gratitude journal, asking me to list three things I'm grateful for. But once I respond, it probes for details about what I'm grateful for, getting me past stock answers and into the emotional experience of gratitude.
One of my favorite therapy modalities is ACT—acceptance and commitment therapy—because I love its focus on values.
Values work is challenging because sometimes it’s hard to connect your day-to-day experiences to your values. So I wanted to see if GPT-3 could help.
I took a sample therapy dialog from an ACT-focused values book that I love, Values in Therapy, and asked GPT-3 to generalize from that dialog to learn how to talk to me about values.
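Seeding the prompt with a sample dialog is few-shot prompting: the transcript excerpt goes first, and the model continues in the same style. A sketch of that assembly, assuming invented placeholder lines rather than the actual dialog from Values in Therapy:

```python
def build_few_shot_prompt(sample_dialog: list[tuple[str, str]],
                          new_entry: str) -> str:
    """Prepend a sample therapist/client exchange, then append the
    user's new entry for the model to continue in the same style.
    The sample lines are placeholders, not the book's dialog."""
    lines = ["The following is a values-focused (ACT) conversation.", ""]
    for speaker, text in sample_dialog:
        lines.append(f"{speaker}: {text}")
    lines.append(f"Client: {new_entry}")
    lines.append("Therapist:")
    return "\n".join(lines)

sample = [
    ("Therapist", "What matters most to you about your work?"),
    ("Client", "Feeling like I'm helping people, I think."),
]
prompt = build_few_shot_prompt(sample, "Today felt pointless.")
print(prompt)
```

Ending the prompt with "Therapist:" cues the model to answer as the therapist, generalizing from the example exchange rather than from explicit instructions.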
While I liked these early experiments, they had a few significant problems.
First, the OpenAI playground isn’t designed to facilitate chats, so it’s hard to use.
Second, it doesn’t record inputs between sessions, so I ended up having to re-explain myself every time I started a new session.
Third, it sometimes gets repetitive and asks the same questions.
I built a solution: a web app with a chatbot interface that remembers what I say in every session so I never have to repeat myself.
The bot lets me select a persona—like Socrates or an Internal Family Systems therapist—which corresponds to the prompts above. Then I can have a conversation with it.
It can even output and save a summary of the session to help me notice patterns in my thinking over time.
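The memory feature described above can be approximated by saving a short summary after each session and prepending past summaries to the next session's prompt. A minimal sketch, assuming a JSON file store; the file format and prompt wording are my assumptions, not the app's actual implementation:

```python
import json
import tempfile
from pathlib import Path

def save_summary(store: Path, summary: str) -> None:
    """Append one session summary to a JSON list on disk."""
    summaries = json.loads(store.read_text()) if store.exists() else []
    summaries.append(summary)
    store.write_text(json.dumps(summaries))

def session_context(store: Path) -> str:
    """Render saved summaries as a preamble for the next session's
    prompt, so the bot 'remembers' earlier conversations."""
    if not store.exists():
        return ""
    summaries = json.loads(store.read_text())
    return "Notes from earlier sessions:\n" + "\n".join(
        f"- {s}" for s in summaries
    )

store = Path(tempfile.mkdtemp()) / "journal_summaries.json"
save_summary(store, "Explored anxiety about a product launch.")
save_summary(store, "Practiced gratitude; noticed stock answers.")
print(session_context(store))
```

Prepending `session_context(store)` to each new prompt is what removes the need to re-explain yourself; summarizing (rather than storing full transcripts) keeps the prompt short enough to fit the model's context window.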
I'll be releasing the bot soon for paying Every members.
There is also something weird about all of this. Spilling your guts to a robot somehow cheapens the experience because it doesn’t cost much for a robot to tell you it understands you.
This mix of feelings is reflected in a Twitter thread by Rob Morris, the founder of a peer-to-peer support app called Koko: they had to stop using their GPT-3 integration because people felt that a response from GPT-3 wasn't genuine and ruined the experience.
Those feelings are understandable, but whether or not they ruin the experience depends on how the interaction is framed to you, and how familiar you are with these tools.
There is something innately appealing about building a relationship with an empathetic friend that you can talk to any time.
If you're someone who's journaled for a long time, you'll find a lot of value in trying GPT-3 out as an alternative to your day-to-day practice.