(2023-01-20) Shipper Can Gpt3 Explain My Past And Tell My Future
Dan Shipper: Can GPT-3 Explain My Past and Tell My Future? “What do you want to know about yourself?” asks the bot. The bot is built with GPT-3 and has access to hundreds of my old journal entries from the last 10 years, which it can use to answer my questions. (journaling)
I start to type a question into my laptop. “When in his life has the author been the happiest?” GPT-3 doesn’t know my name yet, so I have to refer to myself as the author. “Please be specific about moments and situations. Identify what caused the happiness. Be truthful, don't make anything up. If you can't find a specific moment of happiness, don't summarize. Find moments that are surprising.”
Jackpot.
This is by far my most personal experiment and, in my opinion, one of the most valuable.
I was writing an article for Every and having trouble finding the main thread, so I pasted what I had so far into ChatGPT and asked it to summarize my key points.
I got the results, and they were great. I saw the article I was writing in a totally new way, and I was able to finish it quickly and easily.
In my next therapy session I had an idea. I asked my therapist if I could record our session and feed the transcript into GPT-3. It's been a tough few weeks.
The results were stunning. “From this session, it appears the client is experiencing a lot of stress due to a variety of life events, such as [REDACTED]. The client is feeling overwhelmed, exhausted, and scared.
On a deeper level, the client is struggling with feelings of inadequacy, a fear of disappointing others, and a fear of conflict.”
It correctly identified each of the things that I was struggling with, and then, in its own words, it expressed how I’d been feeling—but far more precisely than I had been able to do on my own. In some strange way, it felt like the AI knew me better than I knew myself.
The experience left me even more curious to experiment.
GPT-3 might be good at summarizing text, but, at least for now, it has a poor memory: you can only feed it a few pages of text at a time to get summaries back. If you try to, say, feed it an entire journal, it'll error out.
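To make the limit concrete, here's a minimal sketch using OpenAI's tiktoken tokenizer. The "gpt2" encoding is a close approximation of GPT-3's tokenizer, and the ~4,000-token figure is the approximate prompt-plus-completion limit of the davinci models at the time; the journal.txt filename is my assumption:

```python
import tiktoken

MAX_TOKENS = 4000  # text-davinci-003 allows ~4,097 tokens, prompt + completion

# The "gpt2" encoding approximates GPT-3's tokenizer.
enc = tiktoken.get_encoding("gpt2")

journal = open("journal.txt").read()  # hypothetical file of all entries
n_tokens = len(enc.encode(journal))

print(f"Journal is {n_tokens} tokens; the limit is ~{MAX_TOKENS}.")
if n_tokens > MAX_TOKENS:
    print("Too big to summarize in one call -- it has to be chunked.")
```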
Luckily, I discovered a new library called GPT Index that makes this easy with just a few lines of code.
When I ask a question, it retrieves the most relevant chunks. It summarizes the chunks, and then synthesizes the summaries repeatedly until it gets a final answer.
Using GPT Index's GPTSimpleVectorIndex data structure, I wrote a short script that breaks my journal entries into chunks and stores them in a way that makes them easily searchable.
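The indexing step might have looked roughly like this, a sketch based on GPT Index's early-2023 API (later renamed LlamaIndex). The journal_entries/ folder and the index filename are my assumptions, and an OPENAI_API_KEY must be set in the environment:

```python
from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader

# Load every file in the folder as a document.
documents = SimpleDirectoryReader("journal_entries").load_data()

# GPTSimpleVectorIndex splits the documents into chunks, embeds each
# chunk with OpenAI's embeddings API, and stores the vectors.
index = GPTSimpleVectorIndex(documents)

# Persist the index so later queries don't have to re-embed everything.
index.save_to_disk("journal_index.json")
```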
The script starts by asking the user what kind of journal chunks it should load.
Once the documents are returned, I can ask my actual question.
I have to tell it not to make things up, so that it stays as close as possible to what it finds in the entries.
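A sketch of the question-asking side, again using GPT Index's early-2023 API. The input() prompts and the exact prompt wording are my guesses at the script's behavior, not the author's actual code:

```python
from gpt_index import GPTSimpleVectorIndex

index = GPTSimpleVectorIndex.load_from_disk("journal_index.json")

# Ask which kind of journal chunks to search, then the question itself.
journal_type = input("What kind of journal entries should I load? ")
question = input("What do you want to know about yourself? ")

# At query time, GPT Index embeds the question, retrieves the most
# similar chunks, and has GPT-3 synthesize an answer from them.
response = index.query(
    f"Using the author's {journal_type} journal entries, answer the "
    f"following question. Be truthful, don't make anything up. "
    f"Question: {question}"
)
print(response)
```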
It's far from perfect: its answers are sometimes repetitive. Sometimes they're generic. Sometimes they're just plain wrong.
But sometimes it provides that valuable “aha!” moment where something clicks.
Is it insight? Or is it confirmation bias dressed up in insight's clothes? Does it matter?