(2024-10-04) Horning Enough About Me

Rob Horning: Enough about me. The thing to write about in the “tech commentary” space this week is Google’s NotebookLM, a tool that lets users explore a set of documents through an LLM interface instead of reading them. It was just one AI summarizing tool among many until Google recently added the ability to make it generate a podcast, with AI voices bantering their way through the material it’s been given to capsulize.

Perhaps it speaks to the general reputation of podcasts that most commentators are quite ready to concede that NotebookLM’s results are passable: If you want to hear two hosts speaking with total confidence and a touch of condescension about topics they have, at best, a cursory grasp of, NotebookLM can make that happen for you on demand.

Like all the generative models, NotebookLM can quickly make a good-enough version of a half-baked idea that most human creators would decide wasn’t worth the effort if they had to do it themselves. Maybe that is what AI’s supposed “democratization” of creativity amounts to: No idea is too stupid to realize.

It seems like the goal is to offer users a way to become even more checked out of their exposure to ideas — if reading summaries and prompting chatbots is too arduous for you, you can just lean back and listen instead.

Max Read treats NotebookLM as a generic example to sketch what he calls the “five common qualities of generative-A.I. (GenAI) apps.” Please click through to see what they are, but they basically boil down to “AI apps are unreliable for most cognitive work but fun, until they become bulk slop generators.”

I would never have the stomach for this, but Henry Farrell fed his own posts into NotebookLM and listened to the LLMs explain his work, which pointedly revealed to him how generative models’ summaries “tend to select for features that are common and against those that are counter, original, spare, strange.” (2024-10-03-FarrellAfterSoftwareEatsTheWorldWhatComesOutTheOtherEnd)

No one expects me to know anything, and I don’t have to fake my way through anything. As it is, I usually read things because I want to read them, carefully or willfully or angrily as the case may be, and look for the less obvious patterns of thought in them, even if they are merely my projection.

I want to read reparatively and to read paranoidly.

Summarization tools seem like a rebrand of what this 2019 ACLU report called “robot surveillance.” The report imagined existing surveillance infrastructure being joined with machine learning to bring it to life.
