(2024-06-07) Johnson Revenge Of The Humanities

Steven Johnson: Revenge Of The Humanities. Last week I had the honor of delivering the commencement address at my youngest son's high school graduation... the somewhat paradoxical idea that, thanks to the AI revolution, we are entering a period where it will be a great time to be a humanities major with an interest in technology.

there is a case to be made that college and grad students are over-indexing on the math and the programming, just as the technology is starting to demand a different set of skills.

interacting with the most significant technology of our time—language models like GPT-4 and Gemini—is far closer to interacting with a human

People who have command of clear and persuasive prose have a competitive advantage right now in the tech sector.

What is the most responsible behavior to cultivate in the model, and how do we best deploy this technology in the real world to maximize its positive impact? What new forms of intelligence or creativity can we detect in these strange entities?

Perhaps someday it will be possible for a code-illiterate person like myself to conjure an entire application into being just by describing the feature set to a language model, but we are not there yet. And of course building the models themselves will almost certainly continue to require skills that are best honed in engineering and computer science programs, not writing seminars.

There's a wonderful illustration of the kinds of skills that are now at a premium in the conversation I had a few days ago with Dan Shipper for his AI & I podcast (best viewed on video so you can see what's happening on screen). (2024-06-06-ShipperIsNotebooklmgooglesResearchAssistanttheUltimateToolForThought)

The sample use case I brought to the show was a notebook that I had filled largely with interview transcripts from the NASA oral history project. The notebook has something like 300,000 words of interviews with astronauts, flight directors, and other folks from the Apollo and Gemini programs.

Dan and I decided that we would try to use this notebook to gather ideas for a potential documentary project about the Apollo 1 fire that tragically killed three astronauts in early 1967. I recommend watching the video starting around the thirty-minute mark, where we really dive into the exercise -- I think it's probably the best example to date of the kind of high-level creative and conceptual work that NotebookLM makes possible.

But at a certain point, Dan really takes the wheel, and says, effectively: "This is a Steven Johnson project, and so it's got to have some surprising scientific or technological connection that the reader/viewer wouldn't expect; let's ask NotebookLM to help us find that angle."

The skill that Dan displays here is basically all about being able to think through this problem: Given this body of knowledge, given the abilities and limitations of the AI, and given my goals, what is the most effective question or instruction that I can propose right now?

The other thing worth noting in the exchange—and I take a step back to reflect on it in the middle of the exercise—is the range of intelligences involved in the project. On the one hand you have the intelligence of all the astronauts and flight directors contained in the interview transcripts themselves; you have the intelligence of all the authors whose quotes I have gathered over the past two decades of research and reading; you have the intelligence of two humans who are asking questions and steering the model's attention towards different collections of sources. (cyborg)

I used to describe my early collaborations with semantic software as being like a duet between human and machine. But these kinds of intellectual adventures feel like a chorus. (chorus of voices)

The artist Holly Herndon made a persuasive case for calling artificial intelligence "collective intelligence" in a recent conversation with Ezra Klein. My friend Alison Gopnik has been talking about AI for a long while as a "cultural technology."

Another way to put that—which I will probably adapt into a longer piece one of these days—is that language models are not intelligent in the ways that even small children are intelligent, but they are already superhuman at tasks like summarization, translation (both linguistic and conceptual), and association. And when you apply those skills to artfully curated source material written by equally, but differently, gifted humans, magic can happen.
