(2020-12-28) Chapman How To Think Real Good

David Chapman: How To Think Real Good. Thinking about thinking: I enjoy thinking about thinking. That’s one reason I spent a dozen years in artificial intelligence research. To make a computer think, you’d need to understand how you think. So AI research is a way of thinking about thinking that forces you to be specific.

Soon after, I realized that AI was a dead end, and left the field. Although my work in AI was influential, it seemed worthless in retrospect.

Maybe the most useful thing I could do would be to write a book about how to think? I began. My jokey placeholder title was “How To Think Real Good.” I had a lot of ideas and some sketchy notes, but wound up abandoning the project.

My fascination and frustration with LessWrong come from my long-standing interest in the same general project, plus liking much of what LW does, plus disliking some of it, plus the sense that LW simply overlooks most of what goes into effective, accurate thinking. LW suggests (sometimes, not always) that Bayesian probability is the main tool for effective, accurate thinking. I think it is only a small part of what you need.

I think that the problem Bayesianism addresses is a small and easy one.
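To make that concrete, here is a minimal sketch (my example, not Chapman’s, with made-up numbers) of the calculation Bayesianism actually addresses: once the hypotheses and likelihoods are fixed, updating on evidence is mechanical.

```python
# A minimal Bayes-rule update (hypothetical numbers, for illustration).
prior_disease = 0.01            # P(disease)
p_pos_given_disease = 0.95      # P(positive test | disease)
p_pos_given_healthy = 0.05      # P(positive test | no disease)

# Law of total probability: P(positive)
p_pos = (p_pos_given_disease * prior_disease
         + p_pos_given_healthy * (1 - prior_disease))

# Bayes' rule: P(disease | positive)
posterior = p_pos_given_disease * prior_disease / p_pos
print(f"P(disease | positive) = {posterior:.3f}")  # ~0.161
```

The hard part, which the update itself doesn’t touch, is deciding which hypotheses, which evidence, and which numbers belong in the calculation at all.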

My answer to “If not Bayesianism, then what?” is: all of human intellectual effort. Figuring out how things work, what’s true or false, what’s effective or useless, is “human complete.” In other words, it’s unboundedly difficult, and every human intellectual faculty must be brought to bear. We could call the study of that enterprise “epistemology”; and “rationality” is a collection of methods for it.

This whole web site, and the book it puts online, are a return to that project, and a vast expansion of this 2013 post.

I’ll tell some anecdotes about thinking. These concentrate on the application of formal methods of thought, mostly because that’s LW’s orientation. This is probably a wrong emphasis; most insights result from informal reasoning and observation, not technical rationality.

The anecdotes concern academic research, because that’s what “How to think real good” was going to be about. Nowadays, I’m more interested in the everyday understanding of non-academics. That’s the subject of Meaningness, and largely of LW too.

Before the anecdotes, I’ll talk in general about problem formulation, because that’s an aspect of epistemology and rationality that I find particularly important, and that the Bayesian framework entirely ignores.

Problem formulation (problem definition)

Finding a good formulation for a problem is often most of the work of solving it.

A bewildered Bayesian might respond: You should consider all hypotheses and types of evidence! Omitting some means you might get the wrong answer!

Unfortunately, there are far too many hypotheses and too many kinds of evidence to consider them all.
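A back-of-the-envelope illustration (mine, not Chapman’s) of why enumerating them fails: even over a handful of binary features, the space of candidate hypotheses explodes.

```python
# Each hypothesis over n binary features is a boolean function of them,
# so there are 2**(2**n) candidate hypotheses to "consider".
for n in range(1, 6):
    print(n, 2 ** (2 ** n))
# n=5 already gives 2**32, about 4.3 billion hypotheses; real problems
# have far more features, and the evidence space explodes the same way.
```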

Before applying any technical method, you have to already have a pretty good idea of what the form of the answer will be. Part of a “pretty good idea” is a vocabulary for describing relevant factors.

Choosing a good vocabulary, at the right level of description, is usually key to understanding.

A good vocabulary has to do two things. Let’s make them maxims:

1. A successful problem formulation has to make the distinctions that are used in the problem solution.

2. A successful problem formulation has to make the problem small enough that it’s easy to solve.

Truth does not apply to problem formulations; what matters is usefulness.

All problem formulations are “false,” because they abstract away details of reality. (All Models Are Wrong, But Some Are Useful)

There are no objects in the real world.

A system of discourse about the real world must involve approximations of some kind. This is quite unlike the case of mathematics, in which everything can be defined. —The Feynman Lectures

There’s an obvious difficulty here: if you don’t know the solution to a problem, how do you know whether your vocabulary makes the distinctions it needs? The answer is: you can’t be sure; but there are many heuristics.

Work through several specific examples before trying to solve the general case.

Problem formulation and problem solution are mutually-recursive processes.

You need to go back and forth between trying to formulate the problem and trying to solve it.

The difficulty then is that you have to recognize incremental progress in both the formulation and the solution. It’s rare that you can use formal methods to evaluate that progress.

A medium-specificity heuristic, applicable mainly in computer science: If you are having a hard time, make sure you aren’t trying to solve an NP-complete problem. If you are, go back and look for additional sources of constraint in the real-world domain.
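A toy illustration of that heuristic (my sketch, not Chapman’s): subset-sum is NP-complete in general, but if the real-world domain guarantees small integer targets, a pseudo-polynomial dynamic program makes it easy. The extra constraint is what rescues you.

```python
from itertools import combinations

def subset_sum_brute_force(values, target):
    """Exponential: tries all 2**n subsets. Fine only for tiny n."""
    return any(sum(c) == target
               for r in range(len(values) + 1)
               for c in combinations(values, r))

def subset_sum_dp(values, target):
    """Pseudo-polynomial DP, O(n * target): fast when the domain
    guarantees small integer targets -- an extra source of constraint."""
    reachable = {0}
    for v in values:
        reachable |= {s + v for s in reachable if s + v <= target}
    return target in reachable

vals = [3, 34, 4, 12, 5, 2]
print(subset_sum_brute_force(vals, 9))  # True (4 + 5)
print(subset_sum_dp(vals, 9))           # True, and scales far better
```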

Rationality without probability

When I say “Bayesian methods are a tiny fraction of rationality,” somehow people don’t get it. So let’s look at an example.

I was interested in “classical planning,” a technical problem in robotics research.

In the blocks-world example, the robot has to plan ahead: it needs to figure out that it has to move the green block first.
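A toy sketch of the lookahead involved (mine, not Chapman’s formalism, assuming a hypothetical goal of getting red onto blue): breadth-first search over stacking states discovers that green must come off red before the goal stack can be built.

```python
from collections import deque

# Toy blocks world. A state is a tuple of stacks, each stack a tuple
# listed bottom-to-top. Start: green sits on red, blocking it.
START = (("red", "green"), ("blue",))
GOAL_STACK = ("blue", "red")            # want red sitting on blue

def moves(state):
    """All states reachable by moving one clear (top) block."""
    stacks = [list(s) for s in state]
    for i, src in enumerate(stacks):
        if not src:
            continue
        block = src[-1]
        # Move the block onto every other stack...
        for j in range(len(stacks)):
            if j == i:
                continue
            nxt = [list(s) for s in stacks]
            nxt[i].pop()
            nxt[j].append(block)
            yield tuple(tuple(s) for s in nxt if s)
        # ...or onto the table (a new stack).
        nxt = [list(s) for s in stacks]
        nxt[i].pop()
        nxt.append([block])
        yield tuple(tuple(s) for s in nxt if s)

def plan(start):
    """Breadth-first search; returns the sequence of states visited."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if GOAL_STACK in state:
            return path
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

for step in plan(START):
    print(step)
# The shortest plan moves green off red first, then red onto blue.
```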

Morals? You can never know enough mathematics.

The classical planning problem was easy once it was recast in logical terms. Probably none of the previous researchers in the field happened to have that background.

Put another way: an education in math is a better preparation for a career in intellectual field X than an education in X.

You should learn as many different kinds of math as possible. It’s difficult to predict what sort will be relevant to a problem.

Look, Ma, no Bayes!

First, the classical planning problem is definitely a problem of rationality.

This is a problem Bayes won’t help with at all.

At the meta level: the year of hard thinking I did to solve the classical planning problem involved huge uncertainties. Was a general solution even possible? What sort of approach would work?

None of these uncertainties could usefully be modeled with probabilities, I think. The issues were way too amorphous for that.

I find it problematic, though, when Bayesians posit unconscious probabilistic reasoning as an explanation for rationality in cases where there is no evidence. This is dangerously close to “the God of the gaps.”

Reformulating rational action

My next example comes from work with Phil Agre, which led to both our PhD theses. Phil had an extraordinary series of insights into how effective action is possible (with some contributions from me).

In my Master’s thesis, I had proven that there can be no efficient solution to the classical planning problem. (Formally, it’s NP-complete, which rules out an efficient general algorithm unless P = NP.) Since people obviously do act rationally, this seemed a paradox.

One of Agre’s insights was that the problem formulation was wrong. That is, the classical planning problem is dissimilar to most actual situations in which people act rationally.

Just learning to think like an anthropologist, a psychologist, and a philosopher will beneficially stretch your mind. (model agnosticism)

One key idea came from a cookbook. Fear of Cooking emphasizes “the IRIFOY principle”: it’s right in front of you. You know what scrambled eggs are supposed to be like; you can see what is happening in the pan; so you know what you need to do next. You don’t need to make a detailed plan ahead of time.

IRIFOY doesn’t always work; sometimes you paint yourself into a corner if you don’t think ahead. But mostly that doesn’t happen; and Phil developed a deep theory of why it doesn’t. One aspect is: we can’t solve NP-complete problems, so we organize our lives (and our physical environments) so we don’t have to.
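Crudely, in code (my toy illustration, not Agre’s theory): an IRIFOY-style agent doesn’t search for a complete plan; it maps the observable state straight to the next action.

```python
# IRIFOY-style control, crudely: no lookahead, just a policy that maps
# what's right in front of you to the next action.

def scrambled_eggs_policy(state):
    """Pick the next action from the observable state of the pan."""
    if not state["eggs_in_pan"]:
        return "crack eggs into pan"
    if state["runny"]:
        return "keep stirring"
    return "serve"

state = {"eggs_in_pan": False, "runny": True}
while (action := scrambled_eggs_policy(state)) != "serve":
    print(action)
    # In reality the world updates itself; here we fake the transitions.
    if action == "crack eggs into pan":
        state["eggs_in_pan"] = True
    elif action == "keep stirring":
        state["runny"] = False   # pretend one stir finishes the eggs
print("serve")
```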

Phil and I spent a couple years in careful observation, recording, and analysis of people actually doing things. From that, we developed an entirely different way of thinking about action—both what the problem is, and how to address it.

We applied as many different intellectual tools as we could find. In the end, ethnomethodology, an anthropological approach to describing action, was the single most useful.

Probability theory is sometimes an excellent way of dealing with uncertainty, but it’s not the only way, and sometimes it’s a terrible way. One reason is that it collapses together many different sources of uncertainty.
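One way to see the collapse (a standard example, mine rather than Chapman’s): a coin you know is fair and a coin you know nothing about both get probability 0.5 for heads, yet they should respond to new evidence completely differently.

```python
# Two sources of "P(heads) = 0.5" that a single number collapses together.
# Beta(a, b) posterior over the coin's bias; posterior mean = a / (a + b).

known_fair = (500, 500)   # lots of prior evidence the coin is fair
no_idea    = (1, 1)       # uniform prior: total ignorance

def mean(a, b):
    return a / (a + b)

print(mean(*known_fair), mean(*no_idea))  # both 0.5

# Now observe 8 heads in 10 flips:
heads, tails = 8, 2
for name, (a, b) in [("known fair", known_fair), ("no idea", no_idea)]:
    print(name, round(mean(a + heads, b + tails), 3))
# known fair -> 0.503  (barely moves)
# no idea    -> 0.75   (swings a lot)
```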

We recognized that our approach could generate five or so years of further work, but would then fizzle out. Evaluate the prospects for your field frequently. Be prepared to switch if it looks like it is approaching its inherent end-point.

An AI model of problem formulation

“Machine learning” is basically a collection of statistical techniques. As with other formal methods, they can work well when a problem is framed in terms that expose relevant features. They don’t work if your formalization of the problem is not good enough. That is fine if you view them as tools a scientist can use to help understand a problem; but our interest was in making minds, autonomous creatures that could figure out how to act effectively by themselves.

Leslie Kaelbling, working with Stan Rosenschein, independently developed a theory of action similar to Agre’s and mine; and then independently recognized the same limitations we did. Around 1990, she and I hoped these limitations could be overcome using machine learning techniques, and we did many experiments on that, independently and in collaboration.

Our idea was that the creature could incrementally construct a formulation of the problem it faced by recognizing inputs that behaved statistically differently relative to action and reinforcement.

When we did this research, neither of us knew much about statistics. In particular, we’d never heard of Student’s t-test, a basic statistical tool.

However, we did know enough about what statistics is about, and its vocabulary, that we could formulate one of our sub-problems statistically.

Having described it that way, we needed only half an hour of flipping through Leslie’s stats text together to find that Student’s t was the tool for the job.
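In that spirit, a minimal sketch (not their actual experiment, with fabricated data) of the sub-problem framed statistically: does reward differ depending on whether some input bit is on? Student’s t answers exactly that.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fake reinforcement data: rewards logged while some input bit was on/off.
reward_bit_on  = rng.normal(1.0, 1.0, 200)   # bit on: higher mean reward
reward_bit_off = rng.normal(0.0, 1.0, 200)

# Student's t: do the two conditions have different mean reward?
t, p = stats.ttest_ind(reward_bit_on, reward_bit_off)
print(f"t = {t:.2f}, p = {p:.2g}")
# A tiny p-value suggests this input is worth adding to the problem
# formulation; a large one suggests it behaves the same either way.
```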

It’s more important to know what a branch of math is about than to know the details.

This suggests: Get a superficial understanding of as many kinds of math as possible. That can be enough that you will recognize when one applies, even if you don’t know how to use it.

Math only has to be “correct” enough to get the job done.

After I decided that “strong” AI research (making minds, i.e. AGI) was going nowhere, and after the “what should I do with my life!?!” existential crisis, I figured I’d apply what I knew to something actually useful. Pharmaceutical drug discovery (finding new medicines) seemed the best bet.

I worked on this problem in a team in the early ‘90s. Many of our conceptual advances were due to Ajay Jain, who is perhaps the best problem solver I’ve collaborated with. I learned a lot from him. I’ve found that pretty smart people are all smart in pretty much the same way, but extremely smart people have unique cognitive styles, which are their special “edge.”

What I observed about Ajay is that he always went for the simplest, most obvious, least interesting approach, and made it work.

I’ve just found Gian-Carlo Rota’s “Ten lessons I wish I had been taught,” which includes the “bag of tricks” idea. It’s very funny, and has some good advice.

So anyway, back to drugs. Medicinal chemists think about a molecule in terms of its connectivity graph: its atoms and covalent bonds. That is entirely irrelevant: whether a molecule binds its target depends on its three-dimensional shape, not on its bond graph.

This is an example of problem formulation failure.

Part of the difficulty was that no one had a good idea about how to represent shape.
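To make the formulation gap concrete (my toy sketch, with made-up coordinates): the same connectivity graph is compatible with conformations of quite different shapes, and shape is what binding depends on.

```python
from math import dist

# The same molecule, two formulations. (Toy sketch, hypothetical data.)

# 1. Connectivity graph: atoms and covalent bonds -- what medicinal
#    chemists traditionally draw.
butane_graph = {
    "C1": ["C2"], "C2": ["C1", "C3"], "C3": ["C2", "C4"], "C4": ["C3"],
}

# 2. Shape: 3D coordinates of one particular conformation. The same
#    graph admits many conformations with very different shapes.
butane_anti   = {"C1": (0.0, 0.0, 0.0), "C2": (1.5, 0.0, 0.0),
                 "C3": (2.3, 1.3, 0.0), "C4": (3.8, 1.3, 0.0)}
butane_gauche = {"C1": (0.0, 0.0, 0.0), "C2": (1.5, 0.0, 0.0),
                 "C3": (2.3, 1.3, 0.0), "C4": (2.3, 2.0, 1.3)}

# Identical connectivity, different end-to-end distances:
print(dist(butane_anti["C1"], butane_anti["C4"]))     # ~4.0
print(dist(butane_gauche["C1"], butane_gauche["C4"]))  # ~3.3
```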

If I had more time, I could do better. But figuring out how to figure stuff out is even harder. This is where the LessWrong internet collaborative approach shines brilliantly. It really needs to be a community effort. (Scenes, Collaborations, Inventions, And Progress)

