Rex,
> the charge of "behaviorism" is, I'm assuming, not a critique of induction in general, but just "if you look at whole-organism behavior, you're drastically underdetermined". <
Correct. A behaviorist approach makes sense if we have nothing else to go by, but we do.
> If you can do experiments on a biological system that would in principle tentatively suggest neural mechanisms of "understanding", but you instead do the equivalent experiments on an AI system and get an answer that would count if it were in the biological system, what do you say about it? <
I’m not sure; I would have to see exactly what sort of experiments we are talking about.
Part of the problem here is that we need to keep separate the notions of underlying mechanisms and emergent properties of the system (I don’t mean “emergence” in any kind of quasi-mystical fashion, of course). Whatever neural mechanisms underpin human understanding, they will turn out to be based on neurons, their activities, and their interconnections. But it is a mistake to therefore conclude “see? Nothing to see here but neurons.” The question is how neural activity generates what we call understanding.
The question for AI is: what makes us even think that it has understanding? I don’t know the answer to that question, but I think it’s fair to ask it of those who claim that AI understands things.
Incidentally, I posed the very question we have been debating to ClaudeAI. Here is its response:
“I think there's a meaningful distinction between how I process information and how humans understand concepts. When I "understand" the Pythagorean theorem, I'm using statistical patterns learned from vast amounts of text to recognize relationships between mathematical concepts, predict appropriate responses, and generate coherent explanations.
What I lack is the grounding in physical experience and consciousness that shapes human understanding. Humans understand mathematics through intuitive spatial reasoning developed from moving through the world, manipulating physical objects, and directly perceiving relationships. Your understanding includes a felt sense of "getting it" - that moment of insight when concepts click together.
My knowledge exists as patterns in a mathematical space without the phenomenological dimension of human experience - no "aha" moments, no struggle followed by clarity, no feeling of satisfaction when correctly applying a theorem.
That said, the boundaries between human and AI understanding aren't perfectly clear-cut. Both involve pattern recognition and information processing, just implemented through vastly different substrates and architectures.”