Rex, in the context of this discussion (referring to Chomsky and Searle), “syntax” means operating purely by the manipulation of rules and/or patterns; “semantics” means that the agent grasps, or understands, what is going on.
Think of Searle’s Chinese room: the room appears to understand Chinese (semantics), but in reality it is something akin to Google Translate, all syntax.
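To make the syntax/semantics contrast concrete, here is a deliberately crude sketch of my own (not Searle’s setup, and the rulebook entries are made up): a “room” that answers Chinese input purely by rule lookup. From the outside it can look like understanding; on the inside there is only symbol manipulation.

```python
# Toy illustration of "all syntax": the room maps input symbols to output
# symbols by lookup alone. Nothing in here represents meaning; the operator
# following these rules need not understand a word of Chinese.

RULEBOOK = {
    "你好": "你好！",            # hypothetical rules, chosen arbitrarily
    "你会说中文吗？": "会。",
}

def chinese_room(symbols: str) -> str:
    """Return a reply by pattern matching only; no semantics involved."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # default: "please say that again"

if __name__ == "__main__":
    print(chinese_room("你好"))  # looks like understanding from the outside
```

The point of the sketch is only that rule-following behavior can be produced without any grasp of what the symbols mean.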
My point is that we have good reasons to think that humans are capable of semantic grasping while we have no such reason in the case of Chat.
Here is another way to think of it: we can’t judge, solely on the basis of an agent’s behavior, whether it is responding on syntax alone or whether it has semantics, because AI is designed to simulate semantics. Behaviorism in psychology, the notion that we can study observable behavior alone and do without any inference about internal (mental) states, failed. That is also why, in my opinion, the famous Turing test is entirely irrelevant: it only measures how well a machine can imitate a human, not whether it functions like a human.
Hence my references to biology, neuroscience, and so forth. We think that humans, but not AI, are capable of understanding because of a combination of behavior, experience of internal states, and knowledge of mechanisms (as limited as it is).
About consciousness and intelligence: I actually said that understanding, not intelligence, requires consciousness. Yes, that claim is based on us, but we are the only known examples of understanders, so it is again an issue of burden of proof, and so far I don’t think AI enthusiasts have come even close to meeting it.