
The Chinese room reacts only to syntax, the shape of symbols (it is purely syntactic). But brains are full of structure. In the room, Chinese symbols sit scattered in "piles" on the floor, are moved around in "batches" or "bunches", or are stored jumbled up in "baskets", with no structural connections between the symbols.

The things computers process are called "symbols". Computers can build structure between symbols and can react to, or follow, that structure, and often do. Virtual connections between memory locations can be established using pointers, and algorithms can follow those connections using direct memory addressing and indirection.
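As a minimal sketch of what is meant here (my own illustration, not anything from Searle or the rule book; the names are invented), two symbol tokens can sit at separate memory locations while a pointer records a connection between them, and a program can follow that connection purely by indirection, never inspecting the tokens' shapes:

```c
#include <stdio.h>

/* Hypothetical illustration: a symbol token plus a pointer that acts
 * as a virtual connection to another memory location. */
struct node {
    const char *symbol;  /* the token stored at this location        */
    struct node *next;   /* a virtual connection to another location */
};

int main(void) {
    struct node second = { "token-B", NULL };     /* no outgoing link */
    struct node first  = { "token-A", &second };  /* linked to second */

    /* Each step of the walk is one indirection through `next`;
     * the program copies whatever it finds without identifying it. */
    for (struct node *p = &first; p != NULL; p = p->next)
        printf("%s\n", p->symbol);
    return 0;
}
```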

This structural, or relational, ability of the computer program can be mirrored in the Chinese room by adding to the room's ontology a new object type: string. Instances of string in the room can then connect tokenised Chinese symbols. Every piece of string has the same characteristics, including length. The strings are the embodiment of structure, the relational elements of structure.

In the room, suppose the connections established between symbols are a causal consequence of temporal contiguity at the sensory surface: contiguous sensory symbols exit the sensor and then enter the room. The connections between those sensory symbols then record, as internal structure, the external instances of temporal contiguity at the sensory surface. Is such an internal structure an element of semantic content?

In the computer, if the internal memory structures built with pointers are trees, a program can walk the trees and emit as output copies of the leaves (symbols), without reacting to (identifying) the shapes of the symbols. The program merely copies and emits whatever it arrives at that has no children. The program contains no conditionals indexed on symbol shape.
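A hedged sketch of the kind of tree walk described above (my own construction, with invented names, not a quotation of any actual rule book): the only test the program performs is "does this node have children?", never "what shape is the symbol?", and it emits a copy of every leaf it reaches.

```c
#include <stdio.h>

/* Hypothetical tree node: an opaque symbol and up to two children. */
struct node {
    const char *symbol;
    struct node *left, *right;
};

static void emit_leaves(const struct node *n) {
    if (n == NULL)
        return;
    if (n->left == NULL && n->right == NULL) {
        /* Leaf: copy the symbol to output without identifying it.
         * No conditional anywhere is indexed on symbol shape. */
        printf("%s\n", n->symbol);
        return;
    }
    emit_leaves(n->left);
    emit_leaves(n->right);
}

int main(void) {
    struct node leaf1 = { "symbol-1", NULL, NULL };
    struct node leaf2 = { "symbol-2", NULL, NULL };
    struct node root  = { "internal", &leaf1, &leaf2 };
    emit_leaves(&root);   /* prints symbol-1 then symbol-2 */
    return 0;
}
```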

Suppose Searle is blindfolded and then walks a tree by following the string with his hands. When he arrives at a leaf (a card inscribed with a Chinese ideogram, with no downward strings attached) he emits the card and then continues on his tactile tree walk. Since the rules he is following do not instruct a reaction to the shape of any Chinese symbol (and hence do not contain an example or description of any Chinese symbol shape), does this mean the program in the rule book is non-syntactic with respect to Chinese symbols, and that Searle manipulates the symbols non-syntactically?

In 2014, Searle says (his emphasis): "...a digital computer is a syntactical machine. It manipulates symbols and does nothing else" ("What Your Computer Can't Know", The New York Review of Books, October 9, 2014, section 2, para. 7). String is not symbols. Is his careful avoidance of structure his fundamental mistake?

Roddus

2 Answers


The only way that adding structure could make Searle’s Chinese Room Argument (CRA) semantic is if one could imagine Searle understanding Chinese by going through the programmatic process with this additional structure, whatever it is, included. Searle does not specify what a program might be asking him to do. It may be so advanced it is beyond our imagination today. It may be highly successful and convince everyone it understands Chinese. Even with all this, Searle claims, and I would agree, he would not understand Chinese after imitating the process. So, I conclude that “adding structure” does not help. Searle has already implicitly added it.

Consider the final question: “Is his [Searle’s] careful avoidance of structure his fundamental mistake?” I don’t think Searle is making any mistake with the CRA. However, he may be making a mistake with his physicalism, but that is independent of the CRA. An idealist or a traditional mind-body dualist could use the CRA to get the same two results Searle does in his “Minds, Brains, and Programs”, namely, that machines cannot understand and the machine and its programs do not explain our human ability to understand. There may be many ways to explain our ability to understand besides Searle's preferred “certain brain processes”, but AI programs are not one of them.

Frank Hubeny

I am aware that this response is slightly off topic; however, I hope it still helps.

I think viewing Searle's Chinese room as an "intuition pump", a concept introduced by Daniel Dennett, is a useful approach. On this view, thought experiments are devices that give us better or worse intuitions about a certain phenomenon. By slightly changing parts of the thought experiment in question, and analyzing whether the changed thought experiment sustains the same intuition, one sees whether it is a good intuition pump or not.

My conclusion is that the CRA depends strongly on its initial form to create the intended intuition. Adding new entities such as the "strings" you suggest shows the limited validity of the CRA for the analogous phenomena it tries to describe.

I disagree with your statement that there are:

no structural connections between the symbols in the CRA.

Ordering the symbols, guided by the rulebook, creates a structure that contains meaning for the receiver. The key point seems rather to be the person in the room's unawareness of, and lack of interest in, that structure. This creates the clear cut between syntax and semantics. This clear cut is also caused by the rulebook containing two languages, superimposed by someone other than the person in the room, who understands only one of them and merely shuffles expressions of the other around.

This lack of interest poses the question: given the temporal structure of the sensory input, does the person in the CRA have any desire to derive the semantic property? Seemingly not; he just does his work.

Note that in the part where you discuss software, you seem to distance yourself from what Searle seems to mean, since you are arguing about the structures used in the rulebook to transmit the desired semantic properties, not about the CRA itself.

To me it seems as if the CRA mainly focuses on the analogy of a single CPU core, so demanding interest from the mechanism flipping bits seems problematic.

Due to the intuition-pump view mentioned above, your approach seems both appropriate and inappropriate. Appropriate, since you restructure the initial CRA to make it give better intuitions for possibly more complex computers. However, the initial CRA still holds for simpler systems like ordinary calculators.

Others have chosen similar approaches, e.g. trying to identify the overall system as the relevant unit, placing more importance on the structure of the rulebook (the software). I myself tried this by reformulating the CRA to look more like a nerve cell and combining it with other modified CRAs to get a 3D brain-like structure.

My conclusion is that the CRA illustrates the wrong level of analysis for complex systems. Therefore I view your approach as inappropriate, since choosing the CRA as the model seems unnecessary for the general questions you seem to be asking, such as how semantics arises in a system, what exactly semantics is, and how complexity affects semantics, etc.

CaZaNOx