jamest wrote:
How does any of this actually address the problems of assumption and semantics?
The easy problem of consciousness regards trying to explain how different mechanisms within the brain might fulfill specific functions, such as memory and attention, but even Chalmers said that if this were achievable the 'hard problem' would still remain. Anyway, you disregard that part of my reasoning where I discuss how brain 'processing' leads to my conclusions. That is, the easy problem isn't even relevant to addressing my posts. As far as I can tell, you're just name dropping and resorting to utilising red herrings in an attempt to distract from the actual focus of this discussion. Certainly, you haven't addressed the aforementioned problems in the slightest.
So your theory this morning is that I'm purposefully name dropping and throwing red fish to distract you from the argument? Stop it, james, or we are going to get thrown out.
Semantics is the easy problem. It has nothing to do with the hard problem. But that doesn't matter.
The first problem we have is that you are fuzzy about the brain as a state machine. The brain is not actually a finite state machine (FSM), but some of the things you suggest the physical brain can't do could be done even by an FSM.
Starting with the idea of an FSM, and then discussing how the brain is more complex than one, is a good route to clarity.
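To make the FSM starting point concrete, here is a minimal sketch (the traffic-light machine is a stock textbook example, not a brain model, and the transition table is entirely hypothetical). The point is that even in this trivial system, how the machine responds to an external event is fixed by its wiring, with no separate "interpretation" step:

```python
# Minimal finite state machine sketch. What each state "means" is fixed
# by the transition structure itself, not by any inner interpreter.

class TrafficLightFSM:
    # States, and the events that move between them, are hard-wired here.
    TRANSITIONS = {
        ("red", "timer"): "green",
        ("green", "timer"): "yellow",
        ("yellow", "timer"): "red",
    }

    def __init__(self, state="red"):
        self.state = state

    def step(self, event):
        # The response to an external event is fully determined by the
        # pair (current state, event); unknown events leave the state alone.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = TrafficLightFSM()
print(fsm.step("timer"))  # green
print(fsm.step("timer"))  # yellow
```

The brain is vastly richer than this, of course, but the sketch shows the baseline: structured, lawful responses to external input require no homunculus reading off the states.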
The second problem is with semantics.
jamest wrote:The issue is how that entity could know that its internal states were caused by something external to itself.
In the brain, as in an FSM, the meaning is built into the structure of the system. You know that you are seeing something because the neurons that respond to seeing are physically hooked up to your eyes, and have been since your brain formed in the womb.
You are pitching it a little higher, talking about the Association Cortices (ACs) of the brain, where multi-modal sensory input is combined. It still doesn't matter: as the connections in your brain developed, and the ACs were further etched by experience, where those connections came from is what constructed the meaning.
Examples of where this fails are found in synaesthetes, who have some overlap between two different association channels. The interesting thing Ramachandran found is that the confusion always occurred between two areas that turn out to be physically next to each other. This is strong evidence that physical wiring is where meaning comes from: meaning such as 'is it a number or a color'.
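The "meaning from wiring" claim can be sketched in a few lines (a deliberately crude toy, not a model of cortex; the channel and area names are made up). The same raw signal counts as a number or a color purely because of which channel delivers it, and a crossed connection, as in synaesthesia, re-routes one channel's signal into the neighbouring area, where it is then handled under that area's meaning, with no change to the signal itself:

```python
# Toy illustration: meaning is carried by the wiring, not the signal.

def number_area(signal):
    # Whatever arrives here is treated as a number, simply because it arrived here.
    return f"number: {signal}"

def color_area(signal):
    # Whatever arrives here is treated as a color, for the same reason.
    return f"color: {signal}"

# Normal wiring: channel identity determines which area gets the signal.
normal_wiring = {"grapheme_channel": number_area, "hue_channel": color_area}

# "Synaesthetic" wiring: grapheme input leaks into the adjacent color area.
crossed_wiring = {"grapheme_channel": color_area, "hue_channel": color_area}

def perceive(wiring, channel, signal):
    return wiring[channel](signal)

print(perceive(normal_wiring, "grapheme_channel", "7"))   # number: 7
print(perceive(crossed_wiring, "grapheme_channel", "7"))  # color: 7
```

Note that nothing about the string "7" changed between the two runs; only the routing did, which is the analogue of Ramachandran's adjacency finding.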
A third issue is your complaints about the model not being capable of things like context and intent.
jamest wrote:So, now it comes to the crunch: can you explain all human behaviour in terms of that behaviour being nought other than automatic responses to NNs?
That is going to require a great deal more discussion of structure, and we should try to separate the issues into easy stuff and difficult stuff.