
26:47
Does this mean that artificial neural network models used in machine learning, once parametrized to function as feedforward nets, are philosophical zombies?

31:48
Thanks so much! 😄

50:26
I guess similar experiments could be done with auditory illusions. Have you tried any?

50:56
The argument lacks calibration. How much additional phi is required to get an identifiably different experience, and does this effect yield that much additional phi?

01:00:36
@José, yes, this could be done for auditory illusions and likely many other kinds of illusions as well! We just focused on visual filling-in here to demonstrate the idea/argument.

01:01:55
Great talk! I have a 1pm to prep for. Thank you.

01:02:47
There is more brain feedback in one case, even though the experience is identical.

01:03:54
Thank you all so much for the fantastic presentation, organization, and discussion! ^^

01:04:33
@Matthew, whether or not the difference in phi generated by these pairs is ‘enough’ such that it is not ‘excluded’ from the whole-brain correlate of consciousness is an open question! It would require careful phi calculations and comparative modeling.
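
For anyone curious what such a phi calculation looks like in practice, below is a minimal sketch using the open-source PyPhi package (Mayner et al., 2018). The three-node network, its transition probability matrix, and its state are the toy example from the PyPhi documentation, not a model of the filling-in experiment discussed in the talk, so the resulting value is purely illustrative.

```python
# Minimal sketch of a phi calculation with PyPhi (Mayner et al., 2018).
# The 3-node network below is the toy example from the PyPhi docs,
# NOT a model of the filling-in experiment; the output is illustrative only.
import numpy as np
import pyphi

# State-by-node transition probability matrix (rows are current states in
# little-endian order; columns give each node's probability of being ON next).
tpm = np.array([
    [0, 0, 0],
    [0, 0, 1],
    [1, 0, 1],
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [1, 1, 1],
    [1, 1, 0],
])

# Connectivity matrix: cm[i][j] = 1 means node i sends input to node j.
# This network is recurrent (it contains feedback loops).
cm = np.array([
    [0, 0, 1],
    [1, 0, 1],
    [1, 1, 0],
])

network = pyphi.Network(tpm, cm=cm, node_labels=('A', 'B', 'C'))
state = (1, 0, 0)                                   # current state of A, B, C
subsystem = pyphi.Subsystem(network, state, (0, 1, 2))

# Big Phi of the whole subsystem; nonzero only for reentrant systems.
print(pyphi.compute.phi(subsystem))
```

Swapping in a purely feedforward connectivity matrix and a matching TPM (e.g., a chain A → B → C with no feedback) should yield phi = 0 under IIT, which is the formal version of the feedforward-zombie worry raised at 26:47. The hard part for the pairs discussed in the talk is building TPMs faithful enough to the neural data for the comparison to be meaningful.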

01:06:20
@Amber Hopkins Thanks

01:06:44
Thank you, this was very interesting!