I'm taking a different view on things. "Emergence", for example, shows how many small, discrete, often very simple actions interact to eventually form a massively complex system that no single general rule can describe. It is easy to describe the behaviour of a single agent, but impossible to fully understand the actions and consequences of the system as a whole. I reckon we may not need to understand the entire system: we can start from the bottom by replicating certain behaviours, study individual clusters at a level of detail that is still fathomable, then attempt to replicate those clusters and move upwards in the chain.
Oh well, to avoid ranting any further, on to the whitepaper:
This whitepaper draws a comparison between Restricted Boltzmann Machines and human consciousness using a quantitative analysis of the capacity for integrating information. It discusses the probability that computers can become somewhat conscious of their inputs. Consciousness in computers implies the capacity to interpret data, understand it, manipulate it, and possibly produce new data based on previous examples. Happy reading!
The same neurons activated by observation are also activated when dreaming or imagining. Restricted Boltzmann Machines work in a similar way; [....] This makes it plausible to construct computers that have some kind of imagination, [....] the type of consciousness isn't necessarily equal to our own [...]
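The "imagination" analogy in that excerpt maps onto something concrete: an RBM can generate data by running Gibbs sampling from noise, a procedure sometimes informally called "daydreaming". Here is a minimal sketch, assuming an untrained toy model with made-up dimensions; the class and method names (`RBM`, `daydream`) are my own illustrative choices, not from the whitepaper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Restricted Boltzmann Machine: a bipartite network of
    binary visible and hidden units with no intra-layer connections."""

    def __init__(self, n_visible, n_hidden):
        # Small random weights; in practice these would be learned
        # (e.g. via contrastive divergence) from training data.
        self.W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases

    def sample_hidden(self, v):
        # Hidden units are conditionally independent given the visibles.
        p = sigmoid(v @ self.W + self.b_h)
        return (rng.random(p.shape) < p).astype(float)

    def sample_visible(self, h):
        # And vice versa: visibles are independent given the hiddens.
        p = sigmoid(h @ self.W.T + self.b_v)
        return (rng.random(p.shape) < p).astype(float)

    def daydream(self, steps=100):
        """'Imagining': start from random noise and alternate Gibbs
        sampling steps; the chain drifts toward configurations the
        model assigns high probability."""
        v = (rng.random(self.W.shape[0]) < 0.5).astype(float)
        for _ in range(steps):
            h = self.sample_hidden(v)
            v = self.sample_visible(h)
        return v

rbm = RBM(n_visible=6, n_hidden=3)
dream = rbm.daydream(steps=50)
print(dream)  # a binary "fantasy" vector drawn from the model's distribution
```

With a trained model, the same loop would produce samples resembling the training data rather than noise, which is the sense in which the machine "imagines" inputs it has never seen.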