Posts

Are bigger bits of brains better?

We scoff at the folly of phrenology – the simplistic idea that the size and shape of bumps on the skull could tell you something about a person’s character and psychological attributes. It was all the rage in the Victorian era (the early to mid-1800s) in the UK and the US especially, with practitioners armed with calipers claiming to measure all kinds of personal propensities, from Acquisitiveness and Combativeness to Benevolence and Wonder. The skull bumps were just a proxy, of course – the idea was that they reflected the size and shape of the underlying brain regions, which were what was really associated with various traits. It all seems a bit quaint and simplistic now (apart from the entrenched association with racism), but while we may like to think we have moved on, a lot of modern human neuroscience is founded on the same premises.
- The first premise is that different mental functions or psychological traits can be localised to specific regions of the brain.
- The second is t…

Escaping Flatland - when determinism falls, it takes reductionism with it

For the reductionist, reality is flat. It may seem to comprise things in some kind of hierarchy of levels – atoms, molecules, cells, organs, organisms, populations, societies, economies, nations, worlds – but actually everything that happens at all those levels really derives from the interactions at the bottom. If you could calculate the outcome of all the low-level interactions in any system, you could predict its behaviour perfectly and there would be nothing left to explain. It’s turtles all the way down.
Reductionism is related to determinism, though not in a straightforward way. There are different types of determinism, which are intertwined with reductionism to varying degrees.
The reductive version of determinism claims that everything derives from the lowest level AND that those interactions are completely deterministic, with no randomness. There are things that seem random to us, but that is only a statement about our ignorance, not about the events themselves. The randomness in t…

How much innate knowledge can the genome encode?

In a recent debate between Gary Marcus and Yoshua Bengio about the future of Artificial Intelligence, the question came up of how much information the genome can encode. This relates to the idea of how much innate or prior “knowledge” human beings are really born with, versus what we learn through experience. This is a hot topic in AI these days as people debate how much prior knowledge needs to be pre-wired into AI systems, in order to get them to achieve something more akin to natural intelligence. 

Bengio (like Yann LeCun) argues for putting as little prior knowledge into the system as we can get away with – mainly in the form of meta-learning rules, rather than specific details about specific things in the environment – such that the system that emerges through deep learning from the data supplied to it will be maximally capable of generalisation. (In his view, more detailed priors give a more specialised but also more limited and possibly more biased machine.) Marcus argues for more…
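
To give a sense of the scale at issue in that debate – this is not from the post itself, just a rough sketch using commonly cited round numbers – one can compare the genome's raw information capacity with the brain connectivity it would have to specify if wiring were spelled out explicitly:

```python
# Back-of-envelope sketch (assumed, commonly cited approximate figures):
# raw information capacity of the human genome versus the number of
# synaptic connections it would need to specify one by one.

GENOME_BASE_PAIRS = 3.1e9   # approximate length of the human genome
BITS_PER_BASE = 2           # 4 possible bases -> log2(4) = 2 bits each

genome_bits = GENOME_BASE_PAIRS * BITS_PER_BASE
genome_megabytes = genome_bits / 8 / 1e6
print(f"Genome raw capacity: ~{genome_bits:.2e} bits (~{genome_megabytes:.0f} MB)")

NEURONS = 8.6e10            # ~86 billion neurons (approximate)
SYNAPSES = 1e14             # ~100 trillion synapses (order of magnitude)

# Even one bit per synapse would exceed the genome's capacity by orders of
# magnitude - the usual argument that the genome cannot encode an explicit
# wiring diagram, only rules or priors for building one.
print(f"Bits needed at 1 bit per synapse: ~{SYNAPSES:.0e}")
print(f"Synapses per available genome bit: ~{SYNAPSES / genome_bits:.0f}")
```

The only point of the arithmetic is the order of magnitude: whatever innate knowledge the genome encodes, it must be encoded compressively, which is why the debate turns on rules and priors rather than explicit detail.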