Understanding understanding – could an A.I. cook meth?
![Image](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjs69g_EKlUJUwiSDBH_le39vIJhxCkQkK7jWA0OwbEycQM_9FjfNi-lzPoB1aKDrsFYpHYyosvacQWvRhtIDw8gXHBlV7OPpkG6k_9pOh7DhE-c_P2sYVS75XugYjC_IiS2ub8VWuMtOP/s400/Screen+Shot+2019-02-22+at+5.06.28+PM.png)
What would it take to say that an artificial system “understands” something? What do we mean when we say humans understand something? I asked those questions on Twitter recently, and they prompted some very interesting debate, which I will try to summarise and expand on here. Several people complained that the questions were unanswerable until I had defined “understanding”, but that was exactly the problem – I didn’t have a good understanding of what understanding means. That’s what I was trying to unpick.

I know, of course, that there is a rich philosophical literature on this question, but the bits of it I’ve read were not quite getting at what I was after. I was trying to get to a cognitive or computational framework defining the parameters that constitute understanding in a human, such that we could operationalise it to the point where we could implement it in an artificial intelligence. So, rather than starting with a definition, let me start with an illustration.