A few weeks ago I was giving a talk about art and surveillance to a group of engineers. Afterwards they told me about their work developing new image identification algorithms. I've been thinking a lot recently about what it means to have created machines that are able to look back at us, and spurred on by the conversation, I decided to try an experiment. I have thousands of images (collected as part of my ongoing Backdoored project) that were taken by robots with no human intervention. What might the bots actually be able to see or understand in the images they have created?

I took about 50 images from my collection and manually fed them through an experimental image recognition engine. I wasn't necessarily expecting anything surprising; after all, many of us have seen image recognition starting to appear in various social media platforms. But I was fascinated by the results. What came through strongly were the biases – the things the machines saw and recognised, and the things they ignored. The mistakes were equally revealing – in one innocuous scene of a dog sitting on a pavement, the machine was convinced it saw only military equipment.

It seemed that the machines were echoing back the way they had been trained to see the world. Of course, their embryonic visual landscape is bound to be coloured by the attitudes and obsessions of their teachers – the military and homeland security, with their paranoid focus on threat identification, and the worldviews of the research engineers.

In the coming years and centuries this new visual landscape will form part of the conceptual heritage of machines way smarter than our current ones. What traces of this early vision of the world will endure in the DNA of future AIs?

You can see my initial experiment at http://www.backdoored.io/category/dreamers/