What with my interest in artificial intelligence, I wrote a Hopfield network implementation early this morning, and tested it.
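For reference, here's a minimal sketch of the kind of network I mean. This is not my actual module; the class and method names are mine, and it uses the standard Hebbian rule with synchronous sign updates:

```python
import numpy as np

class Hopfield:
    """Minimal Hopfield network. Patterns are 1-D arrays of +1/-1
    values (e.g. a flattened monochrome image)."""

    def __init__(self, n):
        self.n = n
        self.w = np.zeros((n, n))

    def train(self, patterns):
        # Hebbian rule: w_ij = sum over patterns of x_i * x_j,
        # with self-connections zeroed out.
        for p in patterns:
            p = np.asarray(p, dtype=float)
            self.w += np.outer(p, p)
        np.fill_diagonal(self.w, 0)

    def recall(self, pattern, steps=10):
        # Synchronous updates until the state stops changing.
        s = np.asarray(pattern, dtype=float)
        for _ in range(steps):
            new = np.sign(self.w @ s)
            new[new == 0] = 1   # break ties toward +1
            if np.array_equal(new, s):
                break
            s = new
        return s
```

Keeping everything in NumPy matrix operations like this, rather than looping over units in pure Python, is probably the first thing to try before rewriting anything in C.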
Well, the first thing I learned is that I'll have to rewrite my Python module in C if I want to use it on any substantial data, because it's slow as a pig for anything but small images. That said, I started out with one image:
When I asked the Hopfield network for the same, it gave me:
...which, when negated, is the same as the original. I remember reading that Hopfield networks have a tendency to recall the inverse of a stored pattern: the weights depend only on products of pairs of units, so if a pattern is a stable state, its negation is too. So far so good.
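That symmetry is easy to check directly. A quick sketch (toy pattern and Hebbian weights of my own invention, not my actual image data): negating the state negates the field `w @ s`, so one update step maps `-p` back to `-p` exactly when it maps `p` back to `p`.

```python
import numpy as np

p = np.array([1, -1, 1, 1, -1, 1, -1, -1], dtype=float)
w = np.outer(p, p)          # Hebbian weights for the single pattern p
np.fill_diagonal(w, 0)      # no self-connections

step = lambda s: np.sign(w @ s)   # one synchronous update

print(np.array_equal(step(p), p))    # True: p is a fixed point
print(np.array_equal(step(-p), -p))  # True: so is its negation
```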
The next thing I tried was recalling one of several stored images, which proved more difficult. After making a poor selection of Elder Futhark runes, I attempted to recall Ansuz and got this:
Messed up. It looks like Thurisaz blended with something else. My selection of runes was too self-similar, so I tried again, consciously picking a distinct set: Raidho, Algiz, Sowilo, Tiwaz, Dagaz and Othala. This time I tried to recall Tiwaz.
Closer, but it looks like Othala corrupted the image. After removing Othala:
"I know a twelfth ..."
I tried Sowilo, but Tiwaz jinxed that one too, and after a while I figured out that my network could only remember Raidho and Tiwaz together reliably. Here's Raidho:
Well, that experiment kind of sucked, but it taught me that the stored patterns, especially monochrome images, shouldn't look too similar. I'm going to try Chinese characters next time, and introduce distortion if that experiment turns out to be successful. Stay tuned!
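The "too similar" problem can actually be measured before training: for ±1 patterns, the normalized dot product (overlap) between each pair predicts how badly they will interfere during recall. A sketch with toy stand-ins for the rune bitmaps (random vectors, not my actual images; I deliberately make one pair similar):

```python
import numpy as np

# Toy stand-ins for rune bitmaps: random +/-1 vectors, with pattern 1
# built as a near-copy of pattern 0 to show a large overlap.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(4, 64))
patterns[1] = patterns[0].copy()
patterns[1][:8] *= -1   # pattern 1 differs from pattern 0 in 8 of 64 bits

n = patterns.shape[1]
for i in range(len(patterns)):
    for j in range(i + 1, len(patterns)):
        overlap = patterns[i] @ patterns[j] / n
        # the constructed pair gives overlap(0,1) = +0.75;
        # unrelated random pairs land near 0
        print(f"overlap({i},{j}) = {overlap:+.2f}")
```

As I understand it, a Hopfield network with Hebbian weights can store only on the order of 0.138N random patterns, and correlated patterns eat into that capacity even faster, which would explain why six visually similar runes were too much for mine.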