Real-time Machine-Learning Painting
An image displayed on an LCD screen is updated in real time by tracking the viewer's/user's eye movements (using some sort of eye-tracking algorithm, probably with a pretty good camera). The image is seeded with a random distribution of colour, and the eye-tracking feedback tells the image-making algorithm which areas are of interest. It iterates on this feedback to build up the areas of interest, whether in amount or intensity or something else, and updates the image accordingly.
This could be extended so that the seed image is made up of images, and then rather than learning which combinations of colours are of interest, it could iterate based on keywords; DeepDream style?
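A minimal sketch of the feedback loop described above, assuming the eye tracker delivers a single (row, col) fixation estimate per frame. The gaze point, the Gaussian area-of-interest, and the amplification rate are all assumptions for illustration, not a specific tracker's API:

```python
import numpy as np

def update_image(image, gaze_rc, sigma=20.0, rate=0.05):
    """One feedback iteration: boost colour intensity around the gaze point.

    `gaze_rc` is a hypothetical (row, col) fixation estimate from an eye
    tracker; `sigma` sets the size of the "area of interest" in pixels.
    """
    h, w, _ = image.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # Gaussian weight centred on the fixation point
    weight = np.exp(-((rows - gaze_rc[0]) ** 2 +
                      (cols - gaze_rc[1]) ** 2) / (2 * sigma ** 2))
    # Amplify existing colour where the viewer is looking, clip to valid range
    image = image * (1.0 + rate * weight[..., None])
    return np.clip(image, 0.0, 1.0)

# Seed with a random colour distribution, then iterate with stand-in gaze data
rng = np.random.default_rng(0)
frame = rng.random((120, 160, 3))
for _ in range(50):
    gaze = (60, 80)  # placeholder for a real eye-tracker reading
    frame = update_image(frame, gaze)
```

After a few dozen iterations the region around the fixation point saturates while the rest of the canvas stays random, which is the "increase on the areas of interest" behaviour in its simplest form.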
Blind Oculus Rift
The same physical concept as an Oculus Rift, but instead of visual feedback the user gets vibrations/shocks/brightness?/haptic stuff. Maybe based on sensor feedback from the surrounding environment, or perhaps from an imagined environment which the user must navigate. This could develop into some sort of game.
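One way the sensor-to-haptics idea could work, sketched under the assumption that each direction has a distance sensor (real or simulated from the imagined environment) and that closer obstacles should produce stronger vibration. The function name, readings, and 3 m range are all made up for illustration:

```python
def proximity_to_haptics(distance_m, max_range_m=3.0):
    """Map a distance reading (metres) to a vibration intensity in [0, 1].

    Hypothetical mapping: an obstacle touching the user gives full
    intensity, anything at or beyond max_range_m gives none.
    """
    d = max(0.0, min(distance_m, max_range_m))
    return 1.0 - d / max_range_m

# Example: obstacles sensed at several distances around the user
readings = {"front": 0.5, "left": 2.0, "right": 3.5}
intensities = {k: proximity_to_haptics(v) for k, v in readings.items()}
```

Each intensity would then drive the corresponding vibration motor (or shock/brightness channel), so navigating means steering toward the quietest direction.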
Josh