Tags: birds
Global animation project coordinated by Universal Everything.
Animation Sequences:
01, Block: Universal Everything
02, Burst: Patch d. Keyes – http://patchdkeyes.co.uk/
03, Fill: Drew Tyndell – http://drewtyndell.com/
04, Chase: Tymote – http://tymote.jp/
05, Multiply: Nicolas Ménard – http://nicolasmenard.com/
06, Climb: Parallel Teeth – http://parallelteeth.com/
07, Rise: KClogg – http://kclogg.tumblr.com/
08, Twist: Matt Scharenbroich – http://mattscharenbroich.com/
09, Wind: Váscolo – http://vascolo.com.ar/
10, Swarm: Universal Everything
11, Calm: Zutto – http://zuttoworld.com/
12, Float: Cindy Suen – http://cindysuen.tumblr.com/
13, Power: Masanobu Hiraoka – http://vimeo.com/user6065152
14, Slide: Ori Toor – http://oritoor.com/
15, Spin: Ori Toor
16, Wave: Váscolo
17, Construct: DXMIQ – http://dxmiq.com/
18, Growth: DXMIQ
19, Flow: Ruff Mercy – http://ruffmercy.com/
20, Scatter: Caleb Wood – http://vimeo.com/calebwood
21, Shrink: Bee Grandinetti – http://behance.net/grandinetti
22, Noise: Takcom – http://takafumitsuchiya.com/
23, Attack: MixCode – http://mixcode.tv
24, Boom: nöbl – http://nobl.tv
25, Bounce: Matt Abbiss – http://abbiss.co/
26, Ricochet: Matt Frodsham – http://mattfrodsham.com/
27, Splash: Guille Comin – http://guillermocomin.com/
28, Bang: Universal Everything
29, Embrace: Matt Frodsham
30, Melt: Caleb Wood
31, Stripe: Matt Abbiss
A visualisation of what’s happening inside the mind of an artificial neural network.
In non-technical speak:
An artificial neural network can be thought of as analogous to a brain (immensely, immensely, immensely simplified; really nothing like a brain). It consists of layers of neurons and connections between those neurons. Information is stored in the network as ‘weights’ (strengths) of the connections between neurons. Low layers (i.e. closer to the input, e.g. the ‘eyes’) store (and recognise) low-level abstract features (corners, edges, orientations etc.), and higher layers store (and recognise) higher-level features. This is analogous to how information is stored in the mammalian cerebral cortex (e.g. our own brain).
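To make that structure concrete, here’s a minimal, purely illustrative sketch in Python/NumPy; the layer sizes are arbitrary and the weights are random stand-ins rather than anything learned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layers of 'neurons': the input layer first, then progressively higher layers.
layer_sizes = [64, 32, 16, 8]

# weights[i] holds the connection strengths between layer i and layer i+1;
# this is where the network's 'knowledge' lives.
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input through every layer, keeping each layer's activations."""
    activations = [x]
    for w in weights:
        x = np.maximum(0.0, x @ w)  # ReLU: a simple 'does this neuron fire?' rule
        activations.append(x)
    return activations

acts = forward(rng.standard_normal(64))  # a stand-in 'image' as a flat vector
for i, a in enumerate(acts):
    print(f"layer {i}: {len(a)} neurons")
```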
Here a neural network has been ‘trained’ on millions of images – i.e. the images have been fed into the network, and the network has ‘learnt’ about them, establishing weights / strengths for the connections between its neurons.
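As a toy picture of what ‘training’ means, here’s a hypothetical one-weight-matrix example in the same NumPy style: the weights get nudged, step by step, until the network gives the desired answer for an image it has seen.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 8)) * 0.1   # one weight matrix, for brevity

def train_step(w, x, target, lr=0.01):
    """Nudge w so the output x @ w moves a little closer to target."""
    error = x @ w - target
    grad = np.outer(x, error)            # gradient of the squared-error loss w.r.t. w
    return w - lr * grad                 # small step 'downhill'

x = rng.standard_normal(64)              # stand-in 'image'
target = np.zeros(8)
target[3] = 1.0                          # stand-in label: 'this is class 3'

for _ in range(100):
    w = train_step(w, x, target)

print(np.argmax(x @ w))                  # prints 3: the weights now 'know' this image
```

Real training repeats this across millions of images, so the weights end up encoding regularities shared by all of them rather than any single example.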
Then, when the network is fed a new, unknown image (e.g. me), it tries to make sense of (i.e. recognise) this new image in the context of what it already knows, i.e. what it’s already been trained on.
This can be thought of as asking the network “Based on what you’ve seen / what you know, what do you think this is?”, and is analogous to you recognising objects in clouds or in ink blots / Rorschach tests etc.
The effect is further exaggerated by encouraging the algorithm to generate an image of what it ‘thinks’ it is seeing, and feeding that image back into the input. Then it’s asked to reevaluate, creating a positive feedback loop, reinforcing the biased misinterpretation.
This is like asking you to draw what you think you see in the clouds, then asking you to look at your drawing and draw what you think you see in that drawing, and so on.
That last sentence isn’t actually fully accurate. It would be accurate if, instead of asking you to draw what you think you saw in the clouds, we scanned your brain, looked at a particular group of neurons, reconstructed an image from the firing patterns of those neurons – the in-between representational states in your brain – and gave *that* image to you to look at. Then you would try to make sense of (i.e. recognise) *that* image, and the whole process would be repeated.
We aren’t actually asking the system what it thinks the image is; we’re extracting the image from somewhere inside the network, from any one of its layers. Since different layers store different levels of abstraction and detail, picking different layers to generate the ‘internal picture’ highlights different features.
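The research credited below describes exactly this ‘Inceptionism’ / DeepDream loop. Purely as an illustrative sketch (not the authors’ actual code; the layer name, step size and iteration count here are arbitrary assumptions), it might look like this in PyTorch:

```python
import torch
import torchvision.models as models

# The feedback loop: pick a layer, ask which image would make that layer's
# neurons fire harder, nudge the image that way, and repeat.
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()

captured = {}
model.inception4c.register_forward_hook(                  # 'inception4c' is an
    lambda module, inp, out: captured.update(layer=out))  # arbitrary mid-level layer

# A stand-in random input; a real run would start from a photo,
# normalised the way the network expects.
img = torch.rand(1, 3, 224, 224, requires_grad=True)

for _ in range(20):
    model(img)                              # forward pass fills `captured`
    loss = captured["layer"].norm()         # how strongly is this layer firing?
    loss.backward()                         # which pixels would make it fire more?
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)  # amplify them
        img.grad.zero_()
        img.clamp_(0.0, 1.0)                # keep pixel values valid
```

Swapping inception4c for a lower layer biases the output toward edges and textures, while a higher layer pushes it toward object-like shapes – which is exactly the layer-abstraction point above.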
All based on the Google research by Alexander Mordvintsev (Software Engineer), Christopher Olah (Software Engineering Intern) and Mike Tyka (Software Engineer).
Andy Baker animated Hattie’s illustrations.
A fine data-mosh trip: ‘Metal Fence with Dry Leaves’ from Spanish video artist Mateo Amaral. His statement translates as: “Jungle wind, ocean waves extending as far as they can reach the inner senses, and much more too. A network is stirred by sounds of leaves; the network separates things. DMT circulating in the brain.”
Following up on last week’s post featuring another recent pairing of Andrew Huang and Björk, here’s a recently released video from the MoMA show. “I’m very proud of ‘Black Lake’,” says Los Angeles fashion and music filmmaker Andrew Thomas Huang, who shot the video in Iceland’s highlands. An article on the show appears in Dazed, stating, “Björk wrote the song while sat in a ravine. It is our first chapter in creating the character for Björk’s epic soul journey about loss, healing and the promise of solutions.” You can read the full article here:
http://www.dazeddigital.com/music/article/25005/1/bjork-black-lake
Cinematographer/Editor: Darren Pearson
Sound editor: Ryan Gerle
Sound re-recording mixer: Brennan Gerle
Music: Dead Horse Beats “Clouds” – Single People
http://www.flong.com/projects/augmented-hand-series/
The hand is a critical interface to the world, allowing the use of tools, the intimate sense of touch, and a vast range of communicative gestures. Yet we frequently take our hands for granted, thinking with them, or through them, but hardly ever about them. Our investigation takes a position of exploration and wonder. Can real-time alterations of the hand’s appearance bring about a new perception of the body as a plastic, variable, unstable medium? Can such an interaction instill feelings of defamiliarization, prompt a heightened awareness of our own bodies, or incite a reexamination of our physical identities? Can we provoke simple wonder about the fact that we have any control at all over such a complex structure as the hand?