
March 8, 2016 AT 6:00 pm

Google’s Artificial Brain Is Pumping Out Trippy (And Pricey) Art #ArtTuesday

Memo Akten

Last month, members of Google’s research and virtual reality divisions organized an event called DeepDream, an art exhibition of work produced by “neural nets” – the artificial intelligence tools that loosely mimic the human brain. Via Wired

Today, inside big online services like Google and Facebook and Twitter, neural networks automatically identify photos, recognize commands spoken into smartphones, and translate conversations from one language to another. If you feed enough photos of your uncle to a neural net, it can learn to recognize your uncle. That’s how Facebook identifies faces in all those photos you upload. Now, with an art “generator” it calls DeepDream, Google has turned these neural nets inside out. They’re not recognizing images. They’re creating them.

Google calls this “Inceptionism,” a nod to the 2010 Leonardo DiCaprio movie Inception, which imagines a technology capable of inserting us into each other’s dreams. But that may not be the best analogy. What this tech is really doing is showing us the dreams of a machine.

To peer into the brain of DeepDream, you start by feeding it a photo or some other image. The neural net looks for familiar patterns in the image. It enhances those patterns. And then it repeats the process with the same image. “This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird,” Google said in a blog post when it first unveiled this project. “This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.”
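For readers curious what that feedback loop looks like in code, here is a minimal sketch (not Google’s original implementation) using PyTorch and a pretrained torchvision GoogLeNet: it picks an intermediate layer and repeatedly nudges the input photo so that whatever patterns the layer faintly detects get amplified. The layer choice (inception4c), learning rate, step count, and file names are illustrative assumptions.

```python
# Minimal DeepDream-style feedback loop (sketch, assuming PyTorch + torchvision).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained classifier stands in for the "neural net" whose patterns we amplify.
model = models.googlenet(pretrained=True).eval()

# Hypothetical choice: amplify activations of one intermediate Inception block.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(value=out))

preprocess = T.Compose([T.Resize(512), T.ToTensor()])
img = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

optimizer = torch.optim.Adam([img], lr=0.01)

for step in range(100):
    optimizer.zero_grad()
    model(img)
    # Maximize the layer's response: whatever the net faintly "sees" in the image
    # gets enhanced, so it responds even more strongly on the next pass.
    loss = -activations["value"].norm()
    loss.backward()
    optimizer.step()
    img.data.clamp_(0, 1)  # keep pixel values in a valid range

T.ToPILImage()(img.squeeze(0).detach()).save("dream.jpg")
```

Real DeepDream implementations add refinements this sketch omits, such as normalizing the image, jittering it between steps, and running the loop at several image scales (“octaves”) so details emerge at different sizes.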

The result is both fascinating and a little disturbing. If you feed a photo of yourself into the neural net and it finds something that kinda looks like a dog in the lines of your face, it turns that part of your face into a dog. “It’s almost like the neural net is hallucinating,” says Steven Hansen, who recently worked as an intern at Google’s DeepMind AI lab in London. “It sees dogs everywhere!” Or, if you feed the neural net an image of random noise, it may produce a tree or a tower or a whole city of towers. In that same noise, it might find the faint images of a pig and a snail, creating a rather frightening new creature by combining the two. Think: machines on LSD.
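In the sketch above, the random-noise case simply means starting the loop from static instead of a photo, for example replacing the loaded image with something like img = torch.rand(1, 3, 512, 512, requires_grad=True); the same feedback loop then pulls trees, towers, or animal-like shapes out of nothing but noise.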

Read more

Mike Tyka


Every Tuesday is Art Tuesday here at Adafruit! Today we celebrate artists and makers from around the world who are designing innovative and creative works using technology, science, electronics and more. You can start your own career as an artist today with Adafruit’s conductive paints, art-related electronics kits, LEDs, wearables, 3D printers and more! Make your most imaginative designs come to life with our helpful tutorials from the Adafruit Learning System. And don’t forget to check in every Art Tuesday for more artistic inspiration here on the Adafruit Blog!

Check out all the Circuit Playground Episodes, our new kids’ show, and subscribe!

Have an amazing project to share? Join the SHOW-AND-TELL every Wednesday night at 7:30pm ET on Google+ Hangouts.

Join us every Wednesday night at 8pm ET for Ask an Engineer!

Learn resistor values with Mho’s Resistance, or get the best electronics calculator for engineers, “Circuit Playground” – Adafruit’s Apps!



Get the only spam-free daily newsletter about wearables, running a "maker business", electronics tips and more! Subscribe at AdafruitDaily.com!


