You can see what this looks like on a webpage the researchers put up on Tuesday. If you need a description, imagine someone filming a game of Doom off of a CRT computer monitor with a glob of Vaseline smeared on the lens.
The machine learning setup for this task had three components. First, a model that produces a compressed version of the game environment from a snapshot of a single frame (think of a low-bitrate MP3 or a deep-fried JPEG). Second, another model that takes that compressed representation and outputs a probability distribution over what the next frame might look like. These two models, taken together, make up the virtual agent’s abstract view of the “world.” Finally, there’s a controller model that has access to the game’s reward function and uses the previous models’ predictions to make choices about what to do next in the game.
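To make the three-part loop concrete, here is a minimal sketch of how the pieces fit together. Everything here is a stand-in for illustration: the dimensions, the random linear maps, and the function names (`encode`, `predict_next`, `act`) are all assumptions, not the researchers’ actual architecture, which uses a trained variational autoencoder, a mixture-density recurrent network, and an evolved controller.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for this sketch.
FRAME_DIM, LATENT_DIM, ACTION_DIM = 64, 8, 3

# Component 1: compress a raw frame into a small latent code.
# A fixed random projection stands in for a trained encoder.
W_enc = rng.normal(size=(LATENT_DIM, FRAME_DIM)) / np.sqrt(FRAME_DIM)

def encode(frame):
    return np.tanh(W_enc @ frame)

# Component 2: predict a distribution over the next latent code,
# given the current code and the last action. The real system uses
# a mixture-density RNN; this toy returns a single Gaussian.
W_mem = rng.normal(size=(LATENT_DIM, LATENT_DIM + ACTION_DIM)) / 4.0

def predict_next(z, action):
    mean = np.tanh(W_mem @ np.concatenate([z, action]))
    std = np.full(LATENT_DIM, 0.1)  # fixed uncertainty for the sketch
    return mean, std

# Component 3: the controller maps what the agent sees (z) and what it
# expects next (the predicted mean) to an action.
W_ctrl = rng.normal(size=(ACTION_DIM, 2 * LATENT_DIM)) / 4.0

def act(z, predicted_mean):
    return np.tanh(W_ctrl @ np.concatenate([z, predicted_mean]))

# One perceive -> predict -> act step of the loop.
frame = rng.normal(size=FRAME_DIM)       # pretend game frame
z = encode(frame)                        # compressed view
mean, std = predict_next(z, np.zeros(ACTION_DIM))
next_action = act(z, mean)
print(next_action.shape)                 # (3,) -- one value per control
```

In training, the controller’s weights would be tuned to maximize the game’s reward; here they stay random, since the point is only the flow of data from frame to latent code to prediction to action.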