Whether you have a closet that would make Cher from Clueless green with envy or you stick to the Steve Jobs uniform, you’ve likely struggled to look polished at some point. In a recent post, Facebook AI announced Fashion++, a machine learning model that suggests small changes to make an outfit more fashionable (like removing the green sweater in the photo above).
Whether it’s removing an accessory, choosing a blouse with a higher neckline, tucking in a shirt, or swapping to pants a shade darker, small adjustments can often make an existing outfit noticeably more stylish.
The model “consists of a deep image generation neural network that learns to synthesize clothing.” Fashion++ ingests a picture, which is split into shape and texture features. Splitting the image into these two components allows for edits to fit, presentation, color, pattern, and material. Once the model makes enhancements to the shape and texture separately, the texture is mapped back onto the shape to produce the final, more fashionable photo.
As the authors describe it: “We first obtain latent features from texture and shape encoders E_t and E_s. Our editing module F++ operates on the latent texture feature t and shape feature s. After an edit, the shape generator G_s first decodes the updated shape feature s++ back to a 2D segmentation mask m++, and then we use it to region-wise broadcast the updated texture feature t++ into a 2D feature map u++. This feature map and the updated segmentation mask are passed to the texture generator G_t to generate the final updated outfit x++.”
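To make that data flow concrete, here is a minimal, hypothetical PyTorch sketch of how the pieces could fit together. Every module name, layer size, and the toy editing step (which simply perturbs the latents; the actual Fashion++ editing module nudges them toward what a trained fashionability model prefers) is an illustrative assumption, not the authors’ implementation.

```python
# Hypothetical sketch of the Fashion++ pipeline described above, using toy
# PyTorch modules. Names and sizes are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_REGIONS = 8    # assumed number of garment regions in the segmentation
TEX_DIM = 16     # assumed per-region texture feature size
SHAPE_DIM = 32   # assumed latent shape feature size

class TextureEncoder(nn.Module):   # E_t: image -> per-region texture codes t
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, TEX_DIM, 3, padding=1)
    def forward(self, x, mask):
        feat = self.conv(x)                                          # (B, C, H, W)
        region = F.one_hot(mask, N_REGIONS).permute(0, 3, 1, 2).float()
        t = torch.einsum("bchw,brhw->brc", feat, region)             # pool per region
        return t / (region.sum(dim=(2, 3)).unsqueeze(-1) + 1e-6)

class ShapeEncoder(nn.Module):     # E_s: segmentation mask -> latent shape code s
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(N_REGIONS * 64 * 64, SHAPE_DIM))
    def forward(self, mask):
        onehot = F.one_hot(mask, N_REGIONS).permute(0, 3, 1, 2).float()
        return self.net(onehot)

class Editor(nn.Module):           # F++: placeholder edit; the real module is learned
    def forward(self, t, s):
        return t + 0.1 * torch.randn_like(t), s + 0.1 * torch.randn_like(s)

class ShapeGenerator(nn.Module):   # G_s: updated shape code s++ -> segmentation m++
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(SHAPE_DIM, N_REGIONS * 64 * 64)
    def forward(self, s):
        logits = self.net(s).view(-1, N_REGIONS, 64, 64)
        return logits.argmax(dim=1)                                  # (B, H, W) labels

class TextureGenerator(nn.Module): # G_t: (u++, m++) -> final updated outfit x++
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(TEX_DIM + N_REGIONS, 3, 3, padding=1)
    def forward(self, u, mask):
        onehot = F.one_hot(mask, N_REGIONS).permute(0, 3, 1, 2).float()
        return torch.sigmoid(self.conv(torch.cat([u, onehot], dim=1)))

def broadcast(t, mask):
    """Region-wise broadcast: paint each region's texture code over its pixels."""
    onehot = F.one_hot(mask, N_REGIONS).permute(0, 3, 1, 2).float()  # (B, R, H, W)
    return torch.einsum("brc,brhw->bchw", t, onehot)

# Toy forward pass on a random 64x64 "outfit photo" and segmentation mask.
x = torch.rand(1, 3, 64, 64)
m = torch.randint(0, N_REGIONS, (1, 64, 64))
E_t, E_s, editor, G_s, G_t = TextureEncoder(), ShapeEncoder(), Editor(), ShapeGenerator(), TextureGenerator()

t, s = E_t(x, m), E_s(m)     # encode texture and shape separately
t_pp, s_pp = editor(t, s)    # edit both latents
m_pp = G_s(s_pp)             # decode updated shape to a segmentation mask
u_pp = broadcast(t_pp, m_pp) # region-wise broadcast texture into a 2D feature map
x_pp = G_t(u_pp, m_pp)       # render the final updated outfit
print(x_pp.shape)            # torch.Size([1, 3, 64, 64])
```

The point here is the data flow rather than the layers themselves: shape and texture are encoded and edited separately, and only recombined at the very end when the texture generator renders the updated outfit over the new segmentation.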
If you’d like to take a look at Fashion++ in action, check out the publication written by Wei-Lin Hsiao, Isay Katsman, Chao-Yuan Wu, Devi Parikh (@deviparikh), and Kristen Grauman. The results in the paper are pretty impressive and raise the question…when can we use it? @deviparikh commented on Twitter that it will be open sourced soon! You can take a look at some other cool fashion projects by Wei-Lin (Kimberly) Hsiao here, and find more computer vision work from Kristen Grauman’s lab here.