The Neural Networks Behind DeepNude and Their Impact
DeepNude raised important ethical questions about AI use.
What is DeepNude and why is it still talked about today?
DeepNude is an artificial intelligence program that sparked widespread discussion for its ability to modify photographs of clothed individuals so that the subjects appear nude. The software was developed by a Russian programmer known by the pseudonym Alberto. According to his statements, the idea originated as a technical experiment inspired by the famous "X-ray glasses" advertised in the 1960s and 1970s. However, DeepNude was withdrawn shortly after its release amid criticism over the non-consensual use of images and the ethical risks associated with this type of artificial intelligence.
What AI technologies make DeepNude possible?
At the core of DeepNude are two key machine-learning concepts: GANs (Generative Adversarial Networks) and the pix2pix technique. A GAN is a type of neural network composed of two models trained against each other: a generator that creates images, and a discriminator that judges whether a generated image looks real. Over time, the generator becomes increasingly skilled at producing realistic images, while the discriminator improves at spotting fakes. It is a two-player game that drives the network to learn how to create credible images. Pix2pix, on the other hand, is a technique for image-to-image translation: it transforms a photo from one visual format to another, for example from "clothed photo" to "nude photo" in the case of DeepNude.
How was the DeepNude algorithm trained?
The algorithm was trained on over 10,000 photographs of nude women collected from the internet. This dataset allowed the system to "learn" how various parts of the female body appear — skin, shapes, proportions, and colors — in different lighting conditions and poses. Thanks to this information, the neural network learned to recognize the areas covered by clothing and to digitally reconstruct them, imagining what would be underneath based on the patterns it learned from the dataset. It is important to note that DeepNude works almost exclusively on images of women, because the training dataset consisted almost entirely of female photos. When the algorithm is applied to male images, the results are distorted and unrealistic.
What exactly does the neural network do when it receives a photo?
When a user uploads a photo of a clothed person, the neural network performs a series of automatic steps:

1. Recognition of body contours. The algorithm identifies the shape of the body and the areas covered by clothing.
2. Image segmentation. It divides the photo into areas: clothing, skin, background, and details.
3. Artificial reconstruction. Based on the data learned during training, the network generates new pixels that represent skin and anatomical details consistent with the original pose and lighting.
4. Final composition. The new pixels "digitally replace" the clothing, creating the illusion of a nude body, while never showing a real photo of the subject.
DeepNude uses GANs: what does that mean in simple terms?
GANs work like a game between two artificial intelligences. One (the generator) creates fake images, while the other (the discriminator) tries to determine whether they are real or fake. At first, the generator produces crude images full of errors, but over thousands of iterations it learns to imitate reality ever more closely. This type of learning is what allows DeepNude to create coherent and visually realistic images, even though they are entirely artificial. GANs are now used in many other fields: from restoring old photos to creating virtual characters in video games, to photographic filters on social networks.
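The two-player game described above can be sketched with a deliberately tiny, harmless example: a one-dimensional "GAN" in plain NumPy, where the generator only has to learn to produce numbers that look like samples from a real distribution. All names, numbers, and learning rates here are illustrative, and this is a minimal sketch of the adversarial idea, not how any real image GAN is implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# "Real" data: samples from a Gaussian centred at 4.0.
def sample_real(n):
    return rng.normal(4.0, 0.5, n)

# Generator: a linear map of noise, g(z) = w_g * z + b_g.
w_g, b_g = 0.1, 0.0
# Discriminator: logistic regression on a scalar, D(x) = sigmoid(w_d * x + b_d).
w_d, b_d = 0.1, 0.0

lr, batch = 0.02, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g
    real = sample_real(batch)

    # Discriminator update: label real as 1, fake as 0 (binary cross-entropy).
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    # dBCE/d(logit) = D(x) - label
    w_d -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    b_d -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator update: non-saturating loss, minimise -log D(fake).
    d_fake = sigmoid(w_d * fake + b_d)
    grad_x = -(1.0 - d_fake) * w_d          # dLoss/d(fake sample)
    w_g -= lr * np.mean(grad_x * z)
    b_g -= lr * np.mean(grad_x)

print(f"generator mean after training: {b_g:.2f} (real mean: 4.0)")
```

After the alternating updates, the generator's output mean drifts toward the real data's mean: the discriminator's feedback is the only "teacher" the generator ever sees, which is exactly the adversarial dynamic the paragraph describes.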
What does it mean that DeepNude uses the "pix2pix" technology?
Pix2pix is a type of artificial intelligence model that transforms a photo into another visual version of the same image. For example, it can convert a sketch into a realistic face, or a road map into a satellite image. In the case of DeepNude, pix2pix was used to go from "clothed image" to "nude image," following the mapping learned during training. It is not a simple overlay of graphic layers: the model generates new pixels based on the statistics of the data it has studied.
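The core idea behind pix2pix, learning a mapping from aligned input/output image pairs, can be reduced to a toy sketch. The real pix2pix model is a conditional GAN with a U-Net generator, a PatchGAN discriminator, and an L1 reconstruction loss; the drastically simplified stand-in below fits a single affine pixel map by least squares on a made-up paired dataset (brightness inversion), purely to illustrate "paired translation", not the actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Paired training data: "input domain" images x and their "target domain"
# counterparts y. Here the true mapping is brightness inversion, y = 1 - x,
# observed with a little noise -- a stand-in for any aligned paired dataset.
x_train = rng.random((50, 8, 8))                    # 50 tiny 8x8 "images"
y_train = 1.0 - x_train + rng.normal(0, 0.01, x_train.shape)

# Toy "translator": one affine map y_hat = a*x + b, fitted by least squares
# over all paired pixels (pix2pix instead trains a deep network with an
# adversarial loss, but the supervision signal -- aligned pairs -- is the same).
X = np.stack([x_train.ravel(), np.ones(x_train.size)], axis=1)
(a, b), *_ = np.linalg.lstsq(X, y_train.ravel(), rcond=None)

def translate(img):
    """Apply the learned input -> output mapping to a new image."""
    return a * img + b

print(f"learned map: y = {a:.2f}*x + {b:.2f}")
```

The fitted coefficients recover the underlying transformation from the pairs alone, which is the sense in which pix2pix "follows the mapping learned during training" rather than pasting graphic layers on top of the input.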
Why was DeepNude withdrawn?
After its initial release in 2019, DeepNude was withdrawn by its creator. The criticisms were immediate and strong: the app could be used to create non-consensual images, violating the privacy and dignity of individuals. The risk was that anyone could upload a photo of a clothed woman and obtain a manipulated version in seconds. The creator of DeepNude stated that he did not foresee the malicious use that would be made of it and removed the project, explaining that "the world was not ready for such technology."
What is the ethical and social impact of technologies like DeepNude?
DeepNude raised fundamental ethical questions about the relationship between artificial intelligence, privacy, and consent. The same techniques used to create manipulated images are now employed in positive contexts, such as medicine, cinema, or digital restoration. However, without rules and control, they can become tools of abuse. This has prompted governments, companies, and researchers to discuss AI Ethics, calling for greater responsibility in the development and use of generative neural networks.
Are technologies like DeepNude still used today?
Although DeepNude has been removed, the underlying technology — GANs and pix2pix models — is still widely used. There are hundreds of generative AI projects that leverage similar principles for legal and creative purposes: restoring damaged photos, creating realistic avatars, or reconstructing faces from partial images. At the same time, illegal versions and clones of the original software have emerged, operating on the dark web or private channels, often with harmful purposes.
Is it possible to use techniques like those of DeepNude ethically?
Yes, but with clear limits and rules. The same neural networks that DeepNude used to generate images can be employed to reconstruct missing faces in ancient paintings, recreate landscapes, or model objects in 3D. The problem is not the technology itself, but the use made of it. Many artificial intelligence companies are now implementing control systems to prevent unauthorized manipulations of images of real people.
How are generative neural networks trained responsibly today?
Today, developers seek to use ethical datasets, composed of artificially generated images or content released under open licenses. Additionally, systems for filtering sensitive content and tracking models are adopted to know how and with what data they have been trained. This transparency is crucial to avoid repeating cases like DeepNude and to maintain public trust in artificial intelligence.