

(post is archived)

[–] 0 pt

Here's some text from the official OpenAI website; they say:

DALL·E 2 has learned the relationship between images and the text used to describe them. It uses a process called “diffusion,” which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.
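That "random dots gradually altered toward an image" process can be sketched in a few lines. This is only a toy illustration of the idea, not DALL·E's actual code: a real diffusion model learns the denoising step from millions of image-text pairs, whereas here we cheat with a known target pattern so the loop is runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "the image the text describes" (a learned model would
# infer this from the prompt; here it is hard-coded for illustration).
target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0

# Start with a pattern of random dots, as the quote describes.
x = rng.normal(size=(8, 8))

# Gradually alter the pattern toward the image, a little per step.
for step in range(50):
    x = x + 0.1 * (target - x)

print(np.abs(x - target).max())  # tiny: the noise has become the image
```

The real model's "nudge" at each step is produced by a neural network trained to predict and remove noise, conditioned on the text prompt; that conditioning is what connects the words to the emerging picture.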

How can this DALL-E take "ideas" from a database of stock photos, and through a text command, create images that are entirely unique? How does "it" understand?

[–] 4 pts

It has to have a reference.

An AI can’t create something out of nothing.

The drawings shown in the video look more like 3D scenes with filters.

[–] 2 pts (edited)

I tested DALL•E Mini. Results below in screen capture.

Try for yourself: https://huggingface.co/spaces/dalle-mini/dalle-mini

It’s the only site I could find with any version of the software. If you find a different/better one, please share.

[–] 1 pt

Here’s “Hitler playing guitar.”

https://pic8.co/a/81956172-1199-49af-9416-5ba197aaa277

[–] 1 pt

Hey, that's pretty cool and super simple to use. I'm going to add that to the sticky if it will unarchive the post for me.

[–] 0 pt

starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.

That's pretty novel. It's like the person who says, "I don't know how to make it, but I knows it when I sees it."