News

Diffusion models like OpenAI's DALL-E are becoming increasingly useful for brainstorming new designs. Humans can prompt ...
But here’s the thing: DALL-E 3 is still an AI model, not a mind reader. If you want images that actually look like what you’re imagining, you need to learn how to speak its language.
So that's what I'm doing in this article. I'm feeding the same prompt to Photoshop, Midjourney, and DALL-E inside ChatGPT using GPT-4o.
An update to DALL-E 3 lets you refine your AI-generated images within the ChatGPT interface. You’ll also find new style prompts in DALL-E to help kick-start your creativity.
One key aspect of DALL-E 3 is its improved understanding of complex and abstract prompts. Compared to earlier versions, it can generate more nuanced and contextually appropriate images.
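For readers who want to test that prompt fidelity outside the ChatGPT interface, here is a minimal sketch using the OpenAI Python SDK (v1.x). The prompt text borrows the ghoul-on-a-mountain-bike scene described below; the size, quality, and the prompt wording itself are illustrative assumptions, not settings taken from any of these articles.

```python
# Minimal sketch: generating an image with DALL-E 3 via the OpenAI Python SDK (v1.x).
# The prompt, size, and quality below are illustrative assumptions, not values from
# the articles above. Requires OPENAI_API_KEY to be set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt=(
        "A ghoul in a heavy metal outfit mountain biking through a "
        "post-apocalyptic urban landscape, cinematic lighting"
    ),
    size="1024x1024",
    quality="standard",
    n=1,  # DALL-E 3 accepts only one image per request
)

# DALL-E 3 may rewrite your prompt before generating; revised_prompt shows what it actually used.
print(response.data[0].revised_prompt)
print(response.data[0].url)
```

Comparing the revised_prompt against your original is a quick way to see how the model interprets complex or abstract wording.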
So, let's have ourselves a time-honored showdown, AI style. We'll pit DALL-E 3 in ChatGPT against Midjourney in eight image comparison tests. And because it's Halloween season, that's our theme.
DALL-E 3 produced more than a dozen images of ghouls wearing heavy metal outfits and mountain biking through a post-apocalyptic urban landscape, but it struggled with pedals and gears.
But try the same prompt in DALL-E 3 running on Bing, and the people look far more realistic, with believable lighting, more lifelike skin textures, and even tangible flour dust on their hands.