Stability AI launches Stable Diffusion 3, its ‘most sophisticated image generation model yet’
Stability AI today launched Stable Diffusion 3 Medium, which the British startup calls its “most advanced text-to-image open model yet.” Comprising 2 billion parameters, SD3 Medium promises photorealistic results without complex workflows. Crucially, the model can generate these images while running on individual consumer systems. It also overcomes common artefacts in hands and faces, Stability said. The company built SD3 Medium to understand complex prompts involving spatial relationships, compositional elements, actions, and styles. Typography has also been enhanced, with Stability describing the text generation accuracy as “unprecedented.” The company attributes these improvements to the Diffusion Transformer architecture. Another core attraction is the model’s size.…
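For readers who want to try the claim that SD3 Medium runs on individual consumer systems, here is a minimal sketch assuming the open weights are used through Hugging Face’s diffusers library; the model ID and generation settings below are illustrative, so check the official model card for the exact identifiers and licence terms.

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Load the SD3 Medium weights in half precision to fit consumer GPUs.
# Model ID assumed from the Hugging Face release; verify against the model card.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
)
pipe.to("cuda")
# On cards with limited VRAM, offloading can help instead of .to("cuda"):
# pipe.enable_model_cpu_offload()

# A compositional prompt of the kind the announcement highlights
# (spatial relationships plus legible typography).
image = pipe(
    prompt='a red sphere balanced on a blue cube, with a sign reading "SD3 Medium"',
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3_medium_sample.png")
```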
This story continues at The Next Web