
Figma explains how its AI tool ripped off Apple’s design

Image: Cath Virginia / The Verge

Figma recently pulled its “Make Designs” generative AI tool after a user discovered that asking it to design a weather app would spit out something suspiciously similar to Apple’s weather app — a result that could, among other things, land a user in legal trouble. This also suggested that Figma may have trained the feature on Apple’s designs, and while CEO Dylan Field was quick to say that the company didn’t train the tool on Figma content or app designs, the company has now released a full statement in a company blog post.

The statement says that Figma “carefully reviewed” Make Designs’ underlying design systems during development and as part of a private beta. “But in the week leading up to Config, new components and example screens were added that we simply didn’t vet carefully enough,” writes Noah Levin, Figma VP of product design. “A few of those assets were similar to aspects of real world applications, and appeared in the output of the feature with certain prompts.”

Once Figma identified the issue with the design systems, “we removed the assets that were the source of the similarities from the design system and disabled the feature,” Levin says. The company is working through “an improved QA process” before bringing back Make Designs, though Levin did not provide a timeline. (In an interview with The Verge earlier this month, CTO Kris Rasmussen said the company expected to re-enable the feature “soon.”)

Figma launched Make Designs in a limited beta as part of its Config event announcements, but shortly after, the Apple-like mockups were posted on X. Figma pulled the feature, with Field taking responsibility for pushing the team to meet a deadline for Config. In our interview, Rasmussen said that Figma didn’t train the AI models powering the tool — which include OpenAI’s GPT-4o and Amazon’s Titan Image Generator G1 — at all.

In the blog post, Levin also went into some detail about the design systems powering the tool.

To give the model enough freedom to compose designs from a wide variety of domains, we commissioned two extensive design systems (one for mobile and one for desktop) with hundreds of components, as well as examples of different ways these components can be assembled to guide the output.
We feed metadata from these hand-crafted components and examples into the context window of the model along with the prompt the user enters describing their design goals. The model then effectively assembles a subset of these components, inspired by the examples, into fully parameterized designs. From there, Amazon Titan, a diffusion model, creates the images needed for the design. It’s more or less as simple as AI helping you identify, arrange, fill out, and theme small composable templates from a design system to give you a jumping off point.
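Levin’s description amounts to a three-step pipeline: fold component metadata and the user’s prompt into the model’s context, let the language model assemble and parameterize a subset of components, then have a diffusion model generate any imagery. The sketch below illustrates that flow with entirely made-up names and toy stand-ins for the models — it is a reading of the blog post, not Figma’s actual code.

```python
# Hypothetical sketch of the flow Levin describes. Every function and
# field name here is illustrative; none of this is Figma's real API.

def build_context(design_system: dict, user_prompt: str) -> str:
    """Fold hand-crafted component metadata and assembly examples in
    with the user's prompt, as described in the blog post."""
    lines = [c["metadata"] for c in design_system["components"]]
    lines += design_system["examples"]
    lines.append(f"User goal: {user_prompt}")
    return "\n".join(lines)

def make_design(design_system: dict, user_prompt: str, llm, diffusion) -> dict:
    """The LLM assembles a parameterized design from the components;
    a diffusion model (Amazon Titan, per the post) fills image slots."""
    context = build_context(design_system, user_prompt)
    design = llm(context)  # e.g. {"components": [...], "image_slots": [...]}
    for slot in design["image_slots"]:
        slot["image"] = diffusion(slot["description"])
    return design

# Toy stand-ins for the two models, just to exercise the pipeline:
system = {
    "components": [{"metadata": "card: title, subtitle, hero image"}],
    "examples": ["weather app = card + forecast list"],
}
fake_llm = lambda ctx: {"components": ["card"],
                        "image_slots": [{"description": "sunny sky"}]}
fake_diffusion = lambda desc: f"<image of {desc}>"

design = make_design(system, "a weather app", fake_llm, fake_diffusion)
```

The notable design choice, if the post is read literally, is that the generative step is constrained to arranging pre-built components rather than drawing pixels freely — which is also why unvetted components added to the design system could surface near-verbatim in the output.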

Levin didn’t specify who Figma commissioned for the systems. The company declined to comment.

Image: Figma
Figma’s caption: “Example components in our handmade design system.”

At Config, Figma announced other AI tools, like one that generates text for designs, and those features are still available. The company also laid out its AI training policies. Users have until August 15th to opt in or out of allowing Figma to train on their data for potential future models. (Users on Starter and Professional plans are opted in by default, and users on Organization and Enterprise plans are opted out by default.)
