Figma, a popular design tool, recently faced backlash after its AI feature, Make Designs, generated weather app designs that closely resembled Apple's Weather app. The incident raised questions about how the tool was trained and about the potential legal implications.
Figma said it did not train the tool on Figma content or specific app designs, but some components of the underlying design systems turned out to closely resemble real-world applications. A lack of oversight in vetting those assets contributed to the problem and ultimately led to the feature being pulled.
Response and Actions Taken
After identifying the issue, Figma promptly removed the problematic assets and disabled the feature. The company is now working on an improved quality assurance process before re-enabling Make Designs, though it has not given a timeline for the feature's return.
Figma clarified that the AI models powering the tool, OpenAI's GPT-4o and Amazon's Titan Image Generator G1, were not trained on specific designs. The design systems built for the tool were extensive, containing hundreds of components for mobile and desktop applications. These components were used to guide the tool's output, which then produced fully parameterized designs using Amazon Titan's diffusion model.
Despite the setback with Make Designs, Figma announced other AI tools at its Config event. Users have until August 15th to opt in or out of allowing Figma to train potential future AI models on their data. The company also emphasized its commitment to improving its oversight and training processes for AI tools.
Figma's Make Designs incident serves as a cautionary tale about the importance of thorough oversight and vetting in AI training pipelines. The company's response and its commitment to a stronger quality assurance process are positive steps toward preventing similar incidents. As AI plays an increasingly significant role in design tools, it is essential for companies like Figma to prioritize transparency and user privacy in their AI training practices.