Google has introduced a significant update to its AI-powered clothing try-on feature, allowing users to virtually test garments using just a selfie, as reported by TechCrunch. Previously, individuals needed to upload a full-body image to experiment with apparel virtually. Now, with the introduction of Nano Banana, Google’s Gemini 2.5 Flash Image model, users can generate a full-body digital representation based solely on a selfie.
After selecting their typical clothing size, users receive multiple generated images and can pick a preferred one as their default try-on photo. Alternatively, users can still upload a full-body picture or choose from models with diverse body types.
The new functionality is rolling out in the United States. The try-on feature itself launched in July and lets users experiment with apparel items from Google’s Shopping Graph across Search, Google Shopping, and Google Images.
Google’s commitment to enhancing the virtual AI try-on experience is further evidenced by the development of the Doppl app, dedicated to visualizing different outfits through AI technology. The recent update to Doppl introduces a shoppable discovery feed, suggesting personalized outfit recommendations with direct shopping links.
Through AI-generated videos and personalized style suggestions, Google aims to present products in a familiar format, similar to popular social media platforms. This strategic move aligns with the tech giant’s focus on leveraging AI to enhance user experiences in the digital shopping realm.
Source: TechCrunch