On Wednesday, Meta published a new AI model called the Segment Anything Model (SAM), which can isolate individual objects within images. Alongside it, the company released what it describes as the largest image annotation dataset to date. SAM can identify objects in photos and videos even when those objects were absent from its training data. Users select an object by clicking on it in the image or by entering a text prompt describing it; typing “cat,” for instance, prompts the tool to draw boxes around every cat in the picture.
The recent popularity of OpenAI’s ChatGPT chatbot has sparked a flurry of investment in AI. Meta has also unveiled several AI tools of its own, including a video generator that creates surrealist clips from text and an illustration tool that produces images from written stories. CEO Mark Zuckerberg has said that incorporating these generative AI “creative aids” into Meta’s apps is a priority this year.
Meta has been using technology similar to SAM internally to tag photos, moderate prohibited content, and decide which posts to recommend to Facebook and Instagram users. Releasing SAM makes this kind of technology available to a far wider audience. The model and dataset can be downloaded under a non-commercial license, and users who upload their own images to the accompanying prototype must agree to use it only for research.