The artificial intelligence that crops your Twitter photos is getting a lot smarter.
In a new blog post, Twitter machine learning researchers Zehan Wang and Lucas Theis describe the company’s new approach to cropping your photos into preview thumbnails.
Twitter has been working on this tool for a while, but the post is the first detailed description of researchers’ methods and process. The feature is currently rolling out to all Twitter users, and aims to put an end to awkwardly cropped thumbnails.
Previously, when deciding which part of your images to display as the preview image, Twitter looked for the most prominent face. For pictures containing no faces, Twitter displayed the center of the image. If you’ve ever seen an awkward thumbnail of a cat’s neck or a white wall, that process was responsible.
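That old heuristic can be sketched in a few lines. This is an illustrative reconstruction, not Twitter's code; the `faces` list of `(x, y, w, h)` detection boxes is a hypothetical input standing in for whatever face detector the pipeline used.

```python
def old_crop_center(img_w, img_h, faces):
    """Return the (cx, cy) point to center the thumbnail crop on.

    Sketch of the pre-saliency heuristic: use the most prominent
    (here, largest) detected face, else fall back to the image center.
    `faces` is a hypothetical list of (x, y, w, h) boxes."""
    if faces:
        # Pick the largest face by area and center the crop on it.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        return (x + w // 2, y + h // 2)
    # No faces detected: crop the middle of the image,
    # which is how those awkward cat-neck thumbnails happened.
    return (img_w // 2, img_h // 2)
```

With no faces, `old_crop_center(1000, 800, [])` returns the center `(500, 400)`; with detections, the largest box wins.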
Going forward, Twitter will crop using “saliency.” Saliency refers to how interesting a region of an image is: how likely a viewer is to focus on it. The researchers cite studies showing that people tend to pay most attention to faces, text, animals, and regions of high color contrast.
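Once a model has scored each region of the image for saliency, choosing the crop reduces to a search problem. The brute-force sketch below (an assumption about the general approach, not Twitter's implementation) slides a fixed-size window over a 2D grid of saliency scores and keeps the window with the highest total:

```python
def best_crop(saliency, crop_h, crop_w):
    """Return (top, left) of the crop window with the highest total saliency.

    `saliency` is a 2D list of per-cell scores, higher = more interesting.
    Brute force for clarity; a summed-area table would make this fast."""
    rows, cols = len(saliency), len(saliency[0])
    best_total, best_pos = float("-inf"), (0, 0)
    for top in range(rows - crop_h + 1):
        for left in range(cols - crop_w + 1):
            total = sum(saliency[r][c]
                        for r in range(top, top + crop_h)
                        for c in range(left, left + crop_w))
            if total > best_total:
                best_total, best_pos = total, (top, left)
    return best_pos
```

On a toy 4×4 map where the interesting content sits in the middle, a 2×2 crop snaps to that region rather than the geometric center.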
The researchers have trained Twitter’s neural networks to find the most interesting parts of your photos in a very short amount of time, so you won’t notice delays while posting your photos.
Software engineers used a technique called “knowledge distillation” to train their algorithms to quickly approximate the most salient parts of your photos. While fine-tuned, pixel-level predictions would take too long, Twitter’s neural networks produce a speedy, rougher version so your photos post without delay.
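The core idea of knowledge distillation is that a small, fast “student” model is trained to reproduce the outputs of a large, slow “teacher.” The sketch below is a deliberately minimal illustration under stated assumptions (a one-dimensional linear student trained by gradient descent, nothing like Twitter's actual networks), but the training loop has the distillation shape: the targets come from the teacher, not from labeled data.

```python
def teacher(x):
    # Stand-in for an expensive, fine-grained saliency prediction.
    # In real distillation this would be a large neural network.
    return 3.0 * x + 1.0

def distill(samples, epochs=200, lr=0.05):
    """Fit a tiny linear student (w, b) to the teacher's outputs via SGD.

    The student never sees ground-truth labels, only the teacher's
    predictions, which is the defining trait of distillation."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x in samples:
            target = teacher(x)      # soft target from the slow teacher
            pred = w * x + b         # fast student prediction
            err = pred - target
            w -= lr * err * x        # gradient step on squared error
            b -= lr * err
    return w, b

w, b = distill([i / 10 for i in range(11)])
```

After training, the student's parameters closely match the teacher's behavior on the sampled range, and evaluating `w * x + b` is far cheaper than calling the teacher.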
Engineers also used a technique called “pruning” to make sure the algorithm skips over features of your images that will take a while to investigate without yielding much benefit.
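One common form of pruning (the blog post does not detail Twitter's exact criterion, so magnitude-based pruning here is an assumption) zeroes out weights whose contribution is too small to matter, and then skips them entirely at inference time:

```python
def prune(weights, threshold=0.05):
    """Zero out weights whose magnitude falls below `threshold`.

    Magnitude pruning sketch: small weights contribute little to the
    output, so dropping them trades a tiny accuracy loss for speed."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def fast_dot(weights, inputs):
    # Skip pruned (zero) weights instead of multiplying by them,
    # which is where the inference-time savings come from.
    return sum(w * x for w, x in zip(weights, inputs) if w != 0.0)
```

For example, `prune([0.5, 0.01, -0.25, 0.03])` keeps only the two significant weights, and `fast_dot` then touches half as many terms.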
It’s a small update, but one that could dramatically improve your Twitter feed. You’ll see fewer weird, random thumbnails, and maybe even save yourself some needless clicks.