Gymnasts were identified correctly as "Gymnastics," and I could almost understand the images that ended up classified as "Dancing" and "Circus." But the same types of images were also identified as "Basketball," "Wrestling," "Ice Skating," "Table Tennis," and "Volleyball," and there was no way to reclassify those images back to "Gymnastics."

It wasn't just the gymnasts that ended up all over the map. Photos rightly identified a stuffed puppy as "Dog" (along with live dogs), but a teddy bear ended up under "Sheep" and "Bear," and only as "Bear" five days after the first images were uploaded. I had similar experiences with the images classified under Places and Things: if an image lacked a geotag, Google Photos was inconsistent in recognizing where images were from and what they represented. Sometimes I had luck typing in more abstract search terms. But the lack of consistent identification is a concern, particularly when coupled with searches that just aren't finding all relevant images.

Google says the indexing is not instantaneous, and that matches my experience. Lieb notes that sorting begins within 24 hours of backup and continues on a 24-48 hour basis. This explains why searching by some data points (i.e., the original folder name or the location) didn't work until four or five days had passed.

Another interesting point is how the recognition works, period. The beginnings of the recognition engine lie in what we saw introduced a couple of years ago with Google+. The root here is machine learning technology, and the base technology is similar to what powers Google Image Search, but Lieb says "the clustering and search quality technologies are specifically tuned to personal photo libraries." That, perhaps, explains why the teddy bear photos were identified as "Bears," while Google Image Search saves "Bears" for the live, breathing variety and identified the stuffed animal as a "Teddy Bear."
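The kind of mix-up described above is easy to reproduce with a generic similarity-based grouping scheme. The sketch below is purely illustrative; Google has not published Google Photos' algorithm, and the feature vectors, threshold, and greedy clustering strategy here are my own assumptions, not Google's method:

```python
# Illustrative only: photos are reduced to feature vectors, and photos whose
# vectors are similar enough are grouped together. A threshold that is too
# loose merges distinct subjects; too strict, and it splits one subject apart.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def group_by_similarity(photos, threshold):
    """Greedy clustering: join a photo to the first cluster whose
    first member is within `threshold` similarity, else start a new one."""
    clusters = []  # each cluster: list of (name, vector) pairs
    for name, vec in photos:
        for cluster in clusters:
            if cosine(cluster[0][1], vec) >= threshold:
                cluster.append((name, vec))
                break
        else:
            clusters.append([(name, vec)])
    return [[name for name, _ in c] for c in clusters]

# Hypothetical toy vectors: two gymnastics shots and one basketball shot.
photos = [
    ("gymnast_a.jpg",    [0.90, 0.10, 0.20]),
    ("gymnast_b.jpg",    [0.88, 0.15, 0.22]),
    ("basketball_1.jpg", [0.20, 0.90, 0.10]),
]

print(group_by_similarity(photos, threshold=0.95))
# → [['gymnast_a.jpg', 'gymnast_b.jpg'], ['basketball_1.jpg']]
```

With visually similar subjects (a gymnast mid-leap versus a dancer or skater), the vectors can land close together, which is one plausible reason the same pose shows up under several sports.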
In addition to identifying images based on their content, Photos also uses geotags, timestamps, existing metadata the service can read (some of my images had IPTC captions), and data from the folder an image was filed in (though, with nested folders, it doesn't capture the information from the top level). Once uploaded into the Google cloud, the folder structure is flattened out and disappears.

Google Photos' search, retrieval, and tagging try to mimic how humans perceive photos, but the service lacks the more random finesse that humans add to the equation. I looked at the image clusters that perplexed me, and in many instances the facial structures that were mistakenly grouped together looked nothing alike. According to Dave Lieb, product lead at Google, the face grouping uses only attributes of the faces, not any details of hair style or clothing. That I saw certain physical similarities in images clustered as a single person was, based on what Google tells me, pure coincidence.

Not the same person, yet grouped together

And the much talked about age-progression facial recognition? In one instance where I uploaded images of an athlete from both junior and senior competitions, the algorithm didn't pick up on that: it found images of the same young woman and gave her two different thumbnails, with no duplication of the images between the two. Another reality, and more worrisome: the algorithm found some images, but nowhere close to all of the images uploaded of the same athlete.

Google says Photos will learn from your efforts to manually weed out false positives. However, doing so is a chore, and for now it can only be done on the smaller mobile screens, which makes weeding out errors across large volumes of images even more difficult. Still, I appreciate that Photos gets us as far as it does in finding people. It's frankly better than any other free solution today.
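The way those metadata signals feed search, as described in my testing, can be sketched roughly. Everything below is an assumption for illustration (the function, field names, and flattening rule are mine, not Google's API); it models the observed behavior that only the immediate folder name survives upload while higher levels of a nested path are dropped:

```python
# Hypothetical sketch of combining recognition labels with photo metadata
# into flat, searchable terms. Not Google's implementation.
from datetime import datetime
from pathlib import PurePosixPath

def searchable_terms(labels, geotag, taken_at, source_path):
    """Merge content labels, location, capture year, and the immediate
    source folder name into a flat set of lowercase search terms."""
    terms = {label.lower() for label in labels}
    if geotag:
        terms.add(geotag.lower())
    terms.add(str(taken_at.year))
    folder = PurePosixPath(source_path).parent.name  # immediate folder only
    if folder:
        terms.add(folder.lower())  # top-level "events" folder is dropped
    return terms

terms = searchable_terms(
    labels=["Gymnastics", "Dancing"],
    geotag="Glasgow",
    taken_at=datetime(2015, 6, 1),
    source_path="events/2015-worlds/floor_routine.jpg",
)
print(sorted(terms))
# → ['2015', '2015-worlds', 'dancing', 'glasgow', 'gymnastics']
```

Note that a search for "events" would find nothing here, matching the article's observation that nested top-level folder names don't make it into the index.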