Google and Stanford get neural networks to recognize situations in pictures

Researchers at Google and Stanford have separately made considerable progress on computer systems that can recognize what happens in photos and videos. During testing, the self-learning systems described numerous situations in photographs and images quite accurately.

The Google and Stanford researchers initially trained computers in a neural network on a limited number of images that people had annotated with brief descriptions. The computers then had to come up with captions for new photos on their own. The Stanford researchers published their findings in a report. The computers managed to generate fitting captions such as “a group of men playing Frisbee” and “a herd of elephants on a dry lawn”, although the software did struggle with a green kite, which it described as “a man flying through the air on a snowboard”.
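
Roughly speaking, both approaches pair an image encoder with a language model that generates a caption word by word, conditioned on the image. Below is a minimal sketch of that encoder-decoder idea in PyTorch; the tiny CNN encoder, LSTM decoder, layer sizes, vocabulary, and toy data are all illustrative assumptions for this example, not details taken from either paper.

```python
# Minimal image-captioning sketch: a CNN encodes the image to one
# feature vector, which conditions an LSTM that emits caption tokens.
# All sizes and the toy data are assumptions made for this example.
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Tiny CNN standing in for a pretrained image encoder.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, embed_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # Prepend the image feature to the word embeddings, so every
        # decoder step is conditioned on the image.
        img_feat = self.encoder(images).unsqueeze(1)        # (B, 1, E)
        word_emb = self.embed(captions)                     # (B, T, E)
        inputs = torch.cat([img_feat, word_emb], dim=1)     # (B, T+1, E)
        hidden, _ = self.decoder(inputs)
        return self.out(hidden)                             # (B, T+1, V)

# Toy training step on random data, just to show the shapes line up.
vocab_size = 1000
model = CaptionModel(vocab_size)
images = torch.randn(4, 3, 64, 64)                # batch of 4 images
captions = torch.randint(0, vocab_size, (4, 10))  # 10-token captions
logits = model(images, captions)
# Teacher forcing: position t predicts caption token t from the image
# plus the preceding tokens.
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size), captions.reshape(-1)
)
loss.backward()
print(loss.item())
```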

The Google and Stanford researchers arrived at their conclusions independently of each other. Google reports its findings in a blog post. Computers have long been able to recognize objects in photos and videos, but have had difficulty recognizing situations. The software from both research groups can only recognize patterns it has observed before, but it does so much better than existing algorithms.

The research could help automatically classify pictures and videos posted on the internet, or help people with little or no eyesight to navigate. Software with advanced pattern recognition could, however, also be used for surveillance: camera footage could be analyzed automatically with it, The New York Times notes.

