News Agencies

Google, Stanford build hybrid neural networks that can explain photos

Gigaom

Two separate groups of researchers at Google and Stanford have merged best-of-breed neural network models and created systems that can accurately explain what’s happening in images.

Although their approaches differ (full papers are available here for Stanford and here for Google), both groups essentially combined deep convolutional neural networks — the type of deep learning models responsible for the huge advances in computer vision accuracy over the past few years — with recurrent neural networks that excel at text analysis and natural language processing. Recurrent neural networks have been responsible for some of the significant improvements in language understanding recently, including the machine translation that powers Microsoft’s Skype Translate and Google’s word2vec libraries.
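The encoder–decoder pairing described above can be sketched in a few lines. This is a minimal, illustrative skeleton only: the feature size, vocabulary, and random weights are hypothetical stand-ins, the "CNN" is faked as a random feature vector, and the decoder is a plain Elman RNN rather than the architectures in either paper. It shows just the core idea of conditioning a recurrent text generator on an image feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- illustrative only, not taken from either paper.
FEAT_DIM, HIDDEN, VOCAB = 8, 16, 10

# Stand-in for a CNN: the real systems run a deep convolutional
# network over the image; here we just fabricate a feature vector.
image_features = rng.normal(size=FEAT_DIM)

# Decoder parameters: project image features into the RNN's initial
# hidden state, then unroll the RNN, emitting one word id per step.
W_init = rng.normal(size=(HIDDEN, FEAT_DIM)) * 0.1
W_hh   = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1
W_xh   = rng.normal(size=(HIDDEN, VOCAB)) * 0.1
W_out  = rng.normal(size=(VOCAB, HIDDEN)) * 0.1

def greedy_caption(features, max_len=5, start_id=0):
    h = np.tanh(W_init @ features)           # image conditions the RNN
    word = start_id
    caption = []
    for _ in range(max_len):
        x = np.zeros(VOCAB)
        x[word] = 1.0                        # one-hot previous word
        h = np.tanh(W_hh @ h + W_xh @ x)     # recurrent update
        word = int(np.argmax(W_out @ h))     # greedy decoding
        caption.append(word)
    return caption

print(greedy_caption(image_features))        # a sequence of word ids
```

In the actual systems the decoded ids would index a learned vocabulary, and the weights would be trained end to end on image–caption pairs; here greedy decoding simply picks the highest-scoring word at each step.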

(Coincidentally, University of Toronto researcher and Google Distinguished Scholar Geoff Hinton was asked in a recent Reddit Ask Me Anything session, which we recapped here, about how deep learning models might account for various elements and objects present in…

View the original post (424 more words)
