Although their approaches differ (full papers are available here for Stanford and here for Google), both groups essentially combined deep convolutional neural networks — the type of deep learning models responsible for the huge advances in computer vision accuracy over the past few years — with recurrent neural networks that excel at text analysis and natural language processing. Recurrent neural networks have been responsible for some of the significant improvements in language understanding recently, including the machine translation that powers Microsoft’s Skype Translate and Google’s word2vec libraries.
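The shared idea — a convolutional network turns the image into a feature vector, and a recurrent network then generates the caption one word at a time — can be sketched in a few lines. This is a toy illustration only, not either group's actual model: the "CNN" is a stand-in linear map, the RNN is a plain Elman cell with random (untrained) weights, and the vocabulary, sizes, and `caption` function are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary; the RNN predicts a distribution over these indices.
vocab = ["<start>", "<end>", "a", "dog", "cat", "runs", "sits"]
V, E, H = len(vocab), 8, 16

# "CNN" stand-in: a fixed linear map from a flattened 8x8 image to a
# feature vector (real systems use a deep pretrained convolutional net).
W_cnn = rng.normal(0, 0.1, (E, 64))

# Simple (Elman) RNN decoder parameters, randomly initialized.
W_embed = rng.normal(0, 0.1, (V, E))   # word embeddings
W_xh = rng.normal(0, 0.1, (H, E))
W_hh = rng.normal(0, 0.1, (H, H))
W_hy = rng.normal(0, 0.1, (V, H))

def caption(image, max_len=5):
    """Greedy decoding: the image feature seeds the hidden state, then
    the RNN emits one word per step, feeding each word back in."""
    feat = W_cnn @ image.ravel()       # image -> feature vector
    h = np.tanh(W_xh @ feat)           # feature initializes the state
    word = vocab.index("<start>")
    out = []
    for _ in range(max_len):
        h = np.tanh(W_xh @ W_embed[word] + W_hh @ h)
        word = int(np.argmax(W_hy @ h))  # pick the highest-scoring word
        if vocab[word] == "<end>":
            break
        out.append(vocab[word])
    return out

words = caption(rng.normal(size=(8, 8)))
print(all(w in vocab for w in words))
```

In the published systems the two halves are trained jointly on image–caption pairs, so the gradient from the language model flows back into the visual features; here the weights are random, so the output is gibberish, but the data flow matches the description above.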
(Coincidentally, University of Toronto researcher and Google Distinguished Scholar Geoff Hinton was asked in a recent Reddit Ask Me Anything session, which we recapped here, about how deep learning models might account for various elements and objects present in…