Towards Understandable Neural Networks for High Level AI Tasks - Part 7

Published June 22, 2016, 19:00
Relating tensor product representations to lambda-calculus, tree-adjoining grammars, other 'vector symbolic architectures', and the brain - Part 7

Topics discussed in this final lecture of the series:

- programming tensor-product-representation-manipulating Gradient Symbolic Computation (GSC) networks to perform function application in the λ-calculus and tree adjunction (as in Tree-Adjoining Grammar), thereby demonstrating that GSC networks truly have complete symbol-processing (or 'algebraic') capabilities, which Gary Marcus and others (at MSR and elsewhere) have argued are required for neural networks, artificial or biological, to achieve genuine human intelligence (see the sketch after this list)
- comparison of the size of tensor product representations to the size of other schemes for encoding symbol structures in actual neural network models: contrary to many claims, tensor product representations are not larger
- preliminary neural evidence for tensor product representations (in particular, for distributed role vectors)
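The symbol-processing claims above all rest on the core TPR operations of binding fillers to roles (an outer product) and unbinding them (a linear map), which is also how structure-sensitive operations like function application are realized. The following minimal NumPy sketch illustrates those two operations; the dimensions, the symbols A and B, and the use of orthonormal role vectors are illustrative assumptions, not the lecture's actual parameters.

```python
# Minimal sketch of tensor product representation (TPR) binding and
# unbinding. Fillers, roles, and dimensions are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_fill, n_role = 4, 3  # filler and role vector dimensions (assumed)

# Random filler vectors for two symbols, A and B.
fillers = {"A": rng.standard_normal(n_fill),
           "B": rng.standard_normal(n_fill)}

# Orthonormal role vectors (rows of an orthogonal matrix) for the
# positions "left child" (r0) and "right child" (r1).
Q, _ = np.linalg.qr(rng.standard_normal((n_role, n_role)))
r0, r1 = Q[0], Q[1]

# Bind each filler to its role with an outer product, then superpose:
# T = A (x) r0 + B (x) r1 encodes the two-leaf tree (A B).
T = np.outer(fillers["A"], r0) + np.outer(fillers["B"], r1)

# Unbinding is linear: with orthonormal roles, multiplying T by a role
# vector recovers exactly the filler bound to that role. This linear
# extraction is the kind of operation a GSC network can compute.
left_child = T @ r0
assert np.allclose(left_child, fillers["A"])
```

Because unbinding is just a matrix-vector product, a network whose weights implement such linear maps can decompose and recombine TPR-encoded structures, which is the sense in which structure-sensitive operations become neurally computable.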