Learning to Construct and Reason with a Large Knowledge Base of Extracted Information

Published July 27, 2016, 1:39
Carnegie Mellon University's "Never Ending Language Learner" (NELL) has been running for over three years, and has automatically extracted from the web millions of facts concerning hundreds of thousands of entities and thousands of concepts. NELL works by coupling together many interrelated large-scale semi-supervised learning problems. In this talk I will discuss some of the technical problems we encountered in building NELL, and some of the issues involved in reasoning with this sort of large, diverse, and imperfect knowledge base. This is joint work with Tom Mitchell, Ni Lao, William Wang, and many other colleagues.
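The coupling of interrelated semi-supervised learners mentioned above can be illustrated with a toy bootstrapping sketch. This is not NELL's actual algorithm, only a minimal, hypothetical example: two category learners alternately promote extraction patterns and instances, coupled through a mutual-exclusion constraint so that neither learner drifts into the other's category. The corpus, categories, and helper function here are all invented for illustration.

```python
# Toy (pattern, noun phrase) co-occurrence pairs -- hypothetical data.
corpus = [
    ("cities such as X", "paris"),
    ("cities such as X", "london"),
    ("cities such as X", "tokyo"),
    ("mayor of X", "paris"),
    ("mayor of X", "tokyo"),
    ("companies such as X", "google"),
    ("mayor of X", "london"),
    ("companies such as X", "apple"),
]

def bootstrap(seeds, other_category, rounds=2):
    """Alternately promote patterns and instances for one category.

    Coupling via mutual exclusion: a noun phrase already claimed by the
    other (disjoint) category is never promoted into this one.
    """
    instances = set(seeds)
    for _ in range(rounds):
        # Promote patterns that co-occur with known instances.
        patterns = {p for p, np in corpus if np in instances}
        # Promote new instances proposed by trusted patterns,
        # unless the mutually exclusive category claims them.
        for p, np in corpus:
            if p in patterns and np not in other_category:
                instances.add(np)
    return instances

cities = bootstrap({"paris"}, other_category={"google", "apple"})
companies = bootstrap({"google"}, other_category=cities)
print(sorted(cities))     # ['london', 'paris', 'tokyo']
print(sorted(companies))  # ['apple', 'google']
```

The mutual-exclusion check is what makes the learners "coupled" rather than independent: errors that one bootstrapper would otherwise accept are vetoed by evidence held by another, which is one way such systems resist the semantic drift that plagues uncoupled bootstrapping.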