Microsoft Research
Published August 22, 2019, 16:46
Crowd, Cloud and the Future of Work: Welcome and Updates from Microsoft Research in Human-AI Research
The Future of Work includes innovating on models that scale complex problems out into micro-contributions, combining curated data from experts and non-experts to achieve consensus-driven and/or expert-like performance.
How do we effectively combine the cognitive resources of many people? We want to achieve expert-like performance, evaluate and compare models, and standardize methods for validating and characterizing that performance. Performance can be measured by the quality of the resulting data, the accuracy of labels, and compute efficiency. One solution is access to citizen-science and crowdsourcing services in the cloud, as several projects have demonstrated over the past few years.
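The consensus-driven labeling idea above can be sketched in a few lines. This is a minimal illustration, not any specific system from the workshop: it assumes each item receives several crowd votes, aggregates them by majority vote, and measures label accuracy against hypothetical expert ("gold") labels.

```python
from collections import Counter

def majority_vote(votes):
    """Aggregate one item's crowd votes into a single consensus label."""
    return Counter(votes).most_common(1)[0][0]

def consensus_accuracy(crowd_votes, gold_labels):
    """Fraction of items whose consensus label matches the expert label."""
    consensus = [majority_vote(votes) for votes in crowd_votes]
    correct = sum(c == g for c, g in zip(consensus, gold_labels))
    return correct / len(gold_labels)

# Hypothetical data: three non-expert votes per item, plus expert gold labels.
crowd = [["cat", "cat", "dog"], ["dog", "dog", "dog"], ["cat", "dog", "dog"]]
gold = ["cat", "dog", "cat"]
print(consensus_accuracy(crowd, gold))  # 2 of 3 consensus labels match the expert
```

Real crowdsourcing pipelines typically go beyond simple majority voting, e.g. by weighting annotators by estimated reliability, but the accuracy-against-gold measurement shown here is the same.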
We have recently seen great results from efforts that demonstrate the power of citizen-science-based gaming approaches, for example 1) in advancing Alzheimer's research, or 2) in neuroscience, which increasingly turns to citizen science for data curation and for expediting processing pipelines as studies scale to thousands of samples. Is it possible to build a generalized set of data-management and cloud-based services that allow citizen-science / crowdsourced approaches to achieve faster and higher-quality results than traditional benchmarks? What else is missing from crowdsourcing frameworks to make them useful? This workshop will discuss research efforts in crowd, cloud, and the future of work: the positive outcomes observed so far, future directions, and aspirations.
See more at microsoft.com/en-us/research/v...