Microsoft Research
Published December 7, 2022, 2:00
Research Talk
Pascale Fung, Hong Kong University of Science & Technology
The AI “arms race” has reached a point where organizations in different countries are competing to build ever larger “language” models in text, speech, images, and so on, trained on ever larger collections of data. Our society in general, and our users in particular, are demanding that AI technology be more responsible: more robust, fairer, more explainable, more trustworthy. Natural language processing technologies built on top of these large pre-trained language models are expected to align with these and other human “values” because they impact our lives directly. The core challenge of “value-aligned” NLP (or AI in general) is twofold: 1) What are these values, and who defines them? 2) How can NLP algorithms and models be made to align with these values? Different cultures and communities may approach ethical issues differently, and even when people from different cultures agree on a set of common principles, they may disagree on how to implement them. We must therefore anticipate that value definition will be dynamic and multidisciplinary. I propose that we modularize value definitions so that they are external to the development of NLP algorithms and of large pretrained language models, and that we encapsulate the language model to preserve its integrity. I also argue that value definition should not be left in the hands of NLP/AI researchers or engineers: at best, we can be involved at the value-definition stage, but engineers and developers should not be the ones who decide what the values should be. Moreover, some values are now enshrined in legal requirements, which further argues for disentangling value definition from algorithm and model development. In this talk, I will present initial experiments on value-based NLP in which the input to an NLP system can include human-defined values or ethical principles that lead to different output results.
I propose that many NLP tasks, from classification to generation, should produce outputs according to human-defined principles for better performance and explainability.
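The modular design described above can be sketched in a few lines: the value definitions live in an external registry (maintained by domain experts, not by the model's engineers), and the pretrained model is treated as an encapsulated black box whose weights are untouched; only its input is conditioned on the chosen principle. This is a minimal illustrative sketch, not code from the talk; the registry contents, function names, and prompt format are all assumptions.

```python
# Sketch of "value-modular" NLP, assuming a prompt-conditioning approach:
# value definitions are external and swappable; the model is encapsulated.

# External value registry -- in the proposed setup this would be defined by
# ethicists, communities, or legal requirements, not by engineers.
VALUE_REGISTRY = {
    "non-maleficence": "Do not produce content that could cause harm.",
    "fairness": "Treat all demographic groups equally in your answer.",
}

def build_value_conditioned_prompt(principle_id: str, user_input: str) -> str:
    """Prepend a human-defined principle to the model input."""
    principle = VALUE_REGISTRY[principle_id]  # fails loudly on unknown values
    return f"Principle: {principle}\nInput: {user_input}\nOutput:"

def model_stub(prompt: str) -> str:
    # Stand-in for an encapsulated pretrained language model; its internals
    # never change when the value definitions change -- only its input does.
    return f"[model output conditioned on]\n{prompt}"

if __name__ == "__main__":
    prompt = build_value_conditioned_prompt("fairness",
                                            "Rank these job applicants.")
    print(model_stub(prompt))
```

Swapping the registry entry (or adding a new one) changes the system's behavior without touching the model or the task code, which is the separation of concerns the abstract argues for.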
Learn more about the Responsible AI Workshop: microsoft.com/en-us/research/e...
This workshop was part of the Microsoft Research Summit 2022: microsoft.com/en-us/research/e...