Responsible AI Research

As soon as we develop AI-supported models for analyzing human behavior, we simultaneously influence how society is shaped. Existing methods for analyzing social phenomena, however, may not be able to cope with this "feedback mechanism". It is therefore particularly important to be vigilant when artificial intelligence is used to analyze patterns in human behavior and complex social phenomena. In principle, the risk of each project should first be weighed against its benefit, and, if necessary, the project should not be pursued at all, as Microsoft Research emphasizes in an “Update on responsible AI research”.

To promote responsible, AI-supported modeling of human behavior, the researchers have published five recommendations for practice:

  1. Projects must be developed through transparent and participatory processes that are grounded in scientific theories and ethical considerations.
  2. Data, contextual information, calculation methods, and measurement results must be integrated with these scientific theories.
  3. During development, theory-based analysis models should be used; the underlying assumptions and the correctness of the data should be documented and justified, and the respective algorithmic influences should be reflected upon.
  4. Quality criteria and datasets that enable the validation of competing measurement results must be developed and justified.
  5. Possible negative consequences of analysis results must be considered, and strategies for limiting harm must be described.

Analysis methods that are fair, transparent, comprehensible, and compliant with data protection should incorporate socio-theoretical findings. For an analysis to be reliable, and thus trustworthy, the assumptions on which a model is based must be precisely documented and well justified. It must also be clear who decides exactly what is analyzed and how the results will be used.
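
One lightweight way to make such documentation concrete is a structured record kept alongside the model. The following is a minimal sketch in Python; the class names, fields, and example values are illustrative assumptions, not part of Microsoft's guidance:

```python
from dataclasses import dataclass, field

@dataclass
class ModelAssumption:
    """One documented assumption behind an analysis model."""
    statement: str       # what the model takes for granted
    justification: str   # why the assumption is considered valid
    evidence: str        # data or theory supporting it

@dataclass
class AnalysisModelCard:
    """Minimal record of who analyzes what, and under which assumptions."""
    model_name: str
    decision_owner: str  # who decides what is analyzed
    intended_use: str    # how the results will be used
    assumptions: list[ModelAssumption] = field(default_factory=list)

# Hypothetical example of filling in such a record.
card = AnalysisModelCard(
    model_name="engagement-score-v1",
    decision_owner="research ethics board",
    intended_use="aggregate reporting only, no individual-level decisions",
)
card.assumptions.append(ModelAssumption(
    statement="Click counts approximate user interest",
    justification="Established proxy in prior media-use studies",
    evidence="Internal validation study",
))
```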

To identify fairness risks in AI systems at an early stage, the measurement methods involved should first be modeled explicitly. This allows AI practitioners to spot discrepancies between theoretical concepts and their implementation early on, and to prevent their AI systems from reinforcing societal bias, unequal treatment, or discrimination.
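
As a rough illustration of such a measurement model (a minimal sketch, not Microsoft's method), one common operationalization of fairness compares positive-prediction rates across demographic groups; the predictions and group labels below are hypothetical:

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate per group (demographic parity measurement)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs and group labels.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")
# A large gap flags a discrepancy between the fairness concept
# ("similar selection rates") and the model's actual behavior.
```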

Deficiencies in datasets can also lead to unfair AI systems. To guard against this risk, AI practitioners need to understand how their systems treat factors such as age, race, gender, or socioeconomic status. For datasets to be representative and inclusive, data collection across social groups, for example among people with disabilities, must be improved.
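
A simple first check for such deficiencies is to compare a dataset's group shares against known population shares. This is a sketch under assumed inputs; the attribute names and reference shares are invented for illustration:

```python
def representation_audit(records, attribute, reference_shares):
    """Compare a dataset's group shares against reference population shares."""
    counts = {}
    for record in records:
        group = record[attribute]
        counts[group] = counts.get(group, 0) + 1
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        report[group] = {"observed": observed, "expected": expected,
                         "underrepresented": observed < expected}
    return report

# Hypothetical records and census-style reference shares.
data = [{"age_band": "18-34"}, {"age_band": "18-34"},
        {"age_band": "35-64"}, {"age_band": "65+"}]
print(representation_audit(data, "age_band",
                           {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}))
```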

Traditional AI systems are often not inclusive: speech recognition systems, for example, frequently fail on “atypical” pronunciations, and input devices are not accessible to people with reduced mobility. On the road to inclusive AI, researchers at Microsoft Research have designed guidelines for building an accessible online infrastructure for collecting data from people with disabilities, one that respects, protects, and engages those who contribute data.

As early as 2018, Microsoft adopted six principles for the ethical use of AI. A cross-company “Aether Committee” (AI, Ethics, and Effects in Engineering and Research) monitors their implementation. As part of their commitment to responsible AI, researchers at Microsoft are working on methods that help developers of AI systems translate ethical principles into responsible action. (Source: Microsoft)

By MediaBUZZ