Lomonosov Psychology Journal
ISSN 0137-0936
eISSN 2309-9852

Socio-psychological approach in Human-AI interaction: Trust in artificial intelligence in economic decision making

Background. In the context of the digital economy, Artificial Intelligence is being actively introduced into the economic life of the country. The features of economic decision-making based on recommendations from Artificial Intelligence programs are therefore becoming a highly relevant area of research in social psychology.

Objective. The study aimed to identify the features of economic decision-making based on recommendations obtained with the help of Artificial Intelligence.

Methods. The study took place in two stages: an interview and an experiment.

Sample. At the first stage, the sample consisted of 8 people (4 men and 4 women, aged 18–45). At the second stage, the sample consisted of 289 people (48 men and 241 women, aged 18–25).

Results. At the first stage, possible factors of trust and distrust in Artificial Intelligence-based programs were formulated, including those specific to the context of economic decisions.

At the second stage, an experiment was conducted in which participants played a stock exchange simulator. During the game, participants had the option of consulting an economic adviser: in the experimental group the adviser was a program based on Artificial Intelligence, in the control group it was a person. A total of 5652 economic decisions made by the participants were analyzed in terms of their degree of risk.

Conclusion. 1. Using a recommendation in the process of making an economic decision has a significant impact on the willingness to risk a resource. 2. When participants agreed with the recommendation, their willingness to take risks was higher if the source of the recommendation was a program based on Artificial Intelligence. 3. This effect can be explained by the specifics of the economic decision-making situation: the task is fairly formal, the time available to solve it is limited, and the decision is accompanied by a high degree of uncertainty about its consequences.


Received: 03/21/2022

Accepted: 06/02/2022

Published: 10/31/2022

Keywords: Artificial intelligence; economic decision making; trust in technology; economic behavior; experiment; human-AI interaction

Available online since: 10/31/2022

Issue 3, 2022