(Why) Do We Trust AI?: A Case of AI-based Health Chatbots

Authors

DOI:

https://doi.org/10.3127/ajis.v28.4235

Keywords:

Artificial Intelligence, Health Chatbot, Trust in Technology, Explainability, Free Simulation Experiment, Contextualization

Abstract

Automated chatbots powered by artificial intelligence (AI) can act as a ubiquitous point of contact, improving access to healthcare and empowering users to make effective decisions. However, despite the potential benefits, emerging literature suggests that apprehensions linked to the distinctive features of AI technology and the specific context of use (healthcare) could undermine consumer trust and hinder widespread adoption. Although the role of trust is considered pivotal to the acceptance of healthcare technologies, a dearth of research exists that focuses on the contextual factors that drive trust in such AI-based Chatbots for Self-Diagnosis (AICSD). Accordingly, a contextual model based on the trust-in-technology framework was developed to understand the determinants of consumers’ trust in AICSD and its behavioral consequences. It was validated using a free simulation experiment study in India (N = 202). Perceived anthropomorphism, perceived information quality, perceived explainability, disposition to trust technology, and perceived service quality influence consumers’ trust in AICSD. In turn, trust, privacy risk, health risk, and gender determine the intention to use. The research contributes by developing and validating a context-specific model for explaining trust in AICSD that could aid developers and marketers in enhancing consumers’ trust in and adoption of AICSD.

Author Biographies

Ashish Viswanath Prakash, Indian Institute of Management Tiruchirappalli, Tamil Nadu, India

Dr. Ashish Viswanath Prakash is an Assistant Professor of Information Systems and Analytics at the Indian Institute of Management Tiruchirappalli. His research explores the complex interplay between technological and social systems, encompassing fields such as information systems, human-computer interaction, and the ethics of technology. His work, which addresses various organizational, behavioral, and ethical issues related to the use of emerging technologies, has been published in leading academic journals, including Information & Management, International Journal of Information Management, Pacific Asia Journal of the Association for Information Systems, and Health Policy & Technology.

Saini Das, Indian Institute of Technology Kharagpur

Dr. Saini Das is an Assistant Professor at the Vinod Gupta School of Management, Indian Institute of Technology, Kharagpur, India. Her major research interests are in managing information security risks in networks, management information systems, e-commerce technology and applications, data privacy, digital piracy, data analytics, and artificial intelligence. She has authored publications in several international journals of repute, including Information & Management, Decision Support Systems, Behaviour & Information Technology, Information Systems Frontiers, Online Information Review, and Journal of Information Privacy and Security. [e-mail: saini@vgsom.iitkgp.ac.in]

References

Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160.

Adam, M., Wessel, M., & Benlian, A. (2021). AI-based chatbots in customer service and their effects on user compliance. Electronic Markets, 31(2), 427–445. https://doi.org/10.1007/s12525-020-00414-7

Adebesin, F., & Mwalugha, R. (2020). The Mediating Role of Organizational Reputation and Trust in the Intention to Use Wearable Health Devices: Cross-Country Study. JMIR Mhealth Uhealth, 8(6), e16721. https://doi.org/10.2196/16721

Akter, S., Ray, P., & D'Ambra, J. (2013). Continuance of mHealth services at the bottom of the pyramid: The roles of service quality and trust. Electronic Markets, 23(1), 29–47. https://doi.org/10.1007/s12525-012-0091-5

AlGhamdi, K. M., & Moussa, N. A. (2012). Internet use by the public to search for health-related information. International Journal of Medical Informatics, 81(6), 363-373.

Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189. https://doi.org/10.1016/j.chb.2018.03.051

Asan, O., & Choudhury, A. (2021). Research trends in artificial intelligence applications in human factors health care: mapping review. JMIR Human Factors, 8(2), e28236.

Atkinson, N., Saperstein, S., & Pleis, J. (2009). Using the internet for health-related activities: findings from a national probability sample. Journal of Medical Internet Research, 11(1), e1035.

Bach, T. A., Khan, A., Hallock, H., Beltrão, G., & Sousa, S. (2022). A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective. International Journal of Human–Computer Interaction, 1-16.

Bacharach, M., & Gambetta, D. (2001). Trust in signs. In K. Cook (Ed.), Trust in Society (pp. 148–184). New York, NY, USA: Russell Sage Foundation.

Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2009). Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. International Journal of Social Robotics, 1(1), 71–81. https://doi.org/10.1007/s12369-008-0001-3

Baumann, E., Czerwinski, F., & Reifegerste, D. (2017). Gender-specific determinants and patterns of online health information seeking: results from a representative German health survey. Journal of Medical Internet Research, 19(4), e92.

Bedué, P., & Fritzsche, A. (2022). Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. Journal of Enterprise Information Management, 35(2), 530-549.

Benbasat, I., & Wang, W. (2005). Trust in and adoption of online recommendation agents. Journal of the Association for Information Systems, 6(3), 4.

Berezowska, A., Fischer, A. R. H., Ronteltap, A., van der Lans, I. A., & van Trijp, H. C. M. (2015). Consumer adoption of personalised nutrition services from the perspective of a risk–benefit trade-off. Genes & Nutrition, 10(6), 42. https://doi.org/10.1007/s12263-015-0478-y

Bickmore, T. W., Trinh, H., Olafsson, S., O’Leary, T. K., Asadi, R., Rickles, N. M., & Cruz, R. (2018). Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant. Journal of Medical Internet Research, 20(9), e11510. https://doi.org/10.2196/11510

Brandtzaeg, P. B., Pultier, A., & Moen, G. M. (2019). Losing control to data-hungry apps: A mixed-methods approach to mobile app privacy. Social Science Computer Review, 37(4), 466-488. https://doi.org/10.1177/0894439318777706

Breward, M., Hassanein, K., & Head, M. (2017). Understanding consumers’ attitudes toward controversial information technologies: A contextualization approach. Information Systems Research, 28(4), 760-774.

Chi, O. H., Jia, S., Li, Y., & Gursoy, D. (2021). Developing a formative scale to measure consumers’ trust toward interaction with artificially intelligent (AI) social robots in service delivery. Computers in Human Behavior, 118, 106700. https://doi.org/10.1016/j.chb.2021.106700

Chiles, T. H., & McMackin, J. F. (1996). Integrating Variable Risk Preferences, Trust, and Transaction Cost Economics. Academy of Management Review, 21(1), 73–99. https://doi.org/10.5465/amr.1996.9602161566

Chiou, J.-S., & Droge, C. (2006). Service quality, trust, specific asset investment, and expertise: Direct and indirect effects in a satisfaction-loyalty framework. Journal of the Academy of Marketing Science, 34(4), 613. https://doi.org/10.1177/0092070306286934

Christensen, C. M., & Raynor, M. E. (2003). Why hard-nosed executives should care about management theory. Harvard Business Review, 81(9), 66-75.

Chui, K. T., Liu, R. W., Lytras, M. D., & Zhao, M. (2019). Big data and IoT solution for patient behaviour monitoring. Behaviour & Information Technology, 38(9), 940-949.

Ćirković, A. (2020). Evaluation of Four Artificial Intelligence–Assisted Self-Diagnosis Apps on Three Diagnoses: Two-Year Follow-Up Study. Journal of Medical Internet Research, 22(12), e18097. https://doi.org/10.2196/18097

Cocosila, M., & Turel, O. (2016). A dual-risk model of user adoption of mobile-based smoking cessation support services. Behaviour & Information Technology, 35(7), 526-535.

Cocosila, M., Turel, O., Archer, N., & Yuan, Y. (2007). Perceived health risks of 3G cell phones: do users care?. Communications of the ACM, 50(6), 89-92.

Cook, K. (ed.). (2001). Trust in society. New York, NY, USA: Russell Sage Foundation.

Cramer, H., Evers, V., Ramlal, S., Van Someren, M., Rutledge, L., Stash, N., ... & Wielinga, B. (2008). The effects of transparency on trust in and acceptance of a content-based art recommender. User Modeling and User-Adapted Interaction, 18, 455-496.

Cronin, J. J., Brady, M. K., & Hult, G. T. M. (2000). Assessing the effects of quality, value, and customer satisfaction on consumer behavioral intentions in service environments. Journal of Retailing, 76(2), 193–218. https://doi.org/10.1016/S0022-4359(00)00028-2

Culley, K. E., & Madhavan, P. (2013). A note of caution regarding anthropomorphism in HCI agents. Computers in Human Behavior, 29(3), 577–579. https://doi.org/10.1016/j.chb.2012.11.023

Cunningham, S.M. (1967). The major dimensions of perceived risk. In D. F. Cox (Ed.), Risk taking and information handling in consumer behavior (pp. 82–108). Harvard University Press.

Dagger, T. S., Sweeney, J. C., & Johnson, L. W. (2007). A Hierarchical Model of Health Service Quality: Scale Development and Investigation of an Integrated Model. Journal of Service Research, 10(2), 123–142. https://doi.org/10.1177/1094670507309594

De Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A. B., McKnight, P. E., Krueger, F., & Parasuraman, R. (2016). Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 22(3), 331.

Delone, W. H., & McLean, E. R. (2003). The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. Journal of Management Information Systems, 19(4), 9–30. https://doi.org/10.1080/07421222.2003.11045748

Denecke, K., Gabarron, E., Grainger, R., Konstantinidis, S. T., Lau, A., Rivera-Romero, O., ... & Merolli, M. (2019). Artificial intelligence for participatory health: applications, impact, and future implications. Yearbook of Medical Informatics, 28(01), 165-173.

Dinev, T., & Hart, P. (2006). An Extended Privacy Calculus Model for E-Commerce Transactions. Information Systems Research, 17(1), 61–80. https://doi.org/10.1287/isre.1060.0080

Eisingerich, A. B., & Bell, S. J. (2008). Perceived Service Quality and Customer Trust: Does Enhancing Customers’ Service Knowledge Matter? Journal of Service Research, 10(3), 256–268. https://doi.org/10.1177/1094670507310769

Epley, N., Caruso, E. M., & Bazerman, M. H. (2006). When perspective taking increases taking: reactive egoism in social interaction. Journal of Personality and Social Psychology, 91(5), 872.

Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: a three-factor theory of anthropomorphism. Psychological Review, 114(4), 864.

Escoffery, C. (2018). Gender similarities and differences for e-health behaviors among US adults. Telemedicine and e-Health, 24(5), 335-343.

Fan, X., Chao, D., Zhang, Z., Wang, D., Li, X., & Tian, F. (2021). Utilization of Self-Diagnosis Health Chatbots in Real-World Settings: Case Study. Journal of Medical Internet Research, 23(1), e19928. https://doi.org/10.2196/19928

Følstad, A., & Brandtzæg, P. B. (2017). Chatbots and the new world of HCI. Interactions, 24(4), 38-42.

Fromkin, H. L., & Streufert, S. (1976). Laboratory experimentation. In M. D. Dunnette (ed.), Handbook of Industrial and Organizational Psychology (pp. 415-465). Rand McNally.

Gefen, D., Karahanna, E., & Straub, D. W. (2003a). Trust and TAM in Online Shopping: An Integrated Model. MIS Quarterly, 27(1), 51–90. https://doi.org/10.2307/30036519

Gefen, D., Srinivasan Rao, V., & Tractinsky, N. (2003). The conceptualization of trust, risk and their relationship in electronic commerce: The need for clarifications. In R. H. Sprague (Ed.), Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS 2003), Article 1174442. Institute of Electrical and Electronics Engineers (IEEE). https://doi.org/10.1109/HICSS.2003.1174442

Gille, F., Jobin, A., & Ienca, M. (2020). What we talk about when we talk about trust: Theory of trust for AI in healthcare. Intelligence-Based Medicine, 1–2, 100001. https://doi.org/10.1016/j.ibmed.2020.100001

Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304-316.

Hair Jr, J. F., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2016). A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM). SAGE Publications.

Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2–24. https://doi.org/10.1108/EBR-11-2018-0203

Haluza, D., & Wernhart, A. (2019). Does gender matter? Exploring perceptions regarding health technologies among employees and students at a medical university. International Journal of Medical Informatics, 130, 103948.

Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105-120.

Ho, C.-C., & MacDorman, K. F. (2017). Measuring the uncanny valley effect. International Journal of Social Robotics, 9(1), 129–139.

Hoffmann, C. P., Lutz, C., & Meckel, M. (2014). Digital Natives or Digital Immigrants? The Impact of User Characteristics on Online Trust. Journal of Management Information Systems, 31(3), 138–171. https://doi.org/10.1080/07421222.2014.995538

Holmes, J. G. (1991). Trust and the appraisal process in close relationships. In W. H. Jones & D. Perlman (Eds.), Advances in personal relationships: A research annual (Vol. 2, pp. 57–104). London, UK: Jessica Kingsley Publishers.

Hong, W., Chan, F. K., Thong, J. Y., Chasalow, L. C., & Dhillon, G. (2014). A framework and guidelines for context-specific theorizing in information systems research. Information Systems Research, 25(1), 111-136.

Hu, P., Lu, Y., & Gong, Y. (2021). Dual humanness and trust in conversational AI: A person-centered approach. Computers in Human Behavior, 119, 106727. https://doi.org/10.1016/j.chb.2021.106727

Jiang, J. J., Klein, G., & Carr, C. L. (2002). Measuring Information System Service Quality: SERVQUAL from the Other Side. MIS Quarterly, 26(2), 145–166. https://doi.org/10.2307/4132324

Johns, G. (2006). The essential impact of context on organizational behavior. Academy of Management Review, 31(2), 386-408.

Johnson, V. L., Kiser, A., Washington, R., & Torres, R. (2018). Limitations to the rapid adoption of M-payment services: Understanding the impact of privacy risk on M-Payment services. Computers in Human Behavior, 79, 111–122.

Kizilcec, R. F. (2016). How much information? Effects of transparency on trust in an algorithmic interface. In Proceedings of the 2016 CHI conference on human factors in computing systems (pp. 2390–2395). Association for Computing Machinery (ACM). https://doi.org/10.1145/2858036.2858402

Koller, M. (1988). Risk as a determinant of trust. Basic and Applied Social Psychology, 9(4), 265-276.

Lankton, N. K., McKnight, D. H., & Tripp, J. (2015). Technology, humanness, and trust: Rethinking trust in technology. Journal of the Association for Information Systems, 16(10), 880.

Laranjo, L., Dunn, A. G., Tong, H. L., Kocaballi, A. B., Chen, J., Bashir, R., Surian, D., Gallego, B., Magrabi, F., Lau, A. Y. S., & Coiera, E. (2018). Conversational agents in healthcare: a systematic review. Journal of the American Medical Informatics Association, 25(9), 1248–1258. https://doi.org/10.1093/jamia/ocy072

Laumer, S., Maier, C., & Gubler, F. (2019). Chatbot acceptance in healthcare: explaining user adoption of conversational agents for disease diagnosis. Proceedings of the 27th European Conference on Information Systems (ECIS), 27(1), 88. https://aisel.aisnet.org/ecis2019_rp/88

Li, H., Wu, J., Gao, Y., & Shi, Y. (2016). Examining individuals’ adoption of healthcare wearable devices: An empirical study from privacy calculus perspective. International Journal of Medical Informatics, 88, 8–17. https://doi.org/10.1016/j.ijmedinf.2015.12.010

Li, X., Hess, T. J., & Valacich, J. S. (2006). Using attitude and social influence to develop an extended trust model for information systems. ACM SIGMIS Database: the DATABASE for Advances in Information Systems, 37(2-3), 108-124.

Li, X., Hess, T. J., & Valacich, J. S. (2008). Why do we trust new technology? A study of initial trust formation with organizational information systems. The Journal of Strategic Information Systems, 17(1), 39–71. https://doi.org/10.1016/j.jsis.2008.01.001

Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. The Journal of Applied Psychology, 86(1), 114–121. https://doi.org/10.1037/0021-9010.86.1.114

London, A. J. (2019). Artificial intelligence and black‐box medical decisions: accuracy versus explainability. Hastings Center Report, 49(1), 15-21.

Malhotra, N. K., Kim, S. S., & Patil, A. (2006). Common method variance in IS research: A comparison of alternative approaches and a reanalysis of past research. Management Science, 52(12), 1865-1883.

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734. https://doi.org/10.5465/amr.1995.9508080335

McGuinness, D. L., Ding, L., Glass, A., Chang, C., Zeng, H., & Furtado, V. (2006, November). Explanation interfaces for the semantic web: Issues and models. In Proceedings of the 3rd International Semantic Web User Interaction Workshop (SWUI'06). https://hdl.handle.net/20.500.13015/4736

McKnight, D. H., & Chervany, N. L. (2001). What Trust Means in E-Commerce Customer Relationships: An Interdisciplinary Conceptual Typology. International Journal of Electronic Commerce, 6(2), 35–59. https://doi.org/10.1080/10864415.2001.11044235

Mcknight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a Specific Technology: An Investigation of Its Components and Measures. ACM Transactions on Management Information Systems, 2(2), 12:1-12:25. https://doi.org/10.1145/1985347.1985353

McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research, 13(3), 334-359.

McKnight, D. H., Kacmar, C. J., & Choudhury, V. (2004). Shifting Factors and the Ineffectiveness of Third Party Assurance Seals: A two‐stage model of initial trust in a web business. Electronic Markets, 14(3), 252-266.

McKnight, D. H., Lankton, N. K., Nicolaou, A., & Price, J. (2017). Distinguishing the effects of B2B information quality, system quality, and service outcome quality on trust and distrust. The Journal of Strategic Information Systems, 26(2), 118-141.

McTear, M., Callejas, Z., & Griol, D. (2016). Conversational Interfaces: Devices, Wearables, Virtual Agents, and Robots. In M. McTear, Z. Callejas, & D. Griol (Eds.), The Conversational Interface: Talking to Smart Devices (pp. 283–308). Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-32967-3_13

Mesbah, N., & Pumplun, L. (2020). “Hello, I’m here to help you” – Medical care where it is needed most: Seniors’ acceptance of health chatbots. Proceedings of the European Conference on Information Systems. 28(1), 209. https://aisel.aisnet.org/ecis2020_rp/209

Miner, A. S., Shah, N., Bullock, K. D., Arnow, B. A., Bailenson, J., & Hancock, J. (2019). Key Considerations for Incorporating Conversational AI in Psychotherapy. Frontiers in Psychiatry, 10. https://doi.org/10.3389/fpsyt.2019.00746

Minutolo, A., Esposito, M., & De Pietro, G. (2017). A Conversational Chatbot Based on Kowledge-Graphs for Factoid Medical Questions. In New Trends in Intelligent Software Methodologies, Tools and Techniques (pp. 139-152). Amsterdam, The Netherlands: IOS Press.

Moussawi, S., Koufaris, M., & Benbunan-Fich, R. (2021). How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electronic Markets, 31(2), 343-364. https://doi.org/10.1007/s12525-020-00411-w

Nadarzynski, T., Miles, O., Cowie, A., & Ridge, D. (2019). Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. Digital Health, 5, 2055207619871808.

Naneva, S., Sarda Gou, M., Webb, T. L., & Prescott, T. J. (2020). A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. International Journal of Social Robotics, 12(6), 1179-1201.

Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.

Nicolaou, A. I., & McKnight, D. H. (2006). Perceived information quality in data exchanges: Effects on risk, trust, and intention to use. Information Systems Research, 17(4), 332–351.

Nowak, K. L., & Rauh, C. (2005). The influence of the avatar on online perceptions of anthropomorphism, androgyny, credibility, homophily, and attraction. Journal of Computer-Mediated Communication, 11(1), 153-178.

Nundy, S., Montgomery, T., & Wachter, R. M. (2019). Promoting trust between patients and physicians in the era of artificial intelligence. JAMA, 322(6), 497-498.

O'Connor, Y., Kupper, M., & Heavin, C. (2021). Trusting Intentions Towards Robots in Healthcare: A Theoretical Framework. Proceedings of the 54th Hawaii International Conference on System Sciences, 54(1), 586. https://doi.org/10.24251/HICSS.2021.071

Okoyomon, E., Samarin, N., Wijesekera, P., Elazari Bar On, A., Vallina-Rodriguez, N., Reyes, I., ... & Egelman, S. (2019). On the ridiculousness of notice and consent: Contradictions in app privacy policies. In Workshop on Technology and Consumer Protection (ConPro 2019), in conjunction with the 39th IEEE Symposium on Security and Privacy (pp. 1-7). http://hdl.handle.net/20.500.12761/690

Orlikowski, W. J., & Iacono, C. S. (2001). Research commentary: Desperately seeking the “IT” in IT research—A call to theorizing the IT artifact. Information Systems Research, 12(2), 121-134.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(1), 12–40.

Pavlou, P. A. (2003). Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model. International Journal of Electronic Commerce, 7(3), 101-134.

Pavlou, P. A., & Gefen, D. (2004). Building effective online marketplaces with institution-based trust. Information Systems Research, 15(1), 37-59.

Pelaez, A., Chen, C.-W., & Chen, Y. X. (2019). Effects of Perceived Risk on Intention to Purchase: A Meta-Analysis. Journal of Computer Information Systems, 59(1), 73–84. https://doi.org/10.1080/08874417.2017.1300514

Pennington, R., Wilcox, H. D., & Grover, V. (2003). The Role of System Trust in Business-to-Consumer Transactions. Journal of Management Information Systems, 20(3), 197–226. https://doi.org/10.1080/07421222.2003.11045777

Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of Method Bias in Social Science Research and Recommendations on How to Control It. Annual Review of Psychology, 63(1), 539–569. https://doi.org/10.1146/annurev-psych-120710-100452

Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: a critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879. https://doi.org/10.1037/0021-9010.88.5.879.

Powell, J. (2019). Trust Me, I’m a Chatbot: How Artificial Intelligence in Health Care Fails the Turing Test. Journal of Medical Internet Research, 21(10), e16222. https://doi.org/10.2196/16222

PR Newswire. (2021, March 5). Healthcare Chatbot Market to Reach US$ 967.7 Million by 2027, Globally. PR Newswire. https://www.prnewswire.com/in/news-releases/healthcare-chatbot-market-to-reach-us-967-7-million-by-2027-globally-cagr-21-56-univdatos-market-insights-840043251.html

Prakash, A. V., & Das, S. (2020). Would you trust a bot for healthcare advice? An empirical investigation. Proceedings of the Pacific Asia Conference on Information Systems, 24(1), 62. https://aisel.aisnet.org/pacis2020/62

Pu, P., & Chen, L. (2006). Trust building with explanation interfaces. In Proceedings of the 11th International Conference on Intelligent User Interfaces (pp. 93–100). Association for Computing Machinery (ACM). https://doi.org/10.1145/1111449.1111475

Qiu, L., & Benbasat, I. (2009). Evaluating Anthropomorphic Product Recommendation Agents: A Social Relationship Perspective to Designing Information Systems. Journal of Management Information Systems, 25(4), 145–182. https://doi.org/10.2753/MIS0742-1222250405

Rai, A. (2020). Explainable AI: from black box to glass box. Journal of the Academy of Marketing Science, 48(1), 137–141. https://doi.org/10.1007/s11747-019-00710-5

Razzaki, S., Baker, A., Perov, Y., Middleton, K., Baxter, J., Mullarkey, D., Sangar, D., Taliercio, M., Butt, M., Majeed, A., DoRosario, A., Mahoney, M., & Johri, S. (2018). A comparative study of artificial intelligence and human doctors for the purpose of triage and diagnosis. arXiv. http://arxiv.org/abs/1806.10698

Rice, S. C. (2012). Reputation and uncertainty in online markets: an experimental study. Information Systems Research, 23(2), 436-452.

Robertson, N., Polonsky, M., & McQuilken, L. (2014). Are my symptoms serious Dr Google? A resource-based typology of value co-destruction in online self-diagnosis. Australasian Marketing Journal (AMJ), 22(3), 246–256. https://doi.org/10.1016/j.ausmj.2014.08.009

Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. London, UK: Penguin UK.

Salles, A., Evers, K., & Farisco, M. (2020). Anthropomorphism in AI. AJOB Neuroscience, 11(2), 88-95.

Sbaffi, L., & Rowley, J. (2017). Trust and Credibility in Web-Based Health Information: A Review and Agenda for Future Research. Journal of Medical Internet Research, 19(6), e218–e218. https://doi.org/10.2196/jmir.7579

Scorici, G., Schultz, M. D., & Seele, P. (2024). Anthropomorphization and beyond: conceptualizing humanwashing of AI-enabled machines. AI & Society, 39(2), 789–795. https://doi.org/10.1007/s00146-022-01492-1

Seele, P., & Schultz, M. D. (2022). From greenwashing to machinewashing: a model and future directions derived from reasoning by analogy. Journal of Business Ethics, 178(4), 1063-1089.

Shah, H., Warwick, K., Vallverdú, J., & Wu, D. (2016). Can machines talk? Comparison of Eliza with modern dialogue systems. Computers in Human Behavior, 58, 278–295. https://doi.org/10.1016/j.chb.2016.01.004

Sheehan, B., Jin, H. S., & Gottlieb, U. (2020). Customer service chatbots: Anthropomorphism and adoption. Journal of Business Research, 115, 14–24. https://doi.org/10.1016/j.jbusres.2020.04.030

Shin, D. (2020). User Perceptions of Algorithmic Decisions in the Personalized AI System: Perceptual Evaluation of Fairness, Accountability, Transparency, and Explainability. Journal of Broadcasting & Electronic Media, 64(4), 541–565. https://doi.org/10.1080/08838151.2020.1843357

Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551

Sillence, E., Briggs, P., Harris, P., & Fishwick, L. (2007). Health websites that people can trust–the case of hypertension. Interacting with Computers, 19(1), 32–42.

Siwicki, B. (2018, February 1). Special Report: AI voice assistants have officially arrived in healthcare. Healthcare IT News. http://www.healthcareitnews.com/news/special-report-ai-voice-assistants-have-officially-arrived-healthcare. Accessed October 22, 2021.

Söllner, M., Hoffmann, A., & Leimeister, J. M. (2016). Why different trust relationships matter for information systems users. European Journal of Information Systems, 25(3), 274–287. https://doi.org/10.1057/ejis.2015.17

Song, J., & Zahedi, F. (2007). Trust in health infomediaries. Decision Support Systems, 43(2), 390–407. https://doi.org/10.1016/j.dss.2006.11.011

Tan, Y. H., & Thoen, W. (2003). Electronic contract drafting based on risk and trust assessment. International Journal of Electronic Commerce, 7(4), 55-71.

Troshani, I., Rao Hill, S., Sherman, C., & Arthur, D. (2020). Do We Trust in AI? Role of Anthropomorphism and Intelligence. Journal of Computer Information Systems, 1–11. https://doi.org/10.1080/08874417.2020.1788473

Verberne, F. M., Ham, J., & Midden, C. J. (2015). Trusting a virtual driver that looks, acts, and thinks like you. Human Factors, 57(5), 895-909.

Wang, W., & Siau, K. L. (2018). Living with Artificial Intelligence: Developing a Theory on Trust in Health Chatbots-Research in Progress. Proceedings of the SIGHCI 2018, 4. https://aisel.aisnet.org/sighci2018/4

Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. https://doi.org/10.1016/j.jesp.2014.01.005

Wells, J. D., Valacich, J. S., & Hess, T. J. (2011). What Signal Are You Sending? How Website Quality Influences Perceptions of Product Quality and Purchase Intentions. MIS Quarterly, 35(2), 373–396. https://doi.org/10.2307/23044048

Wendel, S., Dellaert, B. G., Ronteltap, A., & Van Trijp, H. C. (2013). Consumers’ intention to use health recommendation systems to receive personalized nutrition advice. BMC Health Services research, 13(1), 126.

Whetten, D. A. (2009). An examination of the interface between context and theory applied to the study of Chinese organizations. Management and Organization Review, 5(1), 29-56.

Xie, H., Prybutok, G., Peng, X., & Prybutok, V. (2020). Determinants of trust in health information technology: An empirical investigation in the context of an online clinic appointment system. International Journal of Human–Computer Interaction, 36(12), 1095-1109.

Xie, W., Fowler-Dawson, A., & Tvauri, A. (2019). Revealing the relationship between rational fatalism and the online privacy paradox. Behaviour & Information Technology, 38(7), 742-759. https://doi.org/10.1080/0144929X.2018.1552717

Your.MD. (2021). Healthily: Health questions, answered. https://www.livehealthily.com/. Accessed November 9, 2021.

Yu, L., Luo, X., Liu, X., & Zhang, T. (2016). Can we trust the privacy policies of android apps?. Proceedings of the IEEE/IFIP International Conference on Dependable Systems and Networks, 46(1), 538-549. https://doi.org/10.1109/DSN.2016.55

Zahedi, F. & Song, J. (2008). Dynamics of Trust Revision: Using Health Infomediaries. Journal of Management Information Systems, 24(4), 225–248. https://doi.org/10.2753/MIS0742-1222240409

Zhang, X., Guo, X., Lai, K. H., Guo, F., & Li, C. (2014). Understanding gender differences in m-health adoption: a modified theory of reasoned action model. Telemedicine and e-Health, 20(1), 39-46.

Zhang, Y., Liu, C., Luo, S., Xie, Y., Liu, F., Li, X., & Zhou, Z. (2019). Factors influencing patients’ intentions to use diabetes management apps based on an extended unified theory of acceptance and use of technology model: web-based survey. Journal of Medical Internet Research, 21(8), e15023.

Zhang, Z., Genc, Y., Wang, D., Ahsen, M. E., & Fan, X. (2021). Effect of AI Explanations on Human Perceptions of Patient-Facing AI-Powered Healthcare Systems. Journal of Medical Systems, 45(6), 64. https://doi.org/10.1007/s10916-021-01743-6

Zhao, Y., Ni, Q., & Zhou, R. (2018). What factors influence the mobile health service adoption? A meta-analysis and the moderating role of age. International Journal of Information Management, 43, 342–350. https://doi.org/10.1016/j.ijinfomgt.2017.08.006

Zhou, T. (2011). The impact of privacy concern on user adoption of location‐based services. Industrial Management & Data Systems, 111(2), 212–226. https://doi.org/10.1108/02635571111115146

Zierau, N., Engel, C., Söllner, M., & Leimeister, J. M. (2020). Trust in Smart Personal Assistants: A Systematic Literature Review and Development of a Research Agenda. Proceedings of the International Conference on Wirtschaftsinformatik (WI 2020), 15(1), 1-15. https://doi.org/10.30844/wi_2020_a7-zierau

Downloads

Published

2024-05-15

How to Cite

Prakash, A. V., & Das, S. (2024). (Why) Do We Trust AI?: A Case of AI-based Health Chatbots. Australasian Journal of Information Systems, 28. https://doi.org/10.3127/ajis.v28.4235

Issue

Section

Research Articles