Regardless of whether occupational therapists approach artificial intelligence with curiosity, uncertainty, or concern, one thing is certain: AI is affecting occupational therapy too. To ensure that the profession’s core values are represented in the healthcare system’s AI solutions, it is important that occupational therapists engage in its development, writes Vera Kälin. As a researcher, she studies how human-centered artificial intelligence can support rehabilitation efforts.
Vera Kälin
Occupational therapist, PhD and rehabilitation researcher
Umeå University
[email protected]
Artificial intelligence (AI) is developing incredibly fast. It is part of our everyday lives – from navigation apps and voice assistants to automated translations and recommendations. Healthcare is no exception. AI is increasingly being integrated into clinical practice, for example in documentation systems and decision-support tools. Regardless of whether occupational therapists (OTs) feel excited, uncertain, or concerned about this development, one thing is clear: AI affects occupational therapy (OT) and OT clients, and it will continue to do so.
For this reason, it is essential that AI and its implications are actively discussed within the OT profession. OTs need the relevant knowledge and competencies to respond to AI-influenced healthcare systems and to the AI-shaped everyday lives of their clients.1 If OTs do not engage with this topic, AI applications in healthcare and technologies used by OT clients will mainly be designed and implemented by others, such as physicians, engineers, or commercial companies. This creates a risk that core OT values – and the perspectives and needs of OT clients – will not be adequately represented in these developments.1 But let us start from the beginning and look at this in more detail.
What is Artificial Intelligence (AI)?
An AI system is defined by the EU AI Act2 as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” In other words, an AI system is a computer system that can learn or adjust its behavior over time and that uses the input it receives to pursue explicit or implicit objectives, producing outputs – such as predictions, content, recommendations, or decisions – that affect the physical or virtual world.
How AI Affects OT
AI in Clinical Documentation. AI applications are being tested and implemented in healthcare settings that involve OTs. One example is automated clinical documentation.3 These systems record conversations between healthcare professionals and clients, transcribe them, and automatically generate summaries. Recent research has shown that such systems can reduce stress and burnout among physicians, decrease time spent on documentation, and free up more time for direct client contact.4
Although these developments may offer benefits, they also involve important challenges and risks. For example, if AI systems do not adequately “understand” OT outcomes, they may be of limited use for OTs or even cause harm. Consider an automated documentation system designed to summarize a conversation between a physician and a client and then, based on these summaries, automatically determine whether the client should be referred to OT. If the system does not “understand” OT outcomes, clients who could benefit from OT may not be referred.
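The referral scenario above can be made concrete with a minimal, purely hypothetical sketch. All names, terms, and the rule itself are invented for illustration; real systems are far more complex, but the failure mode is the same: a system whose vocabulary omits OT outcomes cannot flag OT-relevant needs.

```python
# Hypothetical illustration: a referral filter whose vocabulary contains
# no OT outcomes cannot recognize an OT-relevant need.

# Terms the hypothetical system "understands" (no OT outcomes included).
KNOWN_REFERRAL_TERMS = {"fracture", "medication", "surgery"}

def should_refer(summary: str) -> bool:
    """Recommend referral only if the summary contains a known term."""
    text = summary.lower()
    return any(term in text for term in KNOWN_REFERRAL_TERMS)

summary = ("Client reports difficulty with dressing and meal preparation "
           "and reduced participation in community activities.")

# The summary clearly describes OT-relevant needs, yet no referral is
# recommended, because none of the system's known terms appear in it.
assert should_refer(summary) is False
```

A medically framed summary ("pending surgery after a hip fracture") would trigger a referral, while the occupation-focused summary above would not: the gap lies entirely in which concepts the system was built to recognize.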
In our prior work, we examined how the concept of “participation in activities” – a core OT outcome – is represented in AI-based interventions, assessments, and taxonomies commonly used in health informatics. Our reviews showed that participation, from an OT perspective, is poorly represented in AI-based pediatric rehabilitation interventions and assessments.5,6 Ongoing work in adult rehabilitation suggests a similar pattern in AI-based home and community interventions.7 In addition, participation has been shown to be insufficiently represented in taxonomies widely used in health informatics.8 To address this challenge, we have begun exploring modifications to the architecture of large language models (LLMs).9 The aim is to guide these models in identifying and extracting information related to participation in a way that aligns more closely with the OT understanding of this concept.
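The cited work modifies model architectures, which cannot be shown in a few lines; as a much simpler stand-in, one common way to steer a general-purpose LLM toward a discipline-specific concept is to embed a definition in the instruction prompt. The sketch below is hypothetical (the function and wording are invented for illustration, and the definition paraphrases the common OT framing of participation as attendance plus involvement); it only assembles the prompt text.

```python
# Hypothetical sketch: embedding an OT-aligned definition of participation
# in an extraction prompt. This only builds a string; no model is called.

# Paraphrase of the common OT framing (attendance plus involvement).
OT_PARTICIPATION_DEFINITION = (
    "Participation means both attending a life situation and being "
    "involved or engaged while attending it."
)

def build_extraction_prompt(clinical_text: str) -> str:
    """Assemble an instruction prompt for participation-focused extraction."""
    return (
        "You extract information about participation from clinical text.\n"
        f"Definition: {OT_PARTICIPATION_DEFINITION}\n"
        "List every sentence describing the client's attendance in, or "
        "involvement with, daily activities.\n\n"
        f"Text: {clinical_text}"
    )

prompt = build_extraction_prompt(
    "Client attends choir weekly but feels disengaged during rehearsals."
)
assert OT_PARTICIPATION_DEFINITION in prompt
```

Prompt-level guidance like this is a lightweight complement to, not a substitute for, the architectural modifications described in the text.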
AI and Electronic Health Records (EHR). Electronic health records (EHRs), which are widely used today, provide an important data source for AI tools. These tools may include decision-support systems that, for example, recommend certain therapy services such as OT. As a result, the type of data stored in EHR systems is important for the OT profession.
In practice, many OTs prefer to use free-text fields when documenting in EHR systems. While this approach may better reflect a client’s situation, AI-facilitated decision-support systems often rely primarily on structured data (e.g., checkboxes or standardized fields) and do not incorporate free-text information. Consequently, important aspects of OT practice may be overlooked. For these reasons, it is essential that OTs take an active role in discussions about which outcomes should be included in EHR systems (e.g., as structured fields) and how documentation should be structured. This proactive engagement can help ensure that OT practice is accurately represented in AI systems and future healthcare decision-making.
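The structured-versus-free-text problem can be illustrated with a minimal, hypothetical sketch. The record layout, field names, and decision rule are all invented for illustration; the point is only that a rule reading coded fields alone silently discards everything held in the note.

```python
# Hypothetical illustration: a decision-support rule that consumes only
# structured fields never "sees" the OT-relevant detail in free text.

record = {
    # Structured fields (checkboxes / coded values) the rule consumes.
    "mobility_impaired": False,
    "fall_last_year": False,
    # Free-text note, rich in OT-relevant detail, that the rule never reads.
    "note": ("Manages transfers but has stopped cooking and no longer joins "
             "the weekly choir; reports loneliness and loss of routines."),
}

def recommend_ot(rec: dict) -> bool:
    """Recommend OT based on coded fields alone, ignoring the note."""
    return rec["mobility_impaired"] or rec["fall_last_year"]

# The note describes a clear OT need, yet no recommendation is made.
assert recommend_ot(record) is False
```

If occupational needs were captured as structured outcomes (for example, a coded field for restricted participation in daily activities), the same rule could take them into account, which is exactly why the design of EHR fields matters for the profession.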
AI Shaping Occupational Performance. Another important aspect is that AI is increasingly shaping everyday life – and therefore also the lives of OT clients. AI influences how people perform activities and participate in daily life, which in turn affects the activities and situations they need or wish to engage in. For example, preparing a meal may now involve asking an LLM, such as ChatGPT, what to cook with the ingredients available at home and how to prepare it. In this way, AI directly shapes occupational performance and, thus, OTs’ work and skills needed to support clients.
AI and Ethical Principles in OT
AI raises important ethical reflections that are highly relevant to OT.1 These include bias in AI systems, limited transparency, concerns about privacy and data security, the extensive use of planetary resources, and the implicit goals embedded in AI technologies.
Bias and Occupational Justice. AI systems learn from existing data. If certain groups – such as people with disabilities or minority populations – are underrepresented or misrepresented in these datasets, AI systems may reinforce existing inequalities. For example, AI tools used in hiring have been shown to disadvantage people with disabilities.10 From an OT perspective, this directly relates to occupational justice and respect for diversity.
Transparency and Explainability. Many AI systems function as “black boxes,” meaning it is not clear how decisions or recommendations are generated. In healthcare, this lack of transparency is problematic. From an OT perspective, therapists need to understand and explain why a particular recommendation is made. Clients also have the right to know how technology influences their care.
Privacy and Confidentiality. Data about daily activities, routines, and participation are deeply personal. From an OT perspective, when such data are collected and analyzed by AI systems, they must be handled responsibly, transparently, and with informed consent for their exact use.
Sustainability and Planetary Health. The development and maintenance of AI systems require substantial planetary resources, including a vast amount of energy and water. From an OT perspective, sustainability is closely linked to occupational justice and global health.
Implicit Goals and the Power of the Designer. AI systems can have both explicit and implicit goals.2 Explicit goals are transparent and clear to users. Implicit goals, however, may arise from commercial and institutional interests embedded in system design. These hidden priorities can subtly shape clinical decision-making, influence which interventions are recommended, or reinforce existing inequalities. This underscores the power of the designer: those who build and implement AI systems embed values, assumptions, and priorities into the technology.
Human-centered AI: A Match with Ethical Principles in OT
A promising direction in AI research is human-centered AI (HCAI).11 HCAI aims to design AI systems that support rather than replace humans. Its goal is to enhance human capabilities, enable informed decision-making, and respect human values, needs, and diversity. HCAI emphasizes transparency, explainability, fairness, privacy, and collaboration with end users. Instead of focusing mainly on efficiency or profit, it seeks to serve individuals and society as a whole.
These principles closely align with the ethical values of OT, including client-centeredness, occupation-focused practice, occupational justice, respect for diversity, and social responsibility.1,12 This strong overlap suggests that OTs have valuable expertise to contribute to HCAI – and that AI development can benefit greatly from the OT perspective.
Looking Ahead: AI and OT
AI is developing rapidly, and it is here to stay. It already influences OT practice, creating both opportunities and serious challenges – and, with them, important new tasks for OTs:
Ensuring OT Data are Well-Represented in EHRs: AI tools are increasingly built on existing digital infrastructures, particularly EHRs. Because these systems provide the data foundation for AI tools, OT outcomes and perspectives must be meaningfully represented within them.
Developing AI Competencies and Engaging in Co-Design: Knowledge, skills, and critical engagement with AI are no longer optional for OTs – they are essential. This includes proactively ensuring the quality and visibility of OT data in AI-relevant systems and reflecting on the type of AI that should be prioritized (e.g., HCAI). It also requires the competencies needed to actively address ethical issues such as privacy, bias, and transparency, as well as awareness of the power of AI designers in shaping system priorities and care processes. Consequently, OTs should actively collaborate with researchers, developers, and clients in co-design processes – and be prepared to initiate and lead such processes themselves.
References
- Kaelin VC, Nilsson I, Lindgren H. Occupational therapy in the space of artificial intelligence: Ethical considerations and human-centered efforts. Scand J Occup Ther. 2024;31(1):2421355. doi:10.1080/11038128.2024.2421355
- European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689. 2024:1-144.
- Bracken A, Reilly C, Feeley A, Sheehan E, Merghani K, Feeley I. Artificial Intelligence (AI) – Powered Documentation Systems in Healthcare: A Systematic Review. J Med Syst. 2025;49(1):28. doi:10.1007/s10916-025-02157-4
- Olson KD, Meeker D, Troup M, et al. Use of Ambient AI Scribes to Reduce Administrative Burden and Professional Burnout. JAMA Netw Open. 2025;8(10):e2534976. doi:10.1001/jamanetworkopen.2025.34976
- Kaelin VC, Valizadeh M, Salgado Z, et al. Capturing and operationalizing participation in pediatric re/habilitation research using artificial intelligence: A scoping review. Front Rehabil Sci. Published online April 14, 2022. doi:10.3389/fresc.2022.855240
- Kaelin VC, Valizadeh M, Salgado Z, Parde N, Khetani MA. Artificial intelligence in rehabilitation targeting the participation of children and youth with disabilities: Scoping review. J Med Internet Res. 2021;23(11):e25745. doi:10.2196/25745
- Quaaden K, Kaelin VC, Patomella AH. Using artificial intelligence to support participation-focused interventions in home and community settings: A scoping review protocol. May 31, 2024.
- Kaelin VC, Bosak DL, Saluja S, Newman-Griffis D, Boyd AD, Khetani MA. Representation of child and youth participation within the Unified Medical Language System (UMLS). Disabil Rehabil. Published online 2024. doi:10.1080/09638288.2024.2338191
- Guerrero E, Imms C, Granlund M, Lindgren H, Kaelin VC. Advancing pediatric rehabilitation documentation via neurosymbolic AI. In Press.
- Nugent SE, Scott-Parker S. Recruitment AI has a disability problem: Anticipating and mitigating unfair automated hiring Decisions. In: Intelligent Systems, Control and Automation: Science and Engineering. Vol 102. Springer Science and Business Media B.V.; 2022:85-96. doi:10.1007/978-3-031-09823-9_6
- Nowak A, Lukowicz P, Horodecki P. Assessing Artificial Intelligence for Humanity: Will AI be Our Biggest Ever Advance? or the Biggest Threat? [Opinion]. IEEE Technology and Society Magazine. 2018;37(4):26-34. doi:10.1109/MTS.2018.2876105
- Kaelin VC, Nilsson I, Lindgren H. Expanding Interdisciplinary Research in Human-AI Interaction to Include Occupational Therapy: The Now and the Future. Oral presentation at the Inter.HAI workshop held during the 11th International Conference on Human-Agent Interaction; December 2023; Gothenburg, Sweden. Available from: https://drive.google.com/file/d/1gthqMsX_t_0rK7C2PrxBsaoqLOtrj4ti/view
Current research on AI and occupational therapy
With financial support from the World Federation of Occupational Therapy (Thelma Cardwell Award for Research) and the Swiss Association for Occupational Therapy (EVS), Kristine Ann Carandang and Vera Kaelin are co-leading a research project to further examine the competencies OTs need in an AI-influenced context to inform OT education, practice, and research.