Who’s the New Kid on the Block? Artificial Intelligence (AI) in Speech Therapy

Digital assistants, manufacturing robots, virtual travel-booking agents, self-driving cars, marketing chatbots, smart home devices, text editors, voice-to-text converters, Siri, Alexa . . . oh my! The influence of artificial intelligence (AI) is far-reaching and touches almost every aspect of our everyday lives. The influx of AI-operated software into our homes, workplaces, and social milieus has left no one untouched by this progression of “thinking” computer systems.

Although artificial intelligence may seem new to most, it has been around since the early to mid-20th century, when Alan Turing introduced the term “computer intelligence” while lecturing on the Automatic Computing Engine at the London Mathematical Society in 1947.

Now, if you haven’t heard of Alan Turing, just think of Benedict Cumberbatch in The Imitation Game (2014). Cumberbatch starred as Turing, depicting his work as a cryptanalyst who, with a team of codebreakers, cracked Nazi codes using a computing machine called Christopher (in real life, the Bombe). That machine not only aided the British government in deciphering encrypted Nazi communications but also pioneered technology that could complete problem-solving tasks.

Now, back to the London Mathematical Society lecture: it is the earliest known public lecture on computer intelligence, during which Turing (1947) stated, “What we want is a machine that can learn from experience . . . and the possibility of letting a machine alter its own instructions provides the mechanism for this.” Prior to this lecture, Turing published On Computable Numbers, with an Application to the Entscheidungsproblem (1936), in which he theorized that it was possible to build a machine that could compute anything a human could compute; thus, the concept of artificial intelligence (AI) was born.

In discussing artificial intelligence (AI), let’s establish a working definition appropriate for this article. Simply put, AI is a field that combines computer science with robust datasets to enable problem-solving. There are two types of AI: (a) Weak AI, or artificial narrow intelligence, and (b) Strong AI, or artificial general intelligence. Weak AI is trained to perform very specific, or “narrow,” tasks; Siri, Alexa, and autonomous cars are examples. Comparatively, Strong AI remains, to date, a theory in which computers would have a level of intelligence that would match and surpass that of humans. Think of some of your favorite, or not-so-favorite, science fiction movies: I, Robot, Terminator, Avengers, A.I. Rising . . . need I say more?

Notably, artificial intelligence has infiltrated most, if not all, market sectors: real estate, consumer staples, consumer discretionary, utilities, energy, industrials, financials, technology, materials, consumer services, and healthcare. Examples of such were provided earlier in this article; however, further discussion will be given to the use of AI in healthcare settings, specific to the field of speech-language pathology.

Regardless of the work setting and clientele, speech-language pathologists (SLPs) perform certain similar functions, whether working with children or adults in educational, clinical, or hospital settings. The flow of the workday may be similar in the completion of interviews, screenings, evaluations, treatments, documentation, report writing, and considerably more. A speech-language pathologist’s caseload may increase based on the therapeutic setting, suburban or rural locale, patient or client type, staff shortages, and other factors.

For instance, a clinician employed in a rural school setting may have a higher caseload due to a scarcity of SLPs in the district, resulting in an increased workload as well. Or think of the SLP working in a hospital setting with adults presenting with strokes, dysphagia, head injuries, apraxia, dysarthria, and more. This clinician is responsible for prioritizing patients based on time of admission, diagnosis, acuity, and several other factors; interviewing the patient, family members, and staff members; gathering reports from allied professionals; completing evaluations using instrumentation (modified barium swallow studies, fiberoptic endoscopic evaluation of swallowing); and so much more! Now imagine these responsibilities increasing exponentially as an SLP treats a higher caseload of patients in a county hospital.

It’s not hard to see how artificial intelligence could lighten the workload of an overworked and overtired SLP in either of these settings. How so? Well, let me count the ways . . . AI may aid clinicians in the areas of:

a) Clinical decision-making

b) Treatment recommendations

c) Documentation accuracy

d) Documentation processes

e) Generating personalized exercises

f) Goal-writing

g) Analyzing, collecting, and sorting data

h) Providing biofeedback on real-time speech productions

i) Predicting neurological diseases

j) Improving social, emotional, and communication skills in persons with autism spectrum disorder (ASD)

The list could go on and on, or could it? Artificial intelligence has been proposed as the speech-language pathologist’s ally: a means of streamlining and automating daily processes and workflows, ensuring objective assessments, personalizing treatment plans based on each client’s background, communication profile, and progress data, and so much more. But what does this really mean for us everyday speechies?

According to the American Speech-Language-Hearing Association (ASHA) Journals Academy (Berisha & Liss, 2020), AI will result in clinicians:

1. Becoming more efficient in relation to documentation, thus allowing for more time to be spent with clients.

2. Having increased accessibility to new tools for evaluating clients, thus improving communication outcomes.

3. Participating more in the early identification of disorders and diseases, and delivering more efficacious treatments as sensitive, objective tests are used to monitor disease/disorder progression.

To date, many of these predictions are aspirational and have yet to be realized by speech-language pathologists in their everyday practices. Arguably, “the language used to describe AI technologies in scientific literature, the popular press, and advertising materials often creates unreasonable expectations of such systems’ current capabilities. . .” (Morris, 2020, p. 2). As clinicians, we need to temper our excitement with a realistic view of what AI can currently do. And before you go pooh-poohing this consideration, just know that I say this as a tech lover who is always ready to adopt the next fashionable tech trend or craze, as any avid tech groupie would be.

Notably, a preponderance of research indicates that persons with disabilities are rarely included in the design and development of AI systems, resulting in key ethical challenges related to bias and fairness, privacy and data protection, accountability and transparency, social impact and job displacement, as well as safety and reliability.

To offset some of these ethical concerns, it is important that we, as discerning clinicians, promote the incorporation of diverse perspectives during the AI design and development process; work with interested parties (government, organizations, industry stakeholders) to establish regulatory frameworks for AI systems; collaborate with AI developers to discuss ethical implications; and continuously monitor and evaluate AI systems to ensure performance quality.

As speech-language pathologists, we are responsible for protecting our clients. The principles of the American Speech-Language-Hearing Association’s (ASHA) Code of Ethics that may best apply are:

· Principle of Ethics I: Individuals shall honor their responsibility to hold paramount the welfare of persons they serve professionally or who are participants in research and scholarly activities.

· Principle of Ethics IV: Individuals shall uphold the dignity and autonomy of the professions, maintain collaborative and harmonious interprofessional and intraprofessional relationships, and accept the professions’ self-imposed standards.

This professional series on artificial intelligence in speech therapy will focus on ethical considerations as related to (a) privacy, (b) errors, (c) expectation setting, (d) simulated data, (e) inclusivity, (f) bias, and (g) social acceptability.

REFERENCES

Achenbach, J. (2015). What ‘The Imitation Game’ didn’t tell you about Turing’s greatest triumph. The Washington Post. Retrieved August 29, 2023, from https://www.washingtonpost.com/national/health-science/what-imitation-game-didnt-tell-you-about-alan-turings-greatest-triumph/2015/02/20/ffd210b6-b606-11e4-9423-f3d0a1ec335c_story.html

Ali, A. (2023). The ethics of artificial intelligence: Navigating the moral challenges of AI systems. Medium. Retrieved August 29, 2023, from https://asharibali.medium.com/the-ethics-of-artificial-intelligence-navigating-the-moral-challenges-of-ai-systems-ad1c080421f1

American Speech-Language-Hearing Association (2023). Code of Ethics. Retrieved August 29, 2023, from https://inte.asha.org/Code-of-Ethics.

Berisha, V., & Liss, J. (2020). How will artificial intelligence reshape speech-language pathology services and practice in the future? ASHA Journals Academy. Retrieved August 29, 2023, from https://academy.pubs.asha.org/2020/08/how-will-artificial-intelligence-reshape-speech-language-pathology-services-and-practice-in-the-future/

Morris, M. (2020). AI and accessibility: A discussion of ethical considerations. Microsoft Research. Retrieved August 29, 2023, from https://arxiv.org/pdf/1908.08939.pdf

TechySLP

Clearview Speech and Consulting Services, PLLC

www.clearviewspeech.com

contact@clearviewspeech.com

https://www.instagram.com/nikosidarnell/

https://www.linkedin.com/in/dr-nikosi-darnell/

https://twitter.com/TechySLP