Artificial Intelligence (AI), seemingly all of a sudden, has permeated our world. New advances in the technology have the potential to assist students inside and outside the classroom and to increase their independence and chances of success. Educators can use AI to support students; students can use it to enhance communication skills and to help with independence, planning, and organization. However, AI also carries risks, some related to the content it creates and others stemming from a lack of understanding of a new technology. Those risks can be mitigated with education, clear guidelines, and appropriate policies. Ultimately, AI can and should be an effective tool to support the success of neurodivergent students.
AI has been part of our common experience for some time now. Most of us are familiar with everyday uses such as mapping locations through a GPS or mobile app, or the voices of Siri and Alexa setting timers and alarms or telling us who wrote a song we may have forgotten. More recently, AI has been turning up in new places, sometimes without our even knowing it. AI technologies are estimated to create $13 trillion in value by 2030 while displacing some 800 million jobs in that same timeframe (Hutson et al., 2022). In educational settings, there is a broad spectrum of responses to the availability of AI. AI has the potential to positively impact learning outcomes, but many view it with ambivalence or even open hostility (Hutson et al., 2022). A 2018 report cites such positives as better learning outcomes, increased access to education, higher retention, lower costs, and less time to complete schooling (Klutka et al., 2018).
In the classroom, AI can be used to create adaptive learning experiences, personalizing instruction to support individual needs based on learning styles, interests, and limitations. Because content difficulty and delivery can be adjusted in real time, the potential for students to overcome learning obstacles is significant (Rajaratnam, 2024). All of this can be considered an assistive use of AI.
AI’s biggest promise is its generative use: the ability to create content from prompts. AI can write essays, create art, and answer writing prompts, and it can generate an email or letter when provided with minimal information. For a student who has difficulty communicating verbally or in detail, asking AI to draft an email to a teacher to get clarity on an assignment or to raise a particular challenge offers real support. For some students, learning challenges themselves can be paralyzing or foster procrastination; having a reliable tool that lessens that challenge can be quite beneficial.
However, the line between assistive and generative uses of AI is not always clear, either to the user or to the reader. Content generated by AI tools like ChatGPT, Claude, Google Gemini, and Microsoft Copilot does not cite the primary sources on which it relies. Those tools draw on information gathered from across the internet and summarize it without regard for its source, age, credibility, or reliability. Some web browsers have even embedded AI in such a way that it is not clear whether it is being used at all. The risks for students, therefore, are many: the learning that comes from the process of research and writing is lost, research and analytical skills go unbuilt, and the content itself might be flawed or completely wrong. There can, however, be a benefit to using AI as the first step in a research and writing project: it can generate an outline that, in turn, serves as a step-by-step guide to the research that follows.
Much of the skepticism, and even hostility, toward the use of Artificial Intelligence in education stems from these risks. The use of anti-plagiarism software is commonplace in educational institutions (Hutson et al., 2022). The most widely used product is Turnitin, but the reliability of its AI detection has been questioned (Fowler, 2023), a shortcoming that has fueled a rise in academic dishonesty conflicts between students and educators.
In addition, deploying such technologies without appropriately educating students, and then adjusting policies in response to suspected improper use of AI, may create a barrier to accessible education for neurodivergent students. It is common for educational institutions to have policies addressing “plagiarism” or “cheating,” and both terms are commonly understood and usually defined in academic integrity policies. The discipline for violating those policies can be severe, including failing grades, failed classes, and dismissal from the school. These consequences are particularly devastating in higher education, where every credit carries an increasingly high financial cost and the time to earn those credits may be limited to the four years a student is able to live away from home. Adding classes or making up credits can be costly and time-prohibitive, and a dismissal can irreparably damage a student’s future career prospects.
The potential for misunderstandings that lead to these situations is currently high, but it can and must be mitigated proactively. Beyond professional development in AI, educators must gain an understanding of their students’ perceptions of AI (Feedback Fruits). One suggestion is to use pre-course surveys to learn how students are already using AI, as well as their questions, misconceptions, and expectations for its use (Feedback Fruits).
Defining the norms and boundaries around the use of AI must also be a top priority for educators (Feedback Fruits): in short, crafting clear and appropriate policies that reflect the capabilities and potential misuse of AI without being draconian in their application. As with almost everything in education, sound policies should begin in the classroom. There should be clear written ground rules for what is an acceptable use of AI, what is not, and what to do when the distinction is unclear. There should be clear processes for constructive conversations around suspected misuse, so that the circumstances are fully investigated before conclusions are reached. The definitions of cheating, academic dishonesty, and plagiarism should be examined and discussed in the context of AI. Disciplinary measures should likewise be revisited to provide transparency and accountability for students who knowingly violate policies, while allowing for education and support for those who do not.
Assistive uses of AI, however, have tremendous potential to improve academic success for neurodivergent students, as well as to help them gain independence and support daily living and social skills. AI can create digital calendars and to-do lists and set reminders and alarms to get those tasks done. It can be used to order food, arrange transportation, and purchase other necessary supplies, sometimes with automatic replenishment features. AI can create cover letters and resumes; indeed, most job applicants are already using these tools, so AI levels the playing field, at least to get applicants through the door. It can even create recipes with step-by-step instructions for cooking a meal. Studies are also underway on how personalized assistive technology can analyze the behaviors and responses of autistic individuals in order to create tools that respond appropriately to specific needs, including physical and mental health needs (see Iannone & Giansanti, 2023).
In sum, a thoughtful and appropriate examination of the use of AI as a tool for neurodivergent students can create more benefit than risk and, ultimately, lead to academic and personal success for those students.
Tara C. Fappiano is an educational advocate who works frequently with neurodivergent students in higher education, offering transition services, assisting with accommodation requests and disability services, and supporting students when conflicts arise, including academic dishonesty and related disputes, to help them stay on, or return to, the path to academic success. For more information, visit www.tarafappiano.com or email tcf@tarafappiano.com.
References
Fowler, G. (2023, June 2). Detecting AI may be impossible. That’s a big problem for teachers. The Washington Post. https://www.washingtonpost.com/technology/2023/06/02/turnitin-ai-cheating-detector-accuracy/
Hutson, J., Jeevanjee, T., Vander Graaf, V., Lively, J., Weber, J., Weir, G., Arnone, K., Carnes, G., Vosevich, K., Leary, M., & Edele, S. (2022). Artificial intelligence and the disruption of higher education: Strategies for implementation across discipline. Creative Education, 13(12). https://doi.org/10.4236/ce.2022.1312253
Iannone, A., & Giansanti, D. (2023). Breaking barriers—The intersection of AI and assistive technology in autism care: A narrative review. Journal of Personalized Medicine, 14(1), 41. https://doi.org/10.3390/jpm14010041
Klutka, J., Ackerly, N., & Magda, A. J. (2018). Artificial intelligence in higher education: Current uses and future applications. Learning House.
Feedback Fruits. (n.d.). Leading in the age of AI: A comprehensive guide for higher education.
Rajaratnam, V. (2024, February 22). AI in higher education: Potentials, pitfalls, and strategies for implementation. AI in Academia. https://www.linkedin.com/pulse/ai-higher-education-potentials-pitfalls-strategies-rajaratnam-1kxvc/