Artificial intelligence for students in postsecondary education

AI-based apps can facilitate learning for all post-secondary students and may also be useful for students with disabilities. Here we share some reflections from discussions that took place during two advisory board meetings on the use of such apps for students with disabilities at the post-secondary level.


Introduction
Intelligent technologies (such as smartphones and tablets which incorporate principles of universal design) have the potential to increase the inclusion of students with disabilities and other diverse learners in different aspects of post-secondary education. Artificial intelligence ("AI"), which for our purposes includes "computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction and use of data for complex processing tasks" (Popenici & Kerr, 2017), is in many respects the bedrock of more universally accessible engagement, because it has the potential to permit technology to adapt to the diversity and unique needs of users. While the practical and mundane benefits of artificial intelligence systems are taken for granted by society at large (e.g. speech recognition, real-time captioning (Mathur, Gill, Yadav, Mishra, & Bansode, 2017), linguistic translation (Kapoor, 2020), or organizing tools such as calendars and "to do" lists (Canbek & Mutlu, 2016)), often overlooked are the potential benefits to specific populations who would otherwise be dependent upon third parties for support. For example, artificial intelligence has been successfully harnessed to provide people with mental health concerns an always-available, point-in-time responsive, supplementary or intermediate support system (Inkster, Sarda, & Subramanian, 2018). The available evidence at present indicates that individuals with disabilities are conspicuously excluded from AI training models and have not yet become meaningful contributors to, or beneficiaries of, the ongoing discussions surrounding artificial intelligence and machine learning (Lillywhite & Wolbring, 2020).
Despite advances in inclusive education practices, the reality is that for many of the 10%-20% of students with disabilities (Eagan et al., 2017; Fichten et al., 2018; Snyder, De Brey, & Dillow, 2016), barriers to accessing post-secondary education persist. Inaccessible websites, commonplace at some of the world's most renowned institutions, continue to pose significant accessibility barriers (Huffington, Copeland, Devany, Parker-Gills, & Patel, 2020). In spite of increased use of automatic speech recognition, Deaf and hard of hearing students often do not have adequate access to qualified real-time captioning (Butler, Trager, & Behm, 2019). Even where captioning or interpreters are available, Deaf and hard of hearing students may lag behind their peers due to the increased cognitive load of processing the visual translation of audio information, visual attention limits associated with trying to track dispersed visual information in the classroom, difficulties engaging in discussions, and limited social interactions with peers, among other factors (Kushalnagar, 2019).
The initiative we report in this article had its genesis in prior research exploring the issue of fairness of AI for people with disabilities (in employment, education, public safety, and healthcare (Trewin et al., 2019)). Building on our previous research regarding the accessibility of information and computing technologies for post-secondary students (Fichten, Asuncion, & Scapin, 2014; Thomson, Fichten, Havel, Budd, & Asuncion, 2015) and the increasing use and utility of ubiquitous mobile technologies, such as smartphones and tablets, in post-secondary education (Fichten et al., 2019), the question arose as to the impact and adoption of new AI-based technologies. We therefore convened an advisory group to gather input from post-secondary students, consumers with disabilities, post-secondary disability/accessibility service providers, faculty, and technology experts to explore how college and university students with diverse disabilities (visual or hearing impairments, learning disabilities, attention deficit hyperactivity disorder, etc.) could benefit from AI-based smartphone and tablet apps, and how these could facilitate student academic success. In summarizing the themes that emerged during the advisory board meetings, we hope to lay the groundwork for more in-depth and targeted initiatives to objectively evaluate the effectiveness of the tools currently being used, and to explore the feasibility and potential success of future innovations within the post-secondary education context.

The Advisory Board Meetings
We invited stakeholders from within our network who could provide insights on the use of artificial intelligence at the post-secondary level from diverse perspectives. In total, this included 38 individuals: 7 students, 3 disability/accessibility service providers, 14 faculty members (some with and some without disabilities), 9 technology experts, and 5 technology users with disabilities. Two advisory board meetings were held in May 2020, using the Zoom (San Jose, CA) videoconferencing system to accommodate the different time zones in which the international attendees resided. Attendees spontaneously divided themselves roughly evenly between the two sessions.

Academic discussions surrounding the ethics of AI implementation are focused on issues surrounding the potential for discrimination or exclusion based on gender, race, language, or other identifiable characteristics. The issues of AI bias in the disability context go beyond these discussions because, unlike race or gender, there is no uniform cluster or data element that is common to everyone with a disability. Whether a particular AI integration will be useful may, for some users, depend on the degree to which additional idiosyncratic characteristics can be taken into account. In most applications, that functionality does not exist, and current research shows that the training data used to "teach" AI systems is lacking in diversity by not including individuals with disabilities (Kafle et al., 2020).
Students with disabilities may be more technologically advanced than their peers, but greater training and support are required: Students with disabilities who face daily barriers to access often develop a heightened ability to problem-solve, and this can manifest through the use of technology. Past research has shown that individuals with more significant disabilities (e.g. those who are functionally blind) may actually develop more sophisticated technological capabilities than their peers. For example, Fichten, Asuncion, Barile, Ferraro, and Wolforth (2009) found that students who were functionally blind felt significantly more comfortable utilizing technology in the classroom than students who had low vision. Nonetheless, many instances were noted whereby students who could benefit from the use of AI-enabled technologies (such as Seeing AI) were simply unaware that such tools even existed. Learning that the technologies exist and how to effectively use them is, for many, a critical step in the transition from high school to post-secondary education. The lack of training is especially problematic for students who acquire a disability later in life and who do not have previous experience with the use of such devices.
Similarly, many students are unaware of the convenience and power that could be achieved by leveraging the integrated use of AI-enabled technologies. For example, the online service "IFTTT" (If-This-Then-That) allows users to schedule and pre-program cascading events based on triggers from web-enabled services. IFTTT could be used to detect (based on GPS data from a smartphone or a Fitbit) that a user is not in the classroom where they are expected to be according to their calendar, and trigger a notification to make them aware that they are missing a class.
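The missed-class trigger described above can be sketched in a few lines. This is a hypothetical illustration of the trigger-action pattern only, not IFTTT's actual API; the course name, room, and times are invented:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CalendarEvent:
    title: str
    room: str
    start: datetime
    end: datetime

def missed_class_alert(events, current_room, now):
    """Return an alert message if the user is scheduled to be in a
    class right now but their detected location is somewhere else."""
    for ev in events:
        if ev.start <= now <= ev.end and current_room != ev.room:
            return f"Reminder: {ev.title} is in progress in {ev.room}."
    return None

# Invented example schedule; location would come from GPS in practice.
schedule = [CalendarEvent("Biology 101", "Room A-204",
                          datetime(2020, 5, 4, 9, 0),
                          datetime(2020, 5, 4, 10, 30))]

print(missed_class_alert(schedule, "Library", datetime(2020, 5, 4, 9, 15)))
```

A real service would wire the location check to a periodic trigger and push the message as a notification rather than printing it.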

Current and potential utilization of existing AI tools to facilitate post-secondary learning
Chatbots
"Chatbots" (AI-enabled text-based conversationalists) are becoming common as a means of providing front-line support to users of technology, and have the ability to provide quick answers to the most common questions that users might otherwise pose to a technical support agent. Chatbots have also been implemented in post-secondary institutions to provide on-demand answers to the many questions that students pose. Various universities have used AI-enabled chatbots to facilitate the transition from high school into university, guiding new students through the myriad of administrative matters that must be addressed at different points of the admissions cycle, such as applying for and responding to financial aid applications and enrolling/registering for courses at the appropriate time (Nurshatayeva, Page, White, & Gehlbach, 2020; Page & Gehlbach, 2017).
In each case, the implementation of an AI-enabled chatbot was found to improve outcomes for first-year students, especially for those who were struggling or particularly 'at risk'. Aside from being responsive to students' point-in-time needs, these chatbot services also permitted admissions and financial aid staff to redeploy resources to the more complex cases. Building on these past successes, several meeting attendees noted that chatbots are also being implemented at their own institutions, primarily to support the needs of distance education students. These are being integrated into existing learning management systems (e.g. Moodle) to provide answers to common student questions such as, "When is my exam?" or "How can I organize my course documents?"
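The front-line pattern behind such chatbots can be illustrated with a minimal keyword-matching sketch. Production systems use trained intent-classification models rather than hand-written rules, and the FAQ entries below are invented:

```python
# Map sets of required keywords to canned replies. In a real chatbot
# an intent classifier would replace this lookup table.
FAQ = {
    frozenset({"exam", "when"}): "Exam dates are listed under Course Tools > Assessments.",
    frozenset({"organize", "documents"}): "Use the Files tab to sort course documents by week.",
}

def answer(question: str) -> str:
    """Return the first canned reply whose keywords all appear in the question."""
    words = set(question.lower().replace("?", "").split())
    for keywords, reply in FAQ.items():
        if keywords <= words:  # every required keyword is present
            return reply
    return "Sorry, I don't know. Forwarding you to a support agent."

print(answer("When is my exam?"))
```

The fallback branch mirrors the redeployment benefit noted above: unmatched questions are the ones escalated to human staff.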

Emotional, mental health, and medical regulation
Students who experience emotional hurdles or mental health difficulties, many of which can be episodic in nature, are at a distinct disadvantage in the post-secondary environment, which has traditionally lacked the flexibility to adapt to varying or changing individual needs (Arim & Frenette, 2019). Meaningful access to mental health support services has been found to be limited for many Canadian post-secondary students, despite the fact that mental health problems including anxiety, depression, and substance abuse are particularly acute among young adults age 18-25 (Nunes et al., 2014), and are especially prevalent during the current COVID-19 pandemic (Son, Hegde, Smith, Wang, & Sasangohar, 2020).
Where such support does not exist or is more limited, several AI-informed app-based tools were identified which can provide point-in-time support to aid in emotional, mental health, and medical regulation:
• Brain in Hand is an AI-enabled professional assistant that helps with making decisions, managing symptoms of anxiety, and responding to unexpected situations. Once the individual subscribes to this application, they gain access to a personal specialist to assist in setup, accessible self-management tools, and contact with a human support network.
• Empower Me is a digital coach that operates on smart glasses, to aid individuals with autism in self-regulating by helping them to understand facial expressions and emotions.Where appropriate, it draws attention to facial and eye cues.
• SeizAlarm, My Medic Watch, and SmartWatch Inspyre can detect seizures and trigger notifications to emergency contacts, providing information on the user's location and current status.
• Woebot is a mental health "chatbot" which can provide an outlet for students with depression, and has been shown to reduce depressive symptoms by employing positive thinking precepts commonly used in traditional Cognitive Behavioral Therapy (Fitzpatrick, Darcy, & Vierhile, 2017).
Many wearable technologies, such as "smart watches", can also detect increased levels of stress or anxiety, and provide reminders to take breaks and focus on breathing exercises in response. These tools cannot supplant the intervention of professional counselling, but may be useful in times of crisis or when more structured support services are not readily available (Inkster et al., 2018).

Organizational and executive functioning aids
One of the most common areas where AI-enabled apps and tools were thought to be useful as an educational aid is personal organization and assisted executive functioning. More specifically, AI-enabled tools are being used to help coordinate the many dependent and parallel tasks that students must complete. While personal assistants such as Google Assistant are, of course, capable of giving reminders, such reminders are far more helpful if they are presented in a context-aware manner (Singh & Varshney, 2019). For example:
• Aside from relying on items in an individual's calendar, AI can detect and recognize patterns in an individual's schedule and provide appropriate notifications, such as when an individual should leave for a meeting.
• For those commuting to and from work or school, Google Maps will often learn one's usual departure time and provide an alert indicating the expected travel time on a given day as a subtle reminder of when to leave.
• Personal assistants, such as Siri and Google Assistant, can be set to provide reminders at specific times or when reaching specific places (e.g. home, the grocery store, etc.), when the user anticipates being able to respond to those reminders.
Meeting attendees noted that while some students are using Siri, Google Assistant, Alexa, and other AI-enabled apps, they are often unaware of the power and context-enabled options that could be used to provide more meaningful and useful organizational assistance.
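A context-aware reminder of the kind described above can be sketched by requiring both a time condition and a location condition to hold before a reminder fires. The places, hours, and reminder texts below are purely illustrative:

```python
def due_reminders(reminders, now_hour, current_place):
    """Return the reminders whose earliest hour has passed AND whose
    target place matches the user's current (e.g. GPS-derived) place."""
    return [text for text, (hour, place) in reminders.items()
            if now_hour >= hour and current_place == place]

# Invented examples: each reminder carries an earliest hour and a place.
reminders = {
    "Buy milk": (17, "grocery store"),
    "Submit lab report": (9, "campus"),
}

print(due_reminders(reminders, now_hour=18, current_place="grocery store"))
```

The key design point is the conjunction: a fixed-time reminder fires whether or not the user can act on it, whereas this pattern waits until the context makes the reminder actionable.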

Input mediation
Entering information into a smartphone or tablet can be difficult for students who have physical or cognitive disabilities that impede interaction with the conventional keyboard.
Several tools that mediate this interaction and help to improve the speed and efficiency of entering textual information were described by the team, including:
• SwiftKey and FlickType are AI-enabled keyboard applications that learn and adapt to match an individual's unique way of typing.
• UNI is an AI-enabled device (which requires a subscription service) that converts textual information into sign language and vice versa, including the ability to define custom and unique signs where required.
• Word prediction (which uses AI to understand context and recall common words used by a writer) is a built-in feature in most smartphone and tablet devices.
• Language processing AI applications, and linguistic revision tools such as Antidote, provide context-aware writing and revision assistance, which is particularly helpful for students with learning disabilities.
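The word-prediction feature mentioned above can be approximated with a toy bigram model that suggests the word most often seen after the current one in a user's past writing. Real keyboards use far richer language models; the sample text here is invented:

```python
from collections import Counter, defaultdict

# Invented sample of a user's past writing.
history = "see you in class see you at the library see you in class"

# Count which word follows which: a bigram frequency table.
follows = defaultdict(Counter)
words = history.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    """Suggest the most frequent continuation of `word`, if any."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else ""

print(predict("you"))  # "in" follows "you" twice, "at" only once
```

Because the table is built from the user's own text, the suggestions adapt to an individual's vocabulary, which is the core of the "learns your way of typing" behavior described above.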

Accessing visual or textual information in alternative formats
For students who are blind, have low vision, or have other print disabilities such as a learning disability that impact the ability to read printed information, tools that facilitate the conversion of text into more accessible formats (e.g. audio) are particularly valuable. Text-to-speech systems that are capable of scanning a printed page of text and reading it aloud to users date back to the 1970s, with the introduction of the Kurzweil Reading Machines (Goodrich, Bennett, De l'Aune, Lauer, & Mowinski, 1979). However, over the past decade, the proliferation of AI technologies has permitted the development of far more sophisticated "computer vision" applications, allowing apps to identify or locate objects in the environment, describe scenes and photographs, and provide real-time navigation assistance.
With respect to accessing textual information, the team identified a number of existing "apps" and technologies which facilitate this task, including:
• Seeing AI or Office Lens, which use the cameras on smartphones and tablets to have the text that is in front of the camera spoken aloud.
• Technologies that aid in summarizing long pieces of text into a more abstract form, such as SMMRY or Reddit's AutoTLDR "bot"; research continues into making such automatic summaries more useful and accurate (Dangovski, Jing, Nakov, Tatalovic, & Soljacic, 2019).
• OrCam, a voice-activated wearable technology that uses a small onboard camera to read text.
• Voice Dream Scanner, an optical character recognition tool that can use the camera of a smartphone or tablet to read short texts as well as longer multi-page documents.
• SensusAccess, which provides a self-service solution to convert a range of electronic formats into electronic braille, audio MP3, DAISY, and e-book formats.
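The summarization tools mentioned above commonly rely on extractive methods. A minimal frequency-based sketch, purely illustrative of the idea rather than any particular product's algorithm, scores each sentence by the frequency of its words and keeps the top scorers:

```python
from collections import Counter

def summarize(text: str, n_sentences: int = 1) -> str:
    """Extractive summary: keep the n sentences whose words are most
    frequent across the whole text, in their original order."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freq = Counter(w.lower() for s in sentences for w in s.split())
    scored = sorted(sentences,
                    key=lambda s: sum(freq[w.lower()] for w in s.split()),
                    reverse=True)
    kept = scored[:n_sentences]
    # Re-emit the selected sentences in document order.
    return ". ".join(s for s in sentences if s in kept) + "."
```

Real summarizers add stop-word removal, better sentence segmentation, and increasingly abstractive neural models, but the score-and-select skeleton is the same.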
Importantly, while the above tools are all specialized programs targeted at those with specific disabilities, the text-to-speech functionalities of modern smartphones, smart watches, and smart speakers can be used by any student who would benefit from bi-modal learning, or from effective proofreading, where they can listen to the text while also reading it.
It is important to note, however, that the accuracy of such applications will depend on the quality of the text that is being scanned.
For accessing non-textual information, the following tools have been utilized:
• Seeing AI, which uses the cameras on smartphones and tablets to identify objects or scenes in front of the camera.
• OrCam, a voice-activated wearable technology that uses a small onboard camera to recognize and identify known faces, and which can also identify products based on their universal bar code.
• CamFind, which can identify objects in a photo or video stream.
• Facebook and Twitter now use artificial intelligence to automatically provide image captions for photos and pictures. The quality of these descriptions is improving, and this is important for all students given the increasing frequency with which instructors and academic institutions are using social media to communicate. However, it should be noted that manual alt text descriptions should continue to be incorporated to ensure that such images are accessible to screen-reader users.
For students who may be learning in their second language (or learning a second language), translation tools such as Google Translate, Microsoft Translator, or DeepL can also facilitate access to information. AI has increased the speed and accuracy of automated translation tools, but significant limitations remain, and the impact of context and nuance in language makes it difficult to fully rely on and trust automated translations (Pantea, 2019).

Accessing auditory information in non-auditory formats
For students who are Deaf or who are hard of hearing, the need to access information from classroom (or online) lectures and videos through sign language or a textual format is acute. There are, however, others who may also benefit from access to information in non-auditory forms, including those with auditory processing deficits, as well as students who simply receive and retain information more effectively when it is in a written form. With the widespread and mainstream adoption of smart speakers and smartphone-based personal assistants (e.g. Amazon Alexa, Google Home, Apple Siri, Samsung Bixby), a significant amount of interest has been generated toward improving the quality and accuracy of speech recognition technologies. However, as Rabiner and Juang (2008) described, this remains true today:

The quest for a machine that can recognize and understand speech, from any speaker, and in any environment has been the holy grail of speech recognition research for more than 70 years. Although we have made great progress in understanding how speech is produced and analyzed, and although we have made enough advances to build and deploy in the field a number of viable speech recognition systems, we still remain far from the ultimate goal of a machine that communicates naturally with any human being.
The team described a wide range of means by which alternative representations of speech can be automatically generated:
• Public services such as YouTube and Google Slides now provide or enable the use of automatically generated captions, although the accuracy is very much dependent on the sound quality of the original production and varies significantly across languages.
• Some institutions maintain subscriptions with external service providers to generate automatic captions on pre-recorded videos.
• The Zoom videoconferencing service allows for real-time captioning by a meeting host, or through connections to external captioning providers (as well as enabling use of automatically generated captions).WebEx and Microsoft Teams have similar features.
• Most smartphone and tablet devices include a built-in feature to "dictate" (rather than type) text through their virtual personal assistant, both for sending short messages and for writing longer content (e.g. an email message, Office 365 Word document).
• Just Press Record is an AI-enabled mobile audio recording "app" that permits recording, transcription, and iCloud synchronization of voice memos across iOS devices.
• Translation tools such as Microsoft Translator or Google Translate can also be used to generate a "real time" transcript if the speaker is wearing a microphone.This permits students to both listen to the presentation and follow along with the text.
Several individuals also noted that the above tools could be used by any student who might, for example, want to talk through a plan, create an outline, or simply brainstorm ideas, without becoming fixated on the task of writing.
Concerns over privacy and security have led some to eschew the use of these built-in speech-to-text features (which rely on cloud-based technologies, where audio is sent to a remote processing center for analysis, and the text returned to the user's device) in favor of "on-device" solutions (such as Dragon Naturally Speaking), even though such technologies are less advanced and have a lower overall accuracy.
A separate concern was identified regarding the linear nature of audio recordings, and the inherent difficulty for anyone in locating specific information within such a recording.It was suggested that AI and speech recognition may provide an answer to this problem by permitting a user to "search" audio for specific words.While no specific application or technology was discussed, this approach is actually being used and further developed to predict positive COVID-19 test results based on audio recordings (and key words such as smell and taste) from telemedicine assessments (Obeid et al., 2020).
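The audio-search idea raised here can be sketched against a timestamped transcript, such as one produced by speech recognition. The transcript content below is invented, and a real application would jump the audio player to the returned offsets:

```python
# Invented transcript: (time offset in seconds, recognized text).
transcript = [
    (12.5, "today we cover photosynthesis"),
    (95.0, "the exam covers chapters one to four"),
    (310.2, "photosynthesis questions will be on the exam"),
]

def find_keyword(segments, keyword):
    """Return the time offsets of transcript segments containing keyword,
    matched case-insensitively."""
    keyword = keyword.lower()
    return [t for t, text in segments if keyword in text.lower()]

print(find_keyword(transcript, "photosynthesis"))  # [12.5, 310.2]
```

This sidesteps the linearity problem: instead of scrubbing through the recording, the student searches the text and seeks directly to the matching moments.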
Consistent with existing empirical evidence (e.g. Rohlfing, Buckley, Piraquive, Stepp, & Tracy, 2020), it was noted by individuals that mainstream speech recognition technologies do not work especially well for those with a voice or speech disorder, those who have strong accents, or who have any other condition which significantly impacts the ability to clearly vocalize and articulate words and sounds. However, some individuals suggested that, given the learning capability of these systems, those experiencing more gradual changes in their vocalization ability may afford the AI an opportunity to learn these changes over time, and therefore not experience the same degree of disconnect. Moreover, specialized speech recognition applications such as Voiceitt (formerly known as Talkitt), which are designed specifically to learn and understand non-standard speech patterns, are being actively developed (Ibrahim, 2016).

Navigation and environmental exploration tools
Finally, the role of AI-enabled GPS and navigation "apps" was discussed, largely with respect to individuals who are blind or have low vision, for whom the ability to travel independently has been associated with greater self-confidence, stronger employment outcomes, and more successful independent living (Vaughan, 1993). Several AI-enabled tools were described as being of assistance to independent travel, including:
• AIRA (Artificial Intelligence and Remote Access), a subscription-based service that pairs AI-enabled analysis with a sighted assistant (using the video camera on a smartphone) to guide individuals through the physical environment.
• BlindSquare, an AI-enabled GPS navigation app that provides outdoor navigation assistance, with the amount and nature of the feedback tailored to individual user needs and to what is most likely to be useful at a given point in time.
• Microsoft Soundscape uses AI to enable navigation through the physical environment using a series of 3D audio cues.
• Nearby Explorer Online is a free GPS application that helps students who are blind navigate the physical environment. Specific or favorite locations can be marked to provide more contextual information.
For other users, AXS Map uses artificial intelligence and crowdsourced accessibility information to help locate accessible businesses (with wheelchair ramps) and nearby accessible washrooms.

Future AI-based smartphone and tablet technologies
Probing elicited a variety of single responses about novel ways to use AI; their frequency was too low to report here. What came up more frequently, however, were calls to improve upon existing applications of AI. By far, improving and optimizing AI for captioning, live transcription, and notetaking was raised most often as holding promise to assist post-secondary students with disabilities. The section "Accessing auditory information in non-auditory formats" above provides an excellent view of the current state of this area. While auto-captioning exists, the term "craptions" has also gained popularity, referring to the fact that while machines can produce captions based on speech recognition, their quality is still not where it needs to be. The #NoMoreCraptions campaign (3PlayMedia, 2019) speaks to this problem.
The other recurring comment regarding improving existing AI use related to AI for assisting with organization and routine setting. This does not come as a surprise, given that the largest numbers of students with disabilities have cognitive, attention deficit, or learning disabilities, for which organization and structure are among the factors most critical to success (Fichten, Havel, Jorgensen, Arcuri, & Vo, 2020).

Implications for AI Developers
There are numerous implications from this discussion. First, AI developers need to include individuals at the "edges" and not just those who fit the dominant section under the normal curve (Treviranus, 2018). In the case of post-secondary students with disabilities, this means that studies are carried out in an accessible manner using oversampling of individuals with different disabilities. Also, students with disabilities need to be included in the design of AI apps from inception, as proposed by the universal design paradigm (Story, Mueller, & Mace, 1998, Chapter 3). This results in less work, and less expensive work, down the line when poorly designed apps would otherwise need to be retrofitted for accessibility.
AI developers often set out to address a need within a targeted population (e.g., tools that facilitate the conversion of text into more accessible formats for students who are blind or have low vision) without recognizing that there could be a broader application. Awareness of this may not only increase the number of end-users who can benefit, but may also make funding for product development easier to obtain.
When it comes to AI-based apps, developers should underscore the need for accessible training documents, including those available through YouTube and Google, to sensitize post-secondary students to both the existence and the potential of AI-based mobile apps that can help them in their learning.
Finally, students with disabilities should be viewed as valuable stakeholders in the area of AI development. Not only can they suggest the need for the development of AI apps for novel uses, but they can also identify what improvements are needed for existing applications of AI.

Conclusion
Assembling an advisory board of stakeholders representing students, disability/accessibility service providers, faculty members, technology experts, and technology users with disabilities from five countries resulted in an extensive repository of information regarding AI apps, not only in terms of what is available and being used, but how it is being used and by whom. Along with this comes advice about what AI apps would be welcome in the future and which existing apps need to be upgraded.
University and college students with disabilities make up a large proportion of post-secondary students. Twenty years ago, we noted a trend for technologies intended for non-disabled students to be adopted, and used in novel ways, by students with disabilities (Fichten, Barile, Fossey, & De Simone, 2000). More recently, we observed a similar cross-use trend, with technologies intended for non-disabled students being used by students with disabilities (Fichten et al., 2014). Now we have taken our first steps to gather comparable information on the use of artificial intelligence apps by college and university students with and without disabilities.