July 31, 2023
Venturing into an exciting new era in education, we're beginning to use more advanced AI technologies, particularly in the area of assessments. This transformation, as emphasized by the EU's Ethical Guidelines on the Use of Artificial Intelligence (AI) and Data in Teaching and Learning for Educators (2022) and UNESCO's guidance on AI and education (2021), isn't just about swapping out old pen-and-paper methods for shiny new tech. It's about rebuilding our approach to assessment from the ground up.
According to the AI principles from the OECD (2019), AI uses past data and a lot of computing power to make educated predictions. To do this, it relies on a large pool of data or a consistent environment for running repeated experiments. This approach allows tech-based assessments to use historical data from test-takers to build models that can complement human performance in tasks such as creating assessment items, identifying potential bias in responses, and scoring complex data from essays or game-based assessments. These advancements are in line with the Guidelines for Technology-Based Assessment (2022) from the International Test Commission and the Association of Test Publishers.
In this article, we explore the many ways AI is creating change, from how we create and manage assessments to how we score answers and provide comprehensive feedback. But the influence of AI doesn't stop with assessments. It can also decode heaps of assessment data, providing a tailor-made assessment experience for every student. While AI offers numerous benefits, it is essential to balance them with crucial concerns such as privacy and fairness. So, we also delve into the ethical side of using AI in assessment and discuss ways to navigate these issues responsibly. Lastly, we sneak a peek into the future, pondering the potential developments AI could bring to the assessment landscape. So, strap in for an exciting journey into the high-tech world of AI in educational assessment.
AI in Assessments: A School Perspective
The integration of AI in education is revolutionizing how things work at school, offering remarkable advancements particularly in the domain of educational assessment. Instead of teaching every student in the same way, AI-driven adaptive assessments are now enabling schools to personalize lessons to meet the unique needs of each student, fostering more engaging and effective learning journeys. Teachers are beginning to view "AIEd as a pedagogical instrument" rather than a replacement, using these tools to provide real-time feedback and better understand their students' confidence and motivation (Selwyn, 2021; Luckin et al., 2016).
One pivotal application is AI-driven adaptive assessments. Although adaptive assessment, a method of altering question difficulty based on a test-taker's responses, has been in use for some time, the advent of AI technology has significantly enhanced its capabilities. Rather than merely adjusting the difficulty of questions, AI-powered adaptive assessment can analyze the process used by a student to arrive at an answer, identify any hesitations, and understand the types of errors made. This ability to learn from each interaction enables AI to deliver a more personalized and nuanced understanding of a student's learning style. So, while adaptive assessment has been prevalent, AI has expanded and refined its potential to cater more effectively to individual learning needs.
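To make this concrete, here is a minimal, illustrative sketch of how an adaptive engine might pick the next question and nudge its estimate of a student's ability. The item bank, difficulty scale, and update rule are all simplified assumptions for demonstration, not any vendor's actual algorithm.

```python
# Minimal sketch of adaptive item selection (illustrative only).
# Assumes a hypothetical item bank where each item carries a difficulty
# value on the same scale as the running ability estimate.

def next_item(item_bank, ability, answered_ids):
    """Pick the unanswered item whose difficulty is closest to the current ability estimate."""
    candidates = [it for it in item_bank if it["id"] not in answered_ids]
    return min(candidates, key=lambda it: abs(it["difficulty"] - ability))

def update_ability(ability, correct, step=0.4):
    """Nudge the estimate up after a correct answer, down after an incorrect one
    (a crude stand-in for a proper IRT-based update)."""
    return ability + step if correct else ability - step

item_bank = [
    {"id": 1, "difficulty": -1.0},
    {"id": 2, "difficulty": 0.0},
    {"id": 3, "difficulty": 1.2},
]

ability, answered = 0.0, set()
for _ in range(3):
    item = next_item(item_bank, ability, answered)
    answered.add(item["id"])
    correct = item["difficulty"] <= ability   # placeholder for a real student response
    ability = update_ability(ability, correct)

print(f"Final ability estimate: {ability:.2f}")
```

In a real system the update step would come from an item response theory model and the selection rule would also weigh content coverage and exposure control, but the loop above captures the basic idea of tailoring each question to the current picture of the learner.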
Furthermore, the implementation of AI-powered natural language processing (NLP) is significantly transforming the scoring process of subjective assessments. It is enhancing the accuracy and efficiency of scoring in summative assessments, empowering real-time feedback for students in formative assessments, and assisting educators in identifying individual learning gaps through diagnostic assessments. This evolution in scoring provides in-depth and comprehensive feedback to students while freeing up teachers' time to focus on their students' overall growth and development.
Additionally, in his book, The Fourth Education Revolution, Anthony Seldon points out that AI doesn't tire or get frustrated if a student takes longer to understand a concept, and it is available whenever needed. This continuous assessment by AI captures more than a single snapshot of a student's understanding, potentially reducing the need for end-of-course exams. Ultimately, AI is facilitating personalized learning pathways that cater to each student's specific requirements, thus reshaping educational assessment to provide more tailored and effective assessment methods for both students and teachers.
Despite the immense potential of AI in educational assessments, one obstacle it faces is the incomplete digitalization, or outright absence, of digital learning and assessment processes in school settings, along with gaps in the digital literacy of educators and learners. Addressing both requires institutional support, buy-in during strategic policy-making, and, ultimately, public trust. Unless these conditions are met, fully realizing the benefits of AI described in this article will remain out of reach.
AI plays a significant role in changing how test forms are created, making the process more efficient and innovative. In the past, educators created assessment items manually, on paper or in word-processing documents. Now, AI enhances the process by generating questions automatically, including plausible distractors. Practically speaking, AI-powered algorithms analyze existing question banks, find patterns, and create new questions based on those rules or patterns, and can even generate new content and visual imagery. This helps item developers work more efficiently and produce a larger number of items at higher speed. AI can also choose and combine items from an item bank to create custom test forms, taking into account factors such as difficulty level and content coverage. This kind of automation makes test assembly easier and ensures a well-represented assessment. Additionally, AI helps validate assessment items by checking their quality and reliability; it can detect issues like biased questions or inconsistent performance, which makes the tests more trustworthy. Furthermore, AI transforms the roles and skills of the humans involved in the process, emphasizing the importance of understanding AI technology alongside traditional content knowledge and item-writing skills. Finally, human-based quality assurance should always be in place in AI-powered test construction to avoid issues around reliability and validity. By combining AI's capabilities with human expertise, we can ensure that tests meet high standards and maintain their integrity.
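As a rough illustration of the assembly step, the sketch below greedily fills a hypothetical content blueprint from a tiny item bank while steering toward a target difficulty. Real assembly engines typically rely on constraint solvers or mixed-integer programming, so the pool, blueprint, and selection rule here are assumptions for demonstration only.

```python
# Illustrative sketch of automated test assembly from a hypothetical item bank.

item_bank = [
    {"id": "A1", "topic": "algebra",  "difficulty": 0.3},
    {"id": "A2", "topic": "algebra",  "difficulty": 0.7},
    {"id": "G1", "topic": "geometry", "difficulty": 0.4},
    {"id": "G2", "topic": "geometry", "difficulty": 0.8},
    {"id": "S1", "topic": "stats",    "difficulty": 0.5},
]

blueprint = {"algebra": 1, "geometry": 1, "stats": 1}   # items required per topic
target_difficulty = 0.5

form = []
for topic, count in blueprint.items():
    pool = [it for it in item_bank if it["topic"] == topic]
    # within each topic, prefer items whose difficulty is closest to the target
    pool.sort(key=lambda it: abs(it["difficulty"] - target_difficulty))
    form.extend(pool[:count])

mean_difficulty = sum(it["difficulty"] for it in form) / len(form)
print([it["id"] for it in form], round(mean_difficulty, 2))
```

Even in this toy form, the two constraints the paragraph mentions are visible: the blueprint guarantees content coverage, and the sort toward a target difficulty keeps parallel forms comparable.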
AI is reshaping how we manage tests, bringing efficiency to new heights. Remember the old days when proctors and supervisors acted as 'guardians' against cheating in paper-based and computer-based tests? Now, with AI-powered test administration, these roles have evolved into strategic analysts, leveraging machine vision and automated collusion detection to efficiently plan resources and identify irregularities in the digital testing environment using behavior analysis tools. Moreover, text-to-speech assistive technologies can aid in training proctors and supervisors, ensuring their readiness for the dynamic AI-powered testing landscape. With this shift, the requirements for these roles are changing too: being tech-savvy is increasingly a competitive advantage, as the ability to navigate these new tools becomes essential. Plus, there's more to AI, such as Machine Learning (ML) and Deep Learning (DL), that lends a helping hand in many aspects of test administration, including scheduling tests, selecting proctors, analyzing performance, training proctors and supervisors, and even spotting errors before they occur. Overall, by leveraging AI's capabilities alongside human judgment in scheduling, proctoring, performance analysis, and training, we can pave the way for more effective, reliable, and fair test administration practices.
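As a simplified illustration of the collusion-detection idea, the sketch below flags pairs of test-takers whose answers agree suspiciously often, paying particular attention to shared wrong answers. The response data, thresholds, and similarity measure are invented for the example and are far cruder than the statistical indices production systems rely on.

```python
# Toy sketch of collusion screening: flag pairs whose answer strings agree
# unusually often, weighting identical *wrong* answers as stronger evidence.

from itertools import combinations

answer_key = ["B", "C", "A", "D", "B"]
responses = {
    "s1": ["B", "C", "A", "D", "B"],
    "s2": ["B", "A", "A", "C", "B"],
    "s3": ["B", "A", "A", "C", "B"],   # identical to s2, including the errors
}

def similarity(a, b, key):
    matches = sum(x == y for x, y in zip(a, b))
    shared_wrong = sum(x == y and x != k for x, y, k in zip(a, b, key))
    return matches / len(key), shared_wrong

for (p, a), (q, b) in combinations(responses.items(), 2):
    overlap, shared_wrong = similarity(a, b, answer_key)
    if overlap > 0.9 and shared_wrong >= 2:
        print(f"Flag pair {p}-{q}: overlap={overlap:.0%}, shared wrong answers={shared_wrong}")
```

A flag like this would never be a verdict on its own; as with the rest of test administration, it is a prompt for human reviewers to take a closer look.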
In automated scoring and feedback, AI operates as an advanced tool that assesses student responses, including complex ones, efficiently, because it employs sophisticated algorithms trained on vast datasets of human-scored responses. In practical terms, assessment technology teams design these algorithms to comprehend the nuances of context, syntax, and semantics, allowing them to evaluate new responses with similar precision through methods such as automated tagging based on response features and process data, and automated essay scoring. Crucially, AI also offers timely, personalized feedback, highlighting a student's strengths and areas for improvement. This approach not only speeds up the scoring process but also informs personalized learning strategies, enhancing student growth and understanding.
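A toy sketch of that training loop is shown below: a handful of human-scored responses are converted into text features and fit with a simple regression model. The sample responses, scores, and the choice of TF-IDF features with ridge regression are assumptions made for illustration, not a description of any operational essay-scoring engine.

```python
# Deliberately simple sketch of training an automated scorer on
# human-scored responses (toy data; real scorers use far richer linguistic
# features or fine-tuned language models).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

train_responses = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Plants eat sunlight.",
    "Photosynthesis uses light, water and carbon dioxide to make glucose and oxygen.",
    "It is about plants.",
]
human_scores = [3, 1, 4, 0]          # scores assigned by trained human raters

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_responses)
scorer = Ridge().fit(X, human_scores)

new_response = ["Photosynthesis turns light and carbon dioxide into glucose."]
predicted = scorer.predict(vectorizer.transform(new_response))
print(f"Predicted score: {predicted[0]:.1f}")
```

The pattern, however, is the same at any scale: human raters supply the gold-standard scores, the model learns to approximate them, and its predictions are monitored against continued human scoring to keep the system honest.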
Compared to the past, when student performance reports or assessment cards showed only a score or generic information with little personalization, AI-powered scoring and feedback produce comprehensive reports on assessments and learning potential. So, AI is giving assessment reports a major upgrade. Instead of just a simple score, we're now able to get a deep dive into how a student understands and handles different tasks. It's like having a detective on our side, finding patterns, tracking progress, and giving a clear picture of a student's strengths and weaknesses. These detailed reports are extremely helpful for both students and teachers. They guide learning and teaching strategies, making sure everyone's time is spent effectively and efficiently. In short, it's evidence-based decision making focused on improving learning.
AI has revolutionized test analysis by providing an advanced means of processing extensive test data swiftly and accurately. It identifies trends, patterns, and correlations, significantly enhancing the design of tests and our understanding of learner interactions. Moreover, AI-generated insights contribute to tailoring learning outcomes more precisely, thereby optimizing educational experiences and achievements. Consider AI in test analysis as an insightful expert, ceaselessly observing, analyzing, and delivering valuable knowledge to shape superior learning strategies and outcomes. In practice, by incorporating process data visualization, open-source item response theory (IRT) tools, and optimization strategies, AI-powered test analysis converts log data into visualizations and displays student activity without the delays associated with video buffering.
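For readers curious about the IRT piece, the snippet below implements the two-parameter logistic (2PL) item response function, which gives the probability of a correct answer from a test-taker's ability and an item's discrimination and difficulty. The parameter values are made up for illustration.

```python
# Two-parameter logistic (2PL) IRT model: P(correct | theta) for an item
# with discrimination a and difficulty b.

import math

def p_correct(theta, a, b):
    """2PL item response function: 1 / (1 + e^(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# How the probability of success rises with ability for one illustrative item.
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(p_correct(theta, a=1.2, b=0.5), 2))
```

Fitting these item parameters to large response datasets is exactly the kind of computation that open-source IRT tooling and modern data pipelines accelerate for psychometricians.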
As we explore the transformative potential of AI in test analysis, it's essential to consider the data we collect to continuously train AI engines for improved performance. So, we must handle this data with utmost caution, ensuring we're upholding our students' privacy and making proper use of the information. Basically, AI learns from the data we give it, and if that data is biased, the AI's grading could also be biased. As we embrace AI in educational assessment, adhering to well-thought-out rules and strategies will guide us in maximizing its benefits while safeguarding ethical and responsible use.
As AI becomes an integral part of educational assessments, various rules and guidelines have emerged to ensure responsible and ethical implementation. These rules serve as the building blocks for developing a comprehensive AI strategy in the assessment domain. Organizations and institutions have set forth principles that emphasize fairness, privacy, transparency, and accountability when using AI. By adhering to these rules, educators and assessment professionals can establish a solid foundation for their AI strategy, fostering a balance between innovation and responsible practices.
There has been a lot of discussion about the importance of ethical guidelines around the use of AI, including a call by tech leaders to pause further AI development so that a proper regulatory framework can catch up, as highlighted in the survey Chief Risk Officers Outlook: July 2023. Some companies have already started drafting initial guidelines on the use of AI, both generally and specifically in education and assessments. Microsoft, for instance, has published its own principles for developing AI responsibly, covering fairness, privacy and security, inclusiveness, transparency, and accountability. Similarly, the Russell Group, an association of leading UK research universities, has issued guidelines for using AI in education that aim to help people understand AI, use it ethically, maintain academic standards, and share good practices as the technology evolves.
In the assessment world, there's the Cambridge approach, which promotes human oversight, the use of diverse data, and being personal and transparent. The UK exam board AQA also sees the potential of AI but emphasizes that people need to trust the system and that there must be transparency and accountability. Along the same lines, a study by Cesare Aloisi (2023) suggests that the challenges of using AI in high-stakes exams, such as unreliability, bias, and lack of clarity, can reduce trust in the system, and recommends addressing these issues by integrating clear AI rules into high-level assessment policies while embracing fairness, accuracy, and explainability along the way. As people working at the intersection of educational assessment and technology, our job isn't just about incorporating AI into our processes. As rules and guidelines around AI keep changing, it's important to have a well-thought-out masterplan for using AI.
On this note, developing an AI strategy is a key step in using AI tools carefully and systematically in assessment applications. For example, the AI strategy of Vretta is to enhance numerous aspects of the assessment life cycle. This includes an AI authoring co-pilot to support item developers, automated item validation to experiment with different types of technology-enhanced items through simulations, automated test assembly to produce unique and equivalent forms, automated collusion detection to identify students who use generative AI tools (like ChatGPT) to answer questions on a test, process data visualization and modernized data pipelines to support psychometricians through their data analysis process, and many other applications of machine learning, expert systems, computer vision, and speech recognition AI techniques.
Finally, to ensure a balance between innovation and responsibility, it is critical for the AI strategy to support the integrity of expert judgment, support the development of high-quality (and consistent) assessment experiences, protect the privacy and security of personal information, and strengthen operational efficiency. All of these aim to make the assessment process fairer, more mindful of ethical considerations, and more efficient.
The future of AI in assessing students looks exciting, and it is filled with possibilities:
AI might become a teacher's pedagogical assistant, helping them understand how each student learns, offering new insights, and supporting personalized feedback.
Imagine an assessment that can not only check your answers but also understand how you arrived at them, any moments of uncertainty you experienced, and mistakes you made - these interactions create a unique understanding of how you learn.
Think of AI that gets even smarter, anticipating which topics you might find tricky and giving you a heads-up so you can practice before it becomes a problem, minimizing potential moments of anxiety.
Picture a day when virtual and augmented reality are used across different learning occasions, and we frequently see tests where students feel as if they're inside a meta reality, through a game, as in a real-life situation.
No matter how fancy it gets, the main aim of AI in assessment will always be to improve learning, make the assessment process more precise, and ensure every student gets a fair chance.
To wrap up, AI is really changing the game when it comes to educational assessments. It makes creating and managing assessments easier, makes scoring faster and more personal, and helps us understand our students better. AI is also contributing to solving socio-technological problems within the assessment cycle. But with all these developments, we need to remind ourselves to apply AI in a fair and safe way, by having human judgment complement AI-based transformations and by making consistent, strategic moves around them. Looking forward, AI will become even more immersed in education, but we need to keep our focus on improving learning, making assessments better, and ensuring everyone gets a fair go.
________________________________________
Vali Huseyn is an educational assessment specialist with extensive experience in enhancing key phases of the assessment lifecycle, including item authoring (item banking), registration, administration, scoring, data analysis, and reporting. His work involves strategic collaboration with a range of assessment technology providers, certification authorities, and research institutions, contributing to the advancement of the assessment community. At The State Examination Centre of Azerbaijan, Vali played a crucial role in modernizing local large-scale assessments. As the Head of Strategic Partnerships and Project Management Unit Lead, he co-implemented several regional development projects focused on learning and assessment within the Post-Soviet region.
Feel free to connect with Vali on LinkedIn (https://www.linkedin.com/in/valihuseyn/) to learn more about the best practices in transitioning to an online assessment environment.