Human Rights Institute warns of the dangers of algorithms in education

The Netherlands Institute for Human Rights calls for greater awareness of the dangers of algorithms in education. Schools are increasingly using algorithms, but often lack the knowledge to use them responsibly.

The use of algorithms in education has increased significantly in recent years. Schools use, for example, educational materials adapted to a student's level, or software that can detect fraud. Although algorithms can reduce teachers' workload and provide more insight into students' learning processes, the Netherlands Institute for Human Rights points out that there are also risks. Moreover, schools often lack the knowledge to use algorithms responsibly. According to the Institute, this exposes students to an unnecessary risk of algorithmic discrimination, a warning based on research by the research agencies KBA Nijmegen and ResearchNed.

About half of Dutch primary schools use adaptive learning systems, in which learning materials are adjusted to a student's current 'level': students who regularly make mistakes are offered easier material, while students who perform significantly better on the same material are given more difficult questions. The Netherlands Institute for Human Rights warns that such software does not always assess a student's level correctly. Students with dyslexia, autism, or ADHD, for example, may answer differently from the students the learning system was trained on, causing the system to incorrectly rate them lower.
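To make the mechanism concrete, the sketch below shows the kind of rule an adaptive learning system might use to move a student between difficulty levels. It is a minimal illustration, not taken from any product mentioned in the article; the thresholds, window size, and level scale are assumptions.

```python
# Minimal sketch of an adaptive difficulty rule. Thresholds, window size,
# and the 1-5 level scale are illustrative assumptions, not from any
# specific product.

from collections import deque

class AdaptiveLevel:
    def __init__(self, level: int = 3, window: int = 10):
        self.level = level                  # current difficulty, 1 (easy) to 5 (hard)
        self.recent = deque(maxlen=window)  # rolling record of correct/incorrect answers

    def record_answer(self, correct: bool) -> int:
        self.recent.append(correct)
        if len(self.recent) == self.recent.maxlen:
            accuracy = sum(self.recent) / len(self.recent)
            if accuracy < 0.5 and self.level > 1:
                self.level -= 1   # frequent mistakes: offer easier material
            elif accuracy > 0.9 and self.level < 5:
                self.level += 1   # consistently strong: offer harder questions
            self.recent.clear()   # start a fresh observation window
        return self.level
```

A rule like this implicitly treats every wrong answer as a signal of lower ability. For a student with dyslexia, spelling-related mistakes would drag the measured accuracy down and lower the assigned level, even if the underlying understanding is fine, which is exactly the misjudgment the Institute describes.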

Algorithms are also used in higher education. Anti-fraud software, for example, is widely used at colleges and universities. This carries the risk of disadvantaging certain groups of students, such as those for whom Dutch is not their mother tongue. The algorithms are more likely to "think" that these students have used an AI chatbot such as OpenAI's ChatGPT or Google's Gemini, so the software is more likely to flag them as fraudsters. Research also shows that the facial detection algorithms in anti-fraud software work less effectively on people with darker skin tones, which can lead to discrimination.
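The kind of disparity described here can be made visible with a straightforward per-group audit of a detector's false-positive rate. The sketch below is illustrative only: the group labels and sample data are hypothetical and are not part of the cited research.

```python
# Sketch of a simple fairness audit for a fraud detector: compare
# false-positive rates between groups (e.g. native vs. non-native
# Dutch speakers). Data and group labels are hypothetical.

from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, flagged_by_detector, actually_cheated)."""
    flagged = defaultdict(int)   # innocent students flagged, per group
    innocent = defaultdict(int)  # innocent students total, per group
    for group, is_flagged, cheated in records:
        if not cheated:
            innocent[group] += 1
            if is_flagged:
                flagged[group] += 1
    return {g: flagged[g] / innocent[g] for g in innocent if innocent[g]}

# Hypothetical example: innocent non-native writers are flagged more often.
sample = [
    ("native", False, False), ("native", False, False), ("native", True, False),
    ("non-native", True, False), ("non-native", True, False), ("non-native", False, False),
]
print(false_positive_rates(sample))  # e.g. {'native': 0.33, 'non-native': 0.67}
```

If the false-positive rate differs markedly between groups, as in this made-up sample, the detector is disadvantaging innocent students in one group, regardless of its overall accuracy.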

The Netherlands Institute for Human Rights warns of discrimination and unequal opportunities. The Institute therefore believes that educational institutions should consider whether the technology actually contributes to quality education before deploying algorithms. It also notes that it is difficult for teachers and school leaders to consistently look critically at the tools they use: "People tend to believe what the computer says." Moreover, the available information about how algorithms work is often insufficient, which makes a proper evaluation difficult. The Institute therefore argues that the Dutch Ministry of Education, Culture and Science should support schools in this matter, for example by conducting research into the effects of digital systems, providing information about the risks of algorithms, and setting guidelines that such systems must meet in order to prevent discrimination.
