At Notre Dame’s Trustworthy AI Lab for Education Summit, I gave a talk titled “Privacy-Enhancing Technologies for Educationally Focused AI”. The talk abstract is as follows:
Artificial Intelligence (AI) has great potential to aid and improve education. At the same time, the risks of AI applied to education may be substantial. Misapplication or malicious use of AI in an educational setting poses unique risks to students and teachers, who rely on correct answers and trustworthy data provenance. President Biden’s recent executive order on Safe, Secure, and Trustworthy AI called on Congress to strengthen and fund the development of cryptographic Privacy-Enhancing Technologies (PETs). Educational practitioners who interact with AI need to be aware of the nature of these technologies to effectively protect their students and institutions.
In this presentation, I will first describe actual and potential threats to security, privacy, integrity, and authenticity posed by AI used in education, including misinformation in training data, siphoning of student information, mistranslation, discrimination, and the disclosure of dangerous information to unauthorized parties. I will then discuss at a high level several PETs that can be applied to AI in education, paying particular attention to the properties of these PETs that matter most in an educational context (e.g., methods for protecting student data under FERPA).
The PETs discussed will include differential privacy, federated learning, homomorphic encryption, secure multi-party computation, and trusted hardware. Finally, I will present a comparison of the uses and relative strengths and weaknesses of these PETs as applied to AI in education. Throughout the presentation, I will draw on both hypothetical examples and real-world attacks and defenses, citing work from Notre Dame’s Data Security and Privacy Lab as well as external research.
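To make the first of these PETs concrete, below is a minimal sketch of differential privacy's Laplace mechanism applied to a hypothetical counting query over student records. The query, the count, and the epsilon value are illustrative assumptions rather than material from the talk.

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one student's
    record changes the result by at most 1), so the noise scale is 1 / epsilon.
    """
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: number of students who used an AI tutoring tool this week.
true_count = 42    # illustrative value, not real student data
epsilon = 0.5      # smaller epsilon means more noise and stronger privacy
print(laplace_count(true_count, epsilon))
```

Releasing only noised aggregates like this, rather than raw records, is one way an institution could report on AI tool usage without exposing any individual student, which ties into the FERPA considerations mentioned above.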