Author: jtakeshi

Recent Workshop Talks

While at Tokyo Tech, I have given the following talks:

  • Tokyo Workshop on Trustworthy Foundation Models | August 20, 2024 | “Advances and Challenges in Applying Trusted Hardware for LLMs”
  • Kwansei Gakuin University, Workshop on Privacy-Enhancing Technologies and Law | November 23, 2024 | “Privacy-Enhancing Technologies in American Law”

Moved to Tokyo Tech!

As of August 2024, I have started work as a Postdoctoral Researcher at the Tokyo Institute of Technology! The Institute is one of Japan’s most prestigious universities, generally ranking among the top two to four Japanese universities in STEM subjects. On a personal note, my family once lived on the west bank of the Nomi River, which today runs along the western border of the Institute’s main campus in Ookayama.

Some Paper Acceptances in 2024

I’ve had a few papers accepted recently:

  • “Summation-based Private Segmented Membership Test from Threshold-Fully Homomorphic Encryption” (PoPETs 2024), with coauthors Nirajan Koirala (first author), Jeremy Stevens, and Taeho Jung.
  • “HEProfiler: An In-Depth Profiler of Approximate Homomorphic Encryption Libraries” (J. Cryptographic Eng., accepted but not yet published), with coauthors Nirajan Koirala (co-first author), Colin McKechney, and Taeho Jung.
  • “PPSA: Polynomial Private Stream Aggregation for Time-Series Data Analysis” (SecureComm 2024), with coauthors Antonia Januszewicz (first author), Daniela Medrano Gutierrez, Nirajan Koirala, Jiachen Zhao, Jaewoo Lee, and Taeho Jung.

Spoke at ND-TALES

At Notre Dame’s Trustworthy AI Lab for Education (ND-TALES) Summit, I gave a talk titled “Privacy-Enhancing Technologies for Educationally Focused AI”. The talk abstract is as follows:

Artificial Intelligence (AI) has great potential for aiding and improving education. However, the risks of AI applied to education may be equally large. Misapplication or malicious use of AI in an educational setting poses unique risks to students and teachers who rely on correct answers and trustworthy data provenance. President Biden’s recent executive order on Safe, Secure, and Trustworthy AI called on Congress to strengthen and fund the development of cryptographic Privacy-Enhancing Technologies (PETs). Educational practitioners who interact with AI need to be aware of the nature of such technologies to effectively protect their students and institutions.
In this presentation, I will first describe actual and potential threats to security, privacy, integrity, and authenticity that AI used in education may pose, including misinformation in training data, siphoning of student information, mistranslations, discrimination, and the offering of dangerous information to unauthorized parties. I will then discuss at a high level several PETs that can be applied to AI in education, paying particular attention to the traits of these PETs that are most important in an educational context (e.g., methods for the protection of student data under FERPA).
The PETs discussed will include differential privacy, federated learning, homomorphic encryption, secure multi-party computation, and trusted hardware. Finally, I will present a comparison of the uses and relative strengths and weaknesses of these PETs applied to AI in education. Throughout this presentation, I will draw upon both hypothetical examples and real-world attacks and defenses, citing research from Notre Dame’s Data Security and Privacy Lab as well as external work.

Paper accepted to IEEE Transactions on Computers!

Our paper “Accelerating Finite-Field and Torus FHE via Compute-Enabled (S)RAM” has been accepted by IEEE Transactions on Computers! As described by TC’s Editor-in-Chief: “TC is the IEEE Computer Society’s flagship journal and is considered a lead archival publication in the field of computing. It publishes high-quality research that is timely and relevant to researchers in academia, industry, and government laboratories. Google Scholar ranks TC as the top journal in computer hardware design, and TC’s impact factor remains the highest among its competitors.”

My coauthors are Dayane Reis (U. South Florida), Ting Gong (U. Washington – Seattle), Michael Niemier (Notre Dame), X. Sharon Hu (Notre Dame), and Taeho Jung (Notre Dame).

Upcoming: Invited Talk at Seagate Research Group

On June 7, 2023, I will be giving an invited talk to the Data Trust team at Seagate Research Group! My talk, “Privacy-Preserving Computation”, will present an overview of several of my projects in the area of Data Security and Privacy, including my work at Notre Dame, Google, and Meta (formerly Facebook). Topics include hardware acceleration of homomorphic encryption, privacy-preserving contact tracing, trusted hardware, and large-scale secure data aggregation.

Paper accepted to J. Cryptology!

Our paper “SLAP: Simpler, Improved Private Stream Aggregation from Ring Learning With Errors” has been accepted to the Journal of Cryptology! The JoC is one of the best journals in cryptography, arguably the best; needless to say, I am very happy to have a publication there. My coauthors are Ryan Karl (Carnegie Mellon U.), Ting Gong (U. Washington – Seattle), and Taeho Jung (Notre Dame).