Research Interests
My interests are shifting, and I am now focused on more theoretical questions. If you are a student interested in the theory of computation and theoretical cryptography, please reach out! I am looking to advise PhD students in these areas.

Until recently, I was interested in the problem of computing on encrypted data. To maintain privacy and security, it is increasingly important that our information remain encrypted not just at rest, but at all times. We have to balance this need with the desire that our data remain useful. My research looks at how we can achieve both goals. I am interested both in the foundational aspects of this question (what is even possible?) and in how we can make such techniques practical in the real world.
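For readers new to the area, here is a minimal Python sketch of the simplest version of this idea. It is purely illustrative (additive secret sharing between two servers, not a construction from my papers): each user's value is split into two random-looking shares, and the servers can compute a sum without ever seeing an individual input.

    import secrets

    P = 2**61 - 1  # a public prime modulus; all arithmetic is mod P

    def share(x):
        """Split x into two additive shares, each uniformly random on its own."""
        r = secrets.randbelow(P)
        return r, (x - r) % P

    def reconstruct(s0, s1):
        return (s0 + s1) % P

    inputs = [12, 7, 30]                 # users' private values
    shares = [share(x) for x in inputs]

    # Each server locally adds up the shares it holds; addition requires no
    # interaction, and each server's view is a list of random field elements.
    server0 = sum(s for s, _ in shares) % P
    server1 = sum(s for _, s in shares) % P

    assert reconstruct(server0, server1) == sum(inputs) % P   # -> 49

Extending this to multiplication, and to servers that actively misbehave, is where the real protocol design begins.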
Teaching
CS693: Privacy and Security of AI (Spring 2026)
CS483: Analysis of Algorithms (Fall 2024)
CS487/CS587: Introduction to Cryptography (Spring 2022)
CS330: Formal Methods and Models (Fall 2021)
CS795: Topics in Privacy, Anonymity and Fairness (Fall 2018)
CS600: Theory of Computation (Spring 2018)
ISA562: Information Security, Theory and Practice (Fall 2017)
CS795: Introduction to Cryptography (Fall 2016)
About Me
I joined George Mason University as an assistant professor in Fall 2015. From 2012 until 2015, I was a research scientist at Applied Communication Sciences (ACS), where I did research in cryptography and cybersecurity. Prior to that, I was a postdoc at Columbia University with Tal Malkin, as a recipient of the Computing Innovation Fellowship. I received my PhD in July 2010, advised by Jonathan Katz in the computer science department at the University of Maryland. Here's my curriculum vitae (PDF).
Publications
Below are some of my favorite papers that I've written. See DBLP and Google Scholar for the full list.
The More The Merrier: Reducing the Cost of Large Scale MPC
Eurocrypt, 2021.
The paper will be available for download soon.
Secure multi-party computation (MPC) allows multiple parties to perform secure joint computations on their private inputs. Today, applications for MPC are growing, with thousands of parties wishing to build federated machine learning models or trusted setups for blockchains. To address such scenarios, we propose a novel MPC protocol that maximizes throughput when run with large numbers of parties. In particular, our protocol has both communication and computation complexity that decrease with the number of parties. We show that, when used as a triple-generating service to produce offline material for general MPC protocols, our construction achieves a total throughput of 320 million triples per second when run with approximately 1 million parties. Our protocol builds on prior protocols based on packed secret-sharing, introducing new approaches to optimally leverage packed sharing for general (unpacked) computation.
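For intuition, here is a minimal sketch of packed Shamir secret sharing, the building block mentioned in the abstract. The field size and parameters below are illustrative choices, not values from the paper: K secrets are embedded in one polynomial of degree K + T - 1, each of the N parties holds a single evaluation, and adding share vectors adds the packed secrets componentwise.

    import random

    P = 2**31 - 1            # public prime modulus (illustrative)
    K, T, N = 3, 2, 8        # pack K secrets, tolerate T corruptions, N parties
    SECRET_PTS = list(range(1, K + 1))           # x-coordinates holding secrets
    SHARE_PTS = list(range(K + 1, K + N + 1))    # x-coordinates given to parties

    def interpolate(points, x):
        """Lagrange-interpolate the unique low-degree polynomial through
        `points` and evaluate it at x, working mod P."""
        total = 0
        for xi, yi in points:
            num, den = 1, 1
            for xj, _ in points:
                if xj != xi:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    def share(secret_vec):
        """Pin the K secrets plus T random points; this fixes a polynomial of
        degree K + T - 1, and each party receives one evaluation of it."""
        pts = list(zip(SECRET_PTS, secret_vec))
        pts += [((-(i + 1)) % P, random.randrange(P)) for i in range(T)]
        return [interpolate(pts, x) for x in SHARE_PTS]

    def reconstruct(shares):
        pts = list(zip(SHARE_PTS, shares))[: K + T]   # any K + T shares suffice
        return [interpolate(pts, x) for x in SECRET_PTS]

    a, b = [1, 2, 3], [10, 20, 30]
    summed = [(x + y) % P for x, y in zip(share(a), share(b))]
    assert reconstruct(summed) == [11, 22, 33]    # shares add componentwise

A single share here stands in for a whole vector of values, which is what drives the per-party costs down as the number of parties grows.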
Download.
We explore a new security model for secure computation on large datasets. We assume that two servers have been employed to compute on private data that was collected from many users, and, in order to improve the efficiency of their computation, we establish a new tradeoff with privacy. Specifically, instead of claiming that the servers learn nothing about the input values, we claim that what they do learn from the computation preserves the differential privacy of the input. Leveraging this relaxation of the security model allows us to build a protocol that leaks some information in the form of access patterns to memory, while also providing a formal bound on what is learned from the leakage. We then demonstrate that this leakage is useful in a broad class of computations. We show that computations such as histograms, PageRank and matrix factorization, which can be performed in common graph-parallel frameworks such as MapReduce or Pregel, benefit from our relaxation. We implement a protocol for securely executing graph-parallel computations, and evaluate the performance on the three examples just mentioned above. We demonstrate marked improvement over prior implementations for these computations.
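The following toy Python sketch conveys the flavor of differentially private access patterns; it is a generic padding mechanism under illustrative parameters, not the protocol from the paper. Real accesses to each memory bin are padded with dummy accesses drawn from shifted, truncated discrete-Laplace noise, so the counts an observer sees reveal only a bounded amount about any single input.

    import math, random

    def discrete_laplace(eps):
        """Two-sided geometric ("discrete Laplace") noise with parameter eps,
        sampled as the difference of two geometric random variables."""
        alpha = math.exp(-eps)
        def geometric():
            # failures before the first success, with success probability 1 - alpha
            u = 1.0 - random.random()          # u is in (0, 1]
            return int(math.log(u) / math.log(alpha))
        return geometric() - geometric()

    def padded_access_counts(true_counts, eps, shift):
        """Add shift + noise dummy accesses to each bin. Dummies cannot be
        negative, so the noise is truncated at -shift; the shift makes that
        truncation rare (the rare failure is the "delta" in (eps, delta)-DP)."""
        return [c + max(0, shift + discrete_laplace(eps)) for c in true_counts]

    # e.g., per-vertex access counts arising in a graph-parallel computation
    true_counts = [5, 0, 12, 3]
    print(padded_access_counts(true_counts, eps=0.5, shift=20))

Bounding the leakage this way, rather than hiding access patterns entirely with ORAM, is what buys the efficiency gains the abstract describes.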
Complete Fairness in Multi-Party Computation without an Honest Majority
Theory of Cryptography Conference (TCC), 2009.
Gordon et al. recently showed that certain (non-trivial) functions can be computed with complete fairness in the two-party setting. Motivated by their results, we initiate a study of complete fairness in the multi-party case and demonstrate the first completely-fair protocols for non-trivial functions in this setting. We also provide evidence that achieving fairness is "harder" in the multi-party setting, at least with regard to round complexity.
Complete Fairness in Secure Two-Party Computation
ACM Symposium on Theory of Computing (STOC), 2008.
In the setting of secure two-party computation, two mutually distrusting parties wish to compute some function of their inputs while preserving, to the extent possible, various security properties such as privacy, correctness, and more. One desirable property is fairness, which guarantees that if either party receives its output, then the other party does too. Cleve (STOC 1986) showed that complete fairness cannot be achieved in general in the two-party setting; specifically, he showed (essentially) that it is impossible to compute Boolean XOR with complete fairness. Since his work, the accepted folklore has been that nothing non-trivial can be computed with complete fairness, and the question of complete fairness in secure two-party computation has been treated as closed since the late '80s.

In this paper, we demonstrate that this widely held folklore belief is false by showing completely-fair secure protocols for various non-trivial two-party functions, including Boolean AND/OR as well as Yao's "millionaires' problem". Surprisingly, we show that it is even possible to construct completely-fair protocols for certain functions containing an "embedded XOR", although in this case we also prove a lower bound showing that a super-logarithmic number of rounds is necessary. Our results demonstrate that the question of completely-fair secure computation without an honest majority is far from closed.
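To see why fairness is delicate, here is a toy Python simulation in the spirit of Cleve's argument (illustrative only, not a protocol from the paper): if the output bit is XOR-shared and the parties simply exchange shares, whoever receives the other's share first can abort and leave the honest party knowing nothing.

    import secrets

    def xor_share(bit):
        """Split one output bit into two XOR-shares, each uniformly random alone."""
        s = secrets.randbits(1)
        return s, s ^ bit

    output_bit = 1                      # the value both parties are owed
    share_a, share_b = xor_share(output_bit)

    # Round 1: someone must speak first; say A sends its share to B.
    print("B learns the output:", share_a ^ share_b)   # -> 1

    # A malicious B now aborts instead of responding. A never receives
    # share_b, and A's view (share_a alone) is a uniformly random bit,
    # independent of the output. Gradual-release protocols can shrink this
    # advantage round by round, but Cleve showed it cannot be eliminated
    # for XOR; the paper characterizes functions that escape this barrier.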
On Fairness in Secure Computation
PhD Dissertation, 2010
Download.
Secure computation is a fundamental problem in modern cryptography in which multiple parties join to compute a function of their private inputs without revealing anything beyond the output of the function. A series of very strong results in the 1980s demonstrated that any polynomial-time function can be computed while guaranteeing essentially every desired security property. The only exception is the fairness property, which states that no player should receive their output from the computation unless all players receive their output. While it was shown that fairness can be achieved whenever a majority of players are honest, it was also shown that fairness is impossible to achieve in general when half or more of the players are dishonest. Indeed, it was proven that even Boolean XOR cannot be computed fairly by two parties.

The fairness property is both natural and important, and as such it was one of the first questions addressed in modern cryptography (in the context of signature exchange). One contribution of this thesis is to survey the many approaches that have been used to guarantee different notions of partial fairness. We then revisit the topic of fairness within a modern security framework for secure computation. We demonstrate that, despite the strong impossibility result mentioned above, certain interesting functions can be computed fairly, even when half (or more) of the parties are malicious. We also provide a new notion of partial fairness, demonstrate feasibility of achieving this notion for a large class of functions, and show impossibility for certain functions outside this class. We consider fairness in the presence of rational adversaries, and, finally, we further study the difficulty of achieving fairness by exploring how much external help is necessary for enabling fair secure computation.