**Speaker B:**
Hello and welcome back to the Strange Water Podcast.
**Speaker C:**
Thank you for joining us for another great conversation.
**Speaker B:**
Building anything related to zero-knowledge cryptography is incredibly challenging. Beyond the already huge task of building and deploying a working product in a space where a single oversight could easily become the next nine-figure hack, you need to implement the most cutting-edge, esoteric mathematics ever invented. Mathematics so new that even last quarter's ideas might be considered last generation. Today's guest is working on addressing this problem: abstracting away the implementation of ZK cryptography so that builders can focus on building applications instead of wading through arcane number theory. Jason Morton is building EZKL, pronounced "Ezekiel," to make creating and deploying a ZK prover and verifier as simple as possible. EZKL takes a high-level description of a computer program and builds out all the components you need in order to make it verifiable via zero knowledge. If you have any interest in ZK, from passing curiosity to already building a ZK-powered empire, this episode is for you. First, Jason will help us understand the context that has created the renaissance of activity in ZK that we have today. Then, through explaining EZKL, he'll sketch out what a ZK-based system looks like and how it's implemented. And finally, we end with a huge conversation: what is the purpose of ZK, and what kind of world does it enable? Already excited about ZK?
**Speaker C:**
Get ready.
**Speaker B:**
This episode is going to show you where ZK is going. One more thing before we begin: please do not take financial advice from this or any podcast. Ethereum will change the world one day, but you can easily lose all of your money between now and then. All right, enough from me.
**Speaker C:**
Let's get to the conversation. Jason, thank you so much for joining me on the Strange Water podcast.
**Speaker A:**
Great to see you, Rex.
**Speaker C:**
So before we really dive into the ZK cryptography, I'm a huge believer that the most important part of every conversation is the people in it. And so with that as a frame, can you tell us a little bit about who you are and what brought you into the world of cryptography?
**Speaker A:**
Yeah, so I was a math professor until pretty recently, and what I studied was applied algebraic geometry: you take things like deep learning models and turn them into systems of polynomial equations. My first paper about deep learning and systems of polynomial equations was in 2008, when deep learning wasn't even on GPUs yet; we were already trying to study the math of it. Then I got into a DARPA program where we were trying to do this transformation. My role in the DARPA program I would characterize as: these deep learning models are working great in practice, but do they work in theory? Which is a problem that, for the most part, nobody cares about. Right. But we found it interesting, and so we worked on that. And that got me thinking about circuits. What I described as turning this into a system of polynomial equations is kind of what we call arithmetization now in ZK. So I was thinking about that a lot from different angles, though not from a cryptographic angle; I was sometimes on the dissertation committees of people who did cryptography, but I was not myself a cryptographer. And then a separate thread for me was cryptocurrency: getting into Bitcoin, getting into Ethereum. I lost money on Mt. Gox to start, way back in 2011; that went to zero, but it was still really cool. Then I got into Ethereum around 2015, when it was new, and got very excited about that. That was actually the same time I learned about zero-knowledge proofs, but I didn't really put all these threads together until relatively recently, with the happy coincidence of things coming together.
**Speaker C:**
Yeah. The more I talk to people in this space, and especially people with an academic background in the space, the more I'm realizing what is truly special about crypto, outside of the math and the applications and all this stuff. What's happening in this space is that theory and academics are touching the bleeding edge of innovation and company building in a way that has just never happened before.
**Speaker A:**
It is unbelievably exciting. I mean, as a mathematician, one of the things you try to do, and I was an applied mathematician, is work that matters. And usually your application is kind of like, well, in theory, this could eventually impact this area. Here, it's like you go off and do some math, and suddenly your system is 50% faster or 100% faster. It's an actual difference in terms of what people can do, and you have people clamoring to use it. That immediate translation of doing some new math and deploying it is incredibly satisfying.
**Speaker C:**
Yeah. My near-crypto origin story, which I then missed for 10 years, was in 2012. I was studying computer science at Stanford, and I had read the Bitcoin white paper, and I thought, kind of cool, computer money for computer money's sake, whatever. And I was in Intro to Number Theory, taught by Professor Dan Boneh, and I sat there thinking, why did I challenge myself on this? And his end-of-the-quarter examples were all about quantum, because at the time, cryptocurrency wasn't really...
**Speaker A:**
Yep, yep. Right.
**Speaker C:**
...the flashy, sexy thing. And so I left; I went and worked in the beer manufacturing industry for five years, and then I did a bunch of stuff until eventually I ended up back here. But I was just at the Stanford Blockchain Conference in August or September, and yeah, it's a small world. I was sitting there literally watching professors read out what their students were finding, new algorithms or whatever, and then watching people try to form companies from that direct interaction. And fortunately, 10 years later, I got this moment where I was able to meet with both Professor Boneh and Professor David C. And both of them made the same comment: first, I wrote a paper in this realm, and within six months it was a company, and I thought, that's cool, that's a fluke. It was after it happened the second time that I realized, okay, there's something very interesting happening here. And you're the first person since then who would be able to appreciate a comment like that, so I'm happy I was able to share that story. But yeah, there's something pretty crazy going on here.
**Speaker A:**
It is. It's really crazy. I'd say one of the interesting things happening in cryptography, driven by blockchain even though it's not always directly connected, is that the research happening in ZK is moving so quickly that sometimes people don't even have time to write it into a paper. By the time you write the paper, it's so out of date that you have to keep up with the bleeding edge some other way. And I mean, that's great and exciting. It's also slightly dangerous, because there aren't as many eyes on it. On the other hand, once you throw it into a production system, in some sense you have lots of eyes on it; you have lots of volunteer auditors, let's say, out there trying to find vulnerabilities. So yeah, that's another form of peer review, in a sense.
**Speaker C:**
Yeah, hackers. Hackers are part of the ecosystem too.
**Speaker A:**
Right.
**Speaker C:**
But as per your comment, as part of your origin story, you were interested in the problem of: this works in practice, let's make sure it works in theory. Something I was so struck by, listening to the StarkWare guys read out where they are and how far they've gotten at the Stanford Blockchain Conference, is that some of the breakthroughs they're making right now are things like being able to mathematically prove that the system is secure. And you hear that and you're like, wait, or sorry, not even secure, but performant, and all of these things where you think, oh my God, I can't believe we didn't know this yet. And it's like, well, this technology is so advanced, it's coming straight off of academic papers and into practice. It's crazy here. So anyway, let's go back to our story. We were talking about the first time you heard about ZK. So talk to me about that.
**Speaker A:**
Yeah, the first time I heard about ZK was actually at an Air Force research meeting, an AFOSR meeting, in 2015, because I was working on some quantum distributed systems algorithms. And I heard about it, and of course, at the time it was mostly talked about as probabilistically checkable proofs, mostly in this context of: you don't trust the server, and you want to outsource your computation to Amazon, but you don't trust that Amazon will do the right thing. I think at the time the overhead was so enormous that it was hard to make a case for it, because you're like, well, if we don't trust Amazon, we're done; we have to turn off 30% of the Internet or something. Right? It's quite a disaster. So that made it hard for me to see the point. But then, in the context of the blockchain, the asymmetry starts to make sense. A zero-knowledge proof lets you take computation from one machine, pull it to a different machine, and then the first machine can trust it. And the analogy I like to use is more than an analogy: an example of a zero-knowledge proof is a digital signature, like ECDSA. I put in some message, or the hash of the message; I have a private key; I run this algorithm that everyone knows; I get out a signature. Then anyone can check that signature and believe I really knew the secret and I really signed this message. Digital signatures are also probabilistic, in the sense that anybody could mash on their keyboard and have a chance of guessing Satoshi's keys. It's just a really small chance, small enough that we don't care about it. I think when people really get that, okay, it's not just a probabilistically checkable proof, it has security properties similar to a digital signature, then it becomes much more interesting.
And then, of course, with a zero-knowledge proof, you take that signature algorithm and you can replace it with any program. And that's really exciting. In order to justify the overhead, though, you need a big differential between a weak computer that's outsourcing computation and a strong computer, and it has to be economically interesting to pay for that proof, both in terms of time and in terms of compute. The first place where that asymmetry was enormous is when the weak computer is Ethereum: this global computer with less than the power of a smartphone, while your powerful computer is your laptop or even your phone. That asymmetry started to justify it. But now the overhead is coming down really fast. For us, we've seen improvements of, by some measures, 200,000x since we got started. That means things that seemed ridiculous are starting to be a bit more practical; even that original scenario of wanting to trust a server when you're a weak client starts to make sense.
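The digital-signature analogy Jason draws, a proof that you know a secret key, checkable by anyone, forgeable only with negligible probability, can be made concrete with a toy Schnorr signature. Everything below is illustrative: the group parameters are far too small to be secure, and the function names are made up for this sketch.

```python
# Toy Schnorr signature (illustrative, insecure parameters): signing is a
# non-interactive zero-knowledge proof of knowledge of the private key x.
import hashlib
import secrets

p, q, g = 2039, 1019, 4  # tiny group: g generates the order-q subgroup mod p

def H(*args):
    """Fiat-Shamir hash, reduced into the exponent group Z_q."""
    data = ".".join(str(a) for a in args).encode()
    return int(hashlib.sha256(data).hexdigest(), 16) % q

x = secrets.randbelow(q - 1) + 1   # private key (the "secret" being proven)
y = pow(g, x, p)                   # public key

def sign(msg):
    k = secrets.randbelow(q - 1) + 1   # fresh nonce per signature
    r = pow(g, k, p)                   # commitment
    e = H(r, msg)                      # challenge derived by hashing
    s = (k + x * e) % q                # response; reveals nothing about x alone
    return (e, s)

def verify(msg, sig):
    e, s = sig
    # Recompute the commitment: g^s * y^(-e) = g^(k + x*e) * g^(-x*e) = g^k
    r = (pow(g, s, p) * pow(y, -e, p)) % p
    return H(r, msg) == e

sig = sign("hello")
print(verify("hello", sig))   # → True
```

Replacing the fixed "I know the discrete log of y" statement with an arbitrary program is, as Jason says, the conceptual jump from signatures to general-purpose ZK proofs.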
**Speaker C:**
Man, you're just completely blowing my mind right now. Part of the amazing thing about this podcast is the opportunity to interview people who are smarter and more experienced than you, and to have these moments of, oh, I know nothing. As someone who entered this space from literally hearing Hayden Adams on a Bloomberg podcast, to thinking that DeFi was the purpose, to entering through crypto Twitter, to eventually figuring out that ZK is the future, what I thought was my ultimate unlock was realizing that we thought ZK was about privacy, but really it's about projecting computation. And what you're telling me right now is, yeah, that's what we were thinking about in the Air Force in 2015. And just as with everything in Ethereum, going back to reading Vitalik's white paper and seeing the types of ideas he was outlining that we're still building today, it's like, oh my God, all this technology and all of the insight and the knowledge and the understanding has been there, and we're just witnessing the magical moment of it all coming together and happening.
**Speaker A:**
There was this under-the-surface exponential growth curve in terms of how fast this stuff was getting. At some point it crossed the threshold of practicality, maybe around a year and a half or two years ago, let's say. And now it keeps getting more practical, and all the infrastructure gets built, right? Because there's still this huge gap between when something works in theory, to when something works in practice, to when you can actually build on it.
**Speaker C:**
And if you had to put a marker in the sand for what happened two-ish years ago, what bridged us from that moment to practicality?
**Speaker A:**
That's a good question. I don't have a great answer to it. To me, it's the accumulation of innovations over time rather than a particular single breakthrough. It's a Moore's-Law-like process, where you can't really nail down exactly what it is, but with some regularity those breakthroughs keep happening.
**Speaker C:**
Yeah, for sure. And since we're still at the beginning of the podcast and talking about your background, can you tell me a little bit about what you were working on from 2015 until two years ago, when your really specific expertise came to fruition? How were you engaging with the world computer until then?
**Speaker A:**
I guess there was research. My research line was still around circuits and models of computation, things like quantum computation and other unusual models. One of the things I was studying, for example, was something called holographic algorithms, which was work by Les Valiant at Harvard. This was an idea that you could take a circuit, perform a kind of transformation on it, and turn something that looks like it should take exponential time into something that takes polynomial time. That was a really interesting avenue, and it led to the development of a bunch of tools for thinking about the way circuits transform. So for me, what we're doing now is related, not directly, but related in that way of thinking about how circuits transform.
**Speaker C:**
Yeah. And since we're starting to get into the meat of it, can we take an opportunity to define, for people who are not living and breathing this, what a circuit is? I know it's something that automatically enters every podcast about ZK, and no one knows what it is.
**Speaker A:**
Yeah. And it's a bad term; it's an abuse of notation, too.
**Speaker C:**
Right.
**Speaker A:**
We're not literally talking about things put together with copper wires, right? Okay, there are different ways of describing it. What we're focused on with EZKL is a particular kind of circuit, and I'll introduce it this way because I think it's the most well-known type of circuit in programming: machine learning models, or AI models, neural networks. A neural network is a type of circuit, anything you can express with PyTorch or TensorFlow. The way you're doing it is you're setting up a DAG, a directed acyclic graph; you're setting up a series of operations, and you're thinking about the abstract syntax tree of those operations. You're kind of freezing that graph. So you start with a number, you add one to it, you multiply it by something else, you hit it with a nonlinearity, and you can think about that series of operations as a computational graph, or circuit. In some sense, when we say circuit, we just mean something that doesn't have loops, not an iterative computation. But it's a certain framing of that where you focus on each of the chunks being a tensor. In machine learning, those tensors are represented by floating points, tables of floating-point numbers. A zero-knowledge circuit is kind of the same thing, except that they're field elements, elements of a finite field, that are progressively transformed in each step. In a machine learning setting, what you're doing is taking that frozen graph, that series of operations, and using backpropagation to train it. You're saying, okay, the thing I got at the end was wrong; let's adjust each of the layers slightly until the thing I get at the end is less wrong. That's basically the idea. Here, we're not doing backpropagation; we're doing something else.
Once we have the trained series of operations that says: okay, input image, we do a convolution, we do some ReLUs, we do a bunch of operations, and at the output we say, was this image a hot dog or not? What we're doing is transforming all those operations into operations that happen on field elements in a finite field, and then proving that each step along the way was done correctly.
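The move from floating-point tensors to finite-field elements that Jason describes can be sketched in a few lines. This is a toy illustration, not EZKL's actual arithmetization: the prime, the scale factor, and all the helper names are made up, and a real circuit would enforce the ReLU with range checks or lookup arguments rather than by decoding back to a float.

```python
# Toy sketch: one "layer" (y = relu(w*x + b)) evaluated over a prime field,
# the way an arithmetization might fixed-point-quantize floating-point weights.
P = 2**31 - 1   # a small prime standing in for a real SNARK field
SCALE = 2**8    # fixed-point scale factor

def to_field(v):
    """Quantize a float to a field element (negatives wrap mod P)."""
    return round(v * SCALE) % P

def from_field(a):
    """Decode a field element back to a signed float for inspection."""
    a = a if a <= P // 2 else a - P
    return a / SCALE

def relu_field(a):
    # A real circuit proves this comparison with range checks / lookups;
    # here we just decode, compare, and re-encode.
    return a if from_field(a) > 0 else 0

# relu(0.5 * -3.0 + 1.0) = relu(-0.5) = 0
w, x, b = to_field(0.5), to_field(-3.0), to_field(1.0)
acc = (w * x) % P                     # multiplication doubles the scale...
acc = acc * pow(SCALE, -1, P) % P    # ...so rescale by SCALE^-1 mod P
y = relu_field((acc + b) % P)
print(from_field(y))   # → 0.0
```

Every intermediate value stays a field element, which is exactly what lets each step of the graph be constrained by polynomial equations.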
**Speaker C:**
So let me try to summarize, and you tell me how wrong I am. The idea, to keep it specifically in the ZK realm, is that we start with arbitrary computation. That can be anything from simple addition to something pretty complex written in code. You start with computation, and then part of the magic of ZK, and all of the research that had to happen over the last 10 years, is figuring out how to first represent that computation in what we are calling a circuit, right? The circuit is the abstract mathematical representation of that computation. We execute, or run, that circuit through our ZK proving system, and what the ZK proving system gives us is a cryptographic proof that the circuit it was given produced the result that it claims to have produced.
**Speaker A:**
Yeah. Given this input and this pre-committed circuit, I got this output. But you're also allowed to keep the input private.
**Speaker C:**
Yeah. Okay. So the point is that the circuit is the computation that you want to prove, is that right?
**Speaker A:**
Yeah. And the trick and the magic and the challenge is often figuring out good ways to turn the computation you want to prove into an efficient SNARK representation.
**Speaker C:**
For example, you made the point that the circuit can't have loops, right? And you can take human-readable code that has a loop in it and turn it into straight-line operations. But depending on how you do that transformation, it may be much more efficient or much less efficient. And so, to your point, what matters is the transformation.
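The loop-elimination step just described, turning code with control flow into a fixed, loop-free series of operations, can be sketched with a trivial example (function names illustrative):

```python
# Ordinary code with a loop: not directly expressible as a circuit.
def loopy(x):
    for _ in range(4):
        x = x * x + 1
    return x

# The same computation "unrolled" into a fixed DAG of operations:
# each line is one layer of the circuit, with no control flow left.
def unrolled(x):
    x1 = x * x + 1
    x2 = x1 * x1 + 1
    x3 = x2 * x2 + 1
    x4 = x3 * x3 + 1
    return x4

assert loopy(3) == unrolled(3)
```

Unrolling is the simplest such transformation; cleverer ones (and the choice of loop bound) are where much of the efficiency difference comes from.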
**Speaker A:**
Yeah, the transformation. A lot of the research you see is in coming up with clever arguments that make that transformation more efficient; some of them are very cheap. Let me give an example that I think can give an intuition for what's happening. One of the things we do is use what's called a lookup table to express a nonlinearity. Your most basic neural network is just matrix multiplication; after the matrix multiplication, you do an element-wise operation. If the number is less than zero, you set it to zero; if the number is greater than zero, you keep it. That's called ReLU. Or you hit it with a sigmoid function or some other element-wise transformation. For the element-wise thing, we use these lookup tables. A lookup table is just: you literally write down all the possible inputs you want to handle and all the possible outputs, and then you prove that your input-output pair is in that table. There are various arguments for doing that. One of the interesting things to keep in mind here is that in a proof, you don't always need to do everything you would do in order to compute the result. The lookup-table example shows this for element-wise operations. Suppose I do a series of element-wise operations: I start with the thing, then I add two to all the entries, then I take the square root, then I take the log, then I hit it with a tanh function, then a sigmoid function, a ReLU function; I just hit it with a whole series of functions. If I'm actually computing that, generating the witness, as they say, I need to do all those operations to find out where it gets to at the end. But if I'm doing a proof with a lookup table, I can just have the first column of the lookup table be the input, the second column be the result of all of those operations composed together, and then I can prove my input-output pair is in that table.
So in that example, no matter how many operations you add, it doesn't change the cost of producing the proof.
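A few lines of Python make the fused-lookup idea concrete. This is only a sketch of the bookkeeping, not a real lookup argument: in an actual proof system, table membership is established cryptographically, but the cost structure, one row check regardless of how many element-wise ops were fused, is the point.

```python
# Toy sketch of fusing a chain of element-wise ops into ONE lookup table.
import math

def chain(x):
    # add 2 -> sqrt -> log -> sigmoid, all composed into one function
    x = x + 2
    x = math.sqrt(x)
    x = math.log(x)
    return 1 / (1 + math.exp(-x))

# Precompute the table once over the (small, discrete) input domain.
DOMAIN = range(256)                # e.g. 8-bit quantized activations
TABLE = {x: chain(x) for x in DOMAIN}

# Witness generation runs the real ops; the "proof" only has to show the
# (input, output) pair is a row of TABLE. Its cost is independent of how
# many operations were fused into chain().
x = 7
y = chain(x)
assert TABLE[x] == y   # membership check stands in for the lookup argument
```

Adding a fifth or fiftieth operation to `chain` changes only the one-time table precomputation, not the per-proof cost, which is the "free" part Jason is describing.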
**Speaker C:**
Yeah. And nothing in life is free, right? You always have trade-offs. And in this construction, the trade-off is you're creating more and more lookup tables, which...
**Speaker A:**
No, no. What I'm saying is that in this situation, you only create the one lookup table. It is free. So the cost of proving a thousand operations, one after another, is the same as the cost of proving one operation, because you just write the input and the output and you get to skip all the stuff in the middle.
**Speaker C:**
Got it. And that, I guess, is a good transition back into our narrative. That is the point of ZK, or really of cryptographic proving in general, right? In order to verify a proof, by definition, you're not redoing the same expensive computation it took to get there.
**Speaker A:**
And the other thing lurking here is: where do all these speedups come from? Why is it that we think we can get to a point where zero-knowledge proofs are fast enough to be practical? I think they're already there, but why are they going to get more and more practical over time? Part of it is just the ingenuity and engineering of people chipping away at it. But part of it is this thing I just suggested, which is that the complexity of producing a proof can be lower than the complexity of producing the witness. That means that for some operations, we can hope that asymptotically the cost of proving goes to zero. Another example to think about is something like ECC RAM, error-correcting code RAM, where a server has an error-correcting code so that when it gets hit with a cosmic ray, you flip the bit back. You can imagine that when that was first done, there was a lot of overhead, but now it's just built into the system; we ignore it, and there's essentially no cost. There is some possible future where we get to something like that with zero-knowledge proofs, where the cost keeps falling until eventually it's essentially zero, not something we have to think about.
**Speaker C:**
Yeah. I can't remember which famous person wrote a book about this, but there's something about how, when you drive costs down at super-scalable levels and get big boosts of efficiency, it actually opens up new applications, things that sound incredibly absurd when you first think of them. That's exactly what we're talking about: in 2015, it would take something like 50x the computation to prove a program than to run it, so why would we even bother? The innovation and the scalability and the 200,000x gain make things that seemed ridiculous actually pretty casual. And once those things are casual, this is how technology happens.
**Speaker A:**
Yeah. One of the fun examples happened this past weekend at EZKL: one of the people on our team built a hackathon-sized project, in seven hours, an on-chain tic-tac-toe game, and did it in the following ridiculous way, which is really fun and kind of touches on the way people are starting to use LLMs. Instead of coding the rules of the game, he just enumerated a lot of examples and trained a model to decide whether the game had been played according to the rules or not, and then deployed that model behind a zero-knowledge proof. That runs in less than a second. Less than half a second, actually, the proof of the model that says yes, this game is legit, or no, that was a cheating move.
**Speaker C:**
Yeah. I'm definitely still struggling a bit with how this fits into gaming quite yet, but seeing these early examples already shows this is clearly a path we're going down. So it's super cool. But okay, let's talk about what you are building. I've been calling it E-Z-K-L, but I heard you call it Ezekiel.
**Speaker A:**
So, EZKL, like Ezekiel: easy zero-knowledge learning, or easy zero knowledge for liberty, or...
**Speaker C:**
Yeah, no, I'm currently going through a naming thing right now; it's literally the worst part about having ownership of anything. So this one I like: at least with Ezekiel, you can pronounce it.
**Speaker A:**
Yeah.
**Speaker C:**
So can you start us off by just telling us: what is it, what's the problem you're trying to solve, and where are you in the process?
**Speaker A:**
Yeah. You can think about it as a compiler that turns machine learning models, or AI models, or general computational graphs expressed in Python, into a circuit. And by circuit, I mean a prover and verifier pair, right? For it to make sense, you have to have a way to prove it and a way to check that the proof is correct. We also try to have a kind of managed pipeline to help people deal with this, because the process of creating provers and verifiers generates a lot of cryptographic artifacts, and cryptographic artifacts are a little annoying to deal with: when they're wrong, it's just, well, this random-looking 256-bit string is the wrong random-looking 256-bit string. So we have some tooling around that to help. But yeah, that's the basic thing. And then, in terms of verifiers, the focus is on on-chain verifiers, things that can run on Ethereum, for lots of reasons. Obviously it's kind of a big target and what a lot of people are interested in: you can extend what you can do with Ethereum to put AI models on chain, or just complex business logic. And also because I like the epistemology of it. One of the questions in this whole space is: okay, great, you have a prover-verifier pair, but if you give me the prover and you give me the verifier, how do I know you gave me the right verifier? How do I know this verifier corresponds to the thing I want to be true? One simple answer is that you deploy the verifier on chain. Everyone knows where it is; it's a consensus fact. So you use a little bit of consensus to solve that kind of public-key-infrastructure-type problem. And so we like the use of on-chain verifiers for that, although we don't have to use them.
**Speaker C:**
No, I mean, I think, as you said, the reason this is starting to get interesting is that for the first time we have this differential in computational power between computers, or VMs, that are relevant to each other.
**Speaker A:**
Right.
**Speaker C:**
And so in the past, yeah, we used to be able to launch Apollo 11 with a computer less sophisticated than my TI-81, but we didn't think, how do we get these systems to combine to create something even more powerful? We just said, let's throw out the old systems for the better systems. So to not target the on-chain verifiers would be to miss the opportunity you identified; that totally makes sense to me. But let's break it down. It sounds like there are three distinct parts, right? There's the circuit-generation piece, the prover, and the verifier. The verifier we briefly talked about, but just to be explicit: it's the piece of code that exists, probably on chain, though as Jason just said, it can be deployed elsewhere, that takes in the cryptographic proof and whatever other data is needed with it, and is able to verify whether the proof was correct with those inputs. Is that correct? Okay, so the verifier is off chain. It's actually a pretty intensive piece of software.
**Speaker A:**
The prover. You mean the prover.
**Speaker C:**
Sorry, the prover. The prover, yeah. Verifier on chain, prover off chain. It's actually a pretty intense piece of software, probably going to need a lot of computation, and that's why it can't be on chain.
**Speaker A:**
Yeah, well, it varies, actually. Yes, it can't be on chain, because that's far too much. But the amount of computation it needs, again, this is where we get most of the performance gains; the ones I quoted earlier are in how fast the prover runs. So the resources required are constantly falling.
**Speaker C:**
And that's where a lot of the Moore's Law stuff you're talking about comes in.
**Speaker A:**
And there are kind of three classes you can think of, three targets. So I often ask people, when they want to use this: what is your verification environment? Do you want to verify on chain? Do you want to do WASM, like in the browser? We can do that too. And what is your proving environment? Does it have to fit in a browser? Does it have to fit in a phone browser? That's the most difficult environment. Or a laptop or server? For the biggest models, the biggest computations, we go all the way up to: you have to have this giant server with a terabyte of RAM or something. But some things do fit in WASM, and then you can prove even on a phone. And the other thing that happens, and this is true across ZK (we have one particular implementation, but there are lots of other implementations), is the idea of a two-stage proof. You have a first proof which is easy to produce but hard to verify, because it's large or has some other issues that make it hard to verify. You run that easy-to-produce, hard-to-verify proof on your end device, and that hopefully has the blinding factors in it; whatever you wanted to hide is now hidden. Then you take that proof and send it to a server, which performs a compression, or second-stage proof, and turns it into something that's easy to verify on chain. For our system, we use aggregation proofs for that second stage right now. In other systems, like Polygon Hermez, or, sorry, the stuff that Jordi is doing now, I forget the name of it.
**Speaker C:**
But the zkEVM stuff from Polygon.
**Speaker A:**
Yeah, the zkEVM stuff. I forget exactly which project it's called. But you do a first stage which is FRI-based, and then a second stage which is Groth16-based. So that's another way to do it: you have a verifier, written in a different proof system, that can compress the first proof into something that's relatively easy to verify.
**Speaker C:**
Yeah, for sure. And that is the magic of computer science. You're a math professor, but I'm sure you remember the basics that you guys taught us, which is essentially that abstraction is always the answer. Always. If something's too big, or needs to be optimized or simplified or whatever, you abstract the parts away into specialized pieces, so each individual thing can be hyper-specialized, or in this case fitted to the computational resources that are available. So yeah, it makes sense that there would be a lot of different options, and more and more flexibility, as the technology stack matures.
**Speaker A:**
So, abstraction. I love abstraction. Of course, mathematics is all about building towers of abstraction. And so when we come to cryptography, we're like, oh my God, this is all just very detailed: do this, do this, do this. And we want to build abstractions. So a lot of the work we do in code is to build these layers of abstraction, to make it easier to express high-level programs in terms of a circuit. But in an even bigger picture, as an industry right now, we are struggling to find the right abstractions. We are struggling to find what the right pattern is to create a zero-knowledge app. Another way to think about this is that we are collectively discovering what a web framework should be, as if we'd never heard of MVC or any other web framework pattern. We have to figure out how to separate the parts of the system, verifiers and provers, what goes on chain and what doesn't, and discover the patterns that are easy to understand but powerful enough. However, I think it's really important that we don't simply deliver a VM and say, okay, here is a computer, please build whatever you like, because that's not enough; I can already go to Lenovo and get a computer. As a software developer, you have to make some choices for me, narrow down the possibilities, give me a framework, give me something that's opinionated, so that I can quickly build with it. And yes, there'll be some limitations, but figuring out what the right opinions are and what the right design patterns are is, I think, going to be critical to getting this stuff right.
**Speaker C:**
So let's continue finishing the stack, and then I want to get back to what your position is, at least today, on what the right configuration of all these things is. But just to wrap up: the prover might be compound, might have multiple stages, might exist on supercomputers, and as Moore's-law-like effects improve, it might be happening just in your browser. And what it's doing is taking the circuit, which is the skeleton of the computation, what is going to happen to your inputs, and then taking the input, running those together, and giving you the cryptographic proof that this input, through this circuit, gave you this output. And this is how you can verify it.
**Speaker A:**
Right? Correct. Okay.
**Speaker C:**
And then the final piece of what you're building is this, I think you kind of referred to it as a compiler, but it's essentially a piece of code that says: okay, given your Python code or your neural network, or you mentioned one other thing that I'm forgetting, but given your code, give it to our compiler and we will transform it into a circuit that can be understood by a prover. Is that right? So is that a pretty intensive piece of software that needs to live in a server farm, or how does that work?
**Speaker A:**
Yes and no. For the cutting-edge, bleeding-edge computations, it's always going to be a big server, but for many practical computations, it doesn't need to be. So one of the things we did early on was to build Python bindings and make it so that you can use a Google Colab notebook, or another Jupyter notebook: you can define your model, train your model, import Ezekiel, and use it to transform the model into a circuit, all inside a Google Colab notebook. That's usually enough resources to do the setup and to do a proof, as long as it's not an enormous model. And of course "enormous" changes all the time. Right now we can do around a billion flops in a proof, a bit more than that. That's not that many flops by today's standards, but it's enough to do interesting things. So the resources are not bad, actually.
**Speaker C:**
Cool. All right, so going back to what you were originally saying, that you think it's important to have an opinion, or some sort of structure, to how this is built. It sounds like Ezekiel, at the base layer, is developing all these different tools that anyone can configure and mix and match. But how does, I guess, Ezekiel as the organization, or the collection of people, work? And then how do you build these structures that can inform devs?
**Speaker A:**
Yeah, great. So you can think of it as an onion of software. There's a little bit of core cryptography work, which is obviously a huge amount of work; that's the core, and if that doesn't work, nothing works. So there's this core compiler that's doing all the cryptography, and even below that, there's a cryptographic system that uses a bunch of open-source libraries. Then there's the open-source compiler that transforms your input program, and then there's kind of an increasing amount of tooling around that. I mentioned the Python bindings; we also have JavaScript bindings to make it easy to serialize. One of the hardest problems in cryptography is always serialization. Everything disagrees; you constantly have to think about, oh, is this big-endian or little-endian, and how is this laid out in bytes. So there's a lot of headaches around that, and we're building tooling, layers of the onion, to try to make that easier to use. And then the biggest layer of the onion, on the outside, is the services attached to it. So if you just want to take your model and a sample input, click a button, and upload it to our server, then we'll do all the other work for you. You really just say, here's the model, and then later on there's an endpoint and you can ask for a proof: you upload the input and we'll compute the proof. That kind of stuff is there to make it easier, to shorten the development loop, so that people can go from their idea, in the context of a hackathon or a trial, to seeing something that works, in as short a time as possible. And that requires a few more opinionated choices, where we're not going to give you every possibility.
Instead of, for example, letting you decide the calibration of the circuit, choices about how wide or tall it is and how it's laid out, we have some automated tools that will make those choices for you, and then if you don't like them, you can drill down.
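The serialization headache Jason mentions, big-endian versus little-endian, comes down to cases like this, where the same 32-byte field element means two different integers depending on byte order:

```python
# One value, two serializations. Mixing these up between, say, a Rust
# prover and a JavaScript verifier is a classic source of bugs.
value = 0x0123456789ABCDEF

le = value.to_bytes(32, "little")
be = value.to_bytes(32, "big")

assert le != be
assert int.from_bytes(le, "little") == value
assert int.from_bytes(be, "big") == value
# Reading little-endian bytes as big-endian silently gives a different number:
assert int.from_bytes(le, "big") != value
print("round-trips only when both sides agree on byte order")
```

Tooling that pins down one canonical encoding at the boundary is exactly what removes this class of bug.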
**Speaker C:**
I mean, experience-wise, this reminds me of when I tried to teach myself Node.js and Next.js and Vercel to get a little website up for this podcast. It's set up in this amazing way where I push to GitHub, and then Vercel automatically sees my code, compiles it, and puts it on a web server, and it somehow knows how to repoint my URL to this new deployment. And as someone who is not trying to build an enterprise or a corporation, I know that if things ever really took off, I might crave a lot more customizability; I might need to make those choices in the future. But this gave me the opportunity to focus on writing, I don't know, hundreds of lines of code and getting a full website working, so that you can.
**Speaker A:**
Explore an idea and figure it out. I think we're all excited about the possibilities of ZK, and I think we can talk a little bit about the big picture of why I think it's important. But what I'm certain of is that I won't be able to find all of those things. The most important thing is to make it easy for a lot of people to search that space, with a short development loop. And you can also use AI models to help you search that space, or to help you deploy ideas quickly, and that will help us figure out what it's for.
**Speaker C:**
Well, unless you have anything else to say about the product, or anything else you want to talk about with Ezekiel, I would love to spend the remaining time of this conversation talking about the big picture. I like to talk about the Ethereum endgame, and I guess we'll see if you and I agree that that's really the proper frame for anything cryptography slash distributed-computer related. But first of all, is there anything you want to close off on, or should we talk about the future?
**Speaker A:**
Yeah, let's talk about the future.
**Speaker C:**
Okay, cool. So what do you think is the end game or the point or why are we doing this?
**Speaker A:**
Okay, so a couple of waypoints. One of the things Vitalik talks about is that he thinks ZK will be as important as, or more important than, blockchain as a technology. And let's operationalize that for a second. One way that I like to think about it is this two-axis thing of how much computation goes into one proof. I was talking earlier about flops: last year it went from on the order of 10,000 flops per proof to on the order of a billion flops per proof, and I hope that should continue. So you're squeezing more and more into a single proof that then hopefully gets verified on chain. The other axis is how many proofs are happening per blockchain transaction. One of the other examples that we've been integrating with is something called Zupass, or PCD Pass, which is a product of 0xPARC that was built for Zuzalu. It's kind of like a ticketing system that you carry around. And one of the cool things about it is that you produce many proofs in order to enter a venue, or to collect a thing that's sort of like a POAP. Someone will come around with a card, you scan the card, and then it shows that you were in that room at that moment. It's very intimate; it's really cool. And there's no public knowledge of that; only your wallet knows that it has it, because it's a proof that you got it. So in that context, you can start to see tens or hundreds of proofs computed per blockchain transaction. Maybe you used a blockchain transaction to buy a ticket, and then you prove that you're a member of the group many, many times: took a poll, or participated in an event, or whatever. For me, one of the transformative things was going to a music performance and using that pass to enter the performance. And it was easy enough for the bouncer to scan and check the proof.
So I think you have these two axes, and you can imagine a world where there are billions of proofs being produced, with billions of operations per proof, per blockchain transaction. We're really using the blockchain as a kind of clearing layer. It's kind of like the L2 model, but I think even more scalable than that, because it's doing something different. To me, it's providing some of the promise of blockchain, where we ran into scalability and privacy problems and there were certain things we just couldn't implement. Now we can build those things and then use consensus only when we actually need it. One of the things I want to mention also is why I think zero knowledge is worth betting on, even though it's still sort of slow relative to something like consensus on an L2 or L1, especially L1 consensus, and MPC. One of the reasons is that anything that involves MPC or consensus, anything that involves nodes talking to each other, is always going to be fundamentally bounded by the speed of light. If you want to reach consensus all over the world, you have to wait hundreds of milliseconds, at least, for messages to move around; and if it's a chatty protocol, it might take seconds. Whereas the proof is being produced on a single device, so there's no hard limit to how fast that could go. Well, there are fundamental laws of physics, but we can expect a lot of improvement; put it this way: it's not limited by the speed of light. And so I imagine a lot of proofs being composed with each other, sort of floating out there, occasionally attaching or touching back to the blockchain. It's like a sprinkle of consensus that locks all these things into a structure.
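The speed-of-light bound on global consensus can be made concrete with a back-of-the-envelope calculation (vacuum speed of light; real fiber is roughly two-thirds of this, so actual latencies are worse):

```python
# Why globe-spanning consensus has a latency floor that a locally
# produced proof does not: even one message between antipodal nodes
# costs tens of milliseconds, and chatty protocols pay it many times.
SPEED_OF_LIGHT_KM_S = 299_792     # vacuum; fiber is ~2/3 of this
HALF_CIRCUMFERENCE_KM = 20_037    # farthest two surface points on Earth

one_way_ms = HALF_CIRCUMFERENCE_KM / SPEED_OF_LIGHT_KM_S * 1000
round_trip_ms = 2 * one_way_ms
chatty_protocol_ms = 5 * round_trip_ms   # e.g. five message rounds

print(f"{one_way_ms:.0f} ms one way, {round_trip_ms:.0f} ms round trip, "
      f"{chatty_protocol_ms:.0f} ms for five rounds")
```

A prover on a single machine pays none of this network cost, which is why its speed can keep improving with hardware and algorithms rather than geography.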
**Speaker C:**
I have so much to say based on that. But to people not in this world, well, first, to people in this world who don't really understand the purpose of Ethereum, what I always like to say is that the Internet as we know it was essentially created in, like, '95 or '96, when Marc Andreessen gave us the Internet, you know, and then it got better and better and better, and it kind of stopped when Zuckerberg invented Facebook. Since then we've gotten better UX and better-resolution videos, but we're done. And no, of course this is a new technology and it's still growing. What I see in Ethereum, not cryptocurrency, which I think is mostly garbage, but Ethereum specifically, is this hard, durable backbone of the Internet that provides us something to anchor to. And for people who are not totally lost in the ZK rabbit hole, what I say is: this is just how we express property on the Internet. I think that is the first-order effect. We can see it with Ether, and we can start to connect it to the idea that property can have representation on chain. And down that rabbit hole lies the idea of the Internet of Things, right? This is the first way that all of these refrigerators and smart cars and whatever can be connected to the Internet in a way that actually makes sense and is not so incredibly insecure, and all this stuff. But for us in the ZK world, and especially you, who's building the ZK world, it's to really understand that what ZK does is make the cost of verifying computation just collapse down toward zero. And the ramifications of that, to me, are a couple of things.
As we said, two years ago everything changed. For me, there was a specific podcast I remember: the guys from StarkWare came on Bankless in November 2021 and made a throwaway comment, like, oh yeah, somebody today is folding proteins on StarkWare. Just, yeah, it's stupid, there's no reason to do it, but just to show that we can. That was two years ago. And the other part of it is that we found the first practical application for this kind of cryptography that wasn't about identity, about hiding identity or hiding information, and that was bundling blockchain transactions. That was the first time we realized, oh my God, if we do this off chain... and I'm starting to ramble here, but I think what we're all starting to realize is that Ethereum can be the anchor, and we don't have to limit ourselves to the EVM and to this blockchain world, warts and all, that we have to deal with. The tools that you're building, the things that RISC Zero is building: so many people are starting to show us that this is general-purpose computation. And it's really a step forward in what it means to live in a world where we have supercomputers in our pockets.
**Speaker A:**
Yeah. A couple of other metaphors that I like, just things I want to touch on that are reasons to be excited. One of them comes from Brian Gu from 0xPARC: that it's kind of like a language of truth. It lets you restrict yourself to a language in which, in some sense, only true things can be said. Given the rules that you've decided, and the prover and verifier and so on, given these inputs, you perform this transformation and you get an output. And then those things are composable and have transitive trust properties. That's something that really unleashes a whole new type of communication power. And one of the other concepts I want to come back to, and this is a whole other can of worms, is that I actually think that in terms of property, cryptocurrency in the sense of tokens and ledgers is kind of a skeuomorphic thing. It's like the faster carriage instead of the automobile or the airplane. What we've built here with these tools of consensus and cryptography, zero-knowledge proofs and signatures, can express much more complex things than "here's a ledger with some number of tokens, and they're transferred somewhere else." We don't really have another way of representing or understanding economics other than that, because we're used to this system; it's been with us for a long time, 600 years or so of double-entry accounting. But there's a wide range of possible economic models that could grow out of having this power. Another way I say that is, in a sense, money itself is a zero-knowledge proof. We've summarized the whole history of transactions.
All the things I ever did for someone, and that were never done for me, are summarized in some kind of bank balance by an old ledger system of banks talking to each other. And you just need to test against that balance: does this person have enough money to buy a mango, or whatever? Now we can do much more sophisticated proofs that take into account a lot more information, and it's going to be really interesting to see what people build with that power.
**Speaker C:**
Man, that's incredibly mind-blowing. Not to put you on the spot, but do you have any thoughts on what alternative models look like, now that we have more expressive power?
**Speaker A:**
Yeah, I mean, I have only bad ideas, but let me share one of those bad ideas. Okay, so to expand on what I was saying: you could imagine a world where, instead of attaching an amount every time you do a transaction, you just record what happens. Someone mows the lawn for me, and we just record, oh, you mowed the lawn, but we don't attach a price to it. And then, at any future point, you can go back and attach prices to everything. Those prices can vary, either because of how much you value past actions, or because different societies place different values on things. So for example, you say, oh, this society thinks burning carbon is bad, so retrospectively, all the times you burned gas are going to cost you more. So you can pick these prices retrospectively, or you can even never bother to pick the prices, because you know that will continue to happen in the future, and at each point you can do these zero-knowledge proofs that summarize all the information that came before. Another bad idea is that you can have an AI. So you keep track of all the things that you ever did, without attaching prices to them, and then you feed that into the AI. You call OpenAI and say, hey, this is all the good and bad things that Rex did; should he be allowed to buy this yacht or not? OpenAI makes the decision. And you're like, oh, that's like a terrible dictator thing. Okay, well, we could do the same thing, but with zero-knowledge AI. And it all seems very dystopian, but then you think, well, that's kind of the system that we have now, right? It's just that we use the banking system to make those decisions.
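The "record now, price later" idea can be sketched as a toy model: the ledger stores unpriced events, and any community can value the same history under its own price schedule after the fact. All names, event kinds, and prices below are made up for illustration.

```python
# A ledger of unpriced events, valued retrospectively under different
# price schedules. Events are (actor, kind, quantity) tuples.
events = [
    ("rex", "mowed_lawn", 1),
    ("rex", "burned_gasoline_liters", 40),
    ("rex", "built_barn", 1),
]

def balance(events, prices):
    """Value the same event history under a chosen price schedule."""
    return sum(prices.get(kind, 0) * qty for _, kind, qty in events)

# Two communities price the identical history differently:
prices_today = {"mowed_lawn": 20, "burned_gasoline_liters": -1,
                "built_barn": 500}
prices_carbon_averse = {"mowed_lawn": 20, "burned_gasoline_liters": -10,
                        "built_barn": 500}

print(balance(events, prices_today))          # 480
print(balance(events, prices_carbon_averse))  # 120
```

In the version Jason sketches, a zero-knowledge proof would let you attest to such a balance without revealing the underlying event history.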
**Speaker C:**
And I think it's super easy to call that dystopian. But so much of what I think about is a post-scarcity world, which we're rapidly entering. And with the frames we talk about, literally the only idea we have for solving that is universal basic income. That's the one. And what you're making me think of now is, okay, maybe we could find a way to use technology to facilitate a real community, a real sense of contributing and togetherness, in a way that money, which actively creates separation, does not.
**Speaker A:**
Yeah. Financialization, I think, was a great technology; it was developed 600 years ago. But we have computers now; we can be a lot more sophisticated. We don't have to reduce everything to one dimension. Another example I like to think about: a community in the Midwest, a couple hundred years ago. You can ask your neighbors to build a barn for you, and they'll all come together and help, but you can't ask them to build 100 barns. So in some sense you have a claim on that community, you have a share, but it's definitely not a ledger. It's not like you have, you know, one barn credit for the first person, or 1,000 barn tokens. There's more subtlety and humanity to it. And I think we have the opportunity to find something that feels more humane.
**Speaker C:**
Wow, man, that's incredibly powerful. So, last thing before we go: I want to touch on this intersection of AI and crypto. Right now, here's some alpha for you: if you want to go raise venture capital, just create a startup that matches those two words together, and you're golden. But to have a slightly more nuanced conversation about it: I am not really sure where these two things intersect. What I do see, as a relative layman, especially in this conversation, is that AI is about creativity and abundance, about dumping as much variation and possibility into the world as possible. And what I see with crypto is the exact opposite: it's about scarcity, and provability, and identity. "Trustlessness" is such a misnomer, right? It's all about trust. So to me, I don't know how these technologies connect, but I do see that together they're almost two halves of a story. And I see this all as the history of humanity developing technology, going back to, let's start at Gutenberg, but it goes back further: how we use technology to change communication, and what new mediums of communication allow us to achieve on this planet. So I don't really know where the intersection is, but I'm curious, as someone who lives and breathes this stuff: do you have any thoughts on what these two technologies mean for each other?
**Speaker A:**
Yeah, I mean, I think you're right on all these points, though I wouldn't say that cryptography is really about scarcity. It's more about composability, or transitive trust; scarcity is one thing you can model with transitive trust, but not the only thing. And there are a few places I see them intersecting. Of course, we make something that lets you put AI models on chain. So if there's a reason you want to do that, say you want to create a smart contract that acts more like a human judge, that decides whether a task was performed, or who should win in a dispute, or whatever, you can do that. So there's a lot that happens there in terms of being able to create and deploy objective but automated, non-human judges. That's pretty interesting, I think. One of the things that of course comes up is also provenance: content provenance, which is a huge problem as AI lets you create any image or video you like. How do you know that something was real? In some sense, you can prove all the transitions. You can say, okay, the camera signed it; you mentioned the work with Sony on the camera, where the camera signs it, or your app signs it, and then all the transformations that happen to that image, crops and noise models and so on, are all proved along the way. Depending on how you do it, that's a weaker or stronger guarantee, but then you know for sure what the chain of provenance is. I think that's probably going to become more important. The other thing, which is another one of these mind-blowing little ideas, is when you think of an AI agent. Suppose I wanted, at an extreme, to make an agent for myself: after I die, I train an LLM to make decisions for me, and I give it power of attorney. How does someone else authenticate that agent?
You can say, oh well, someone holds the keys, they run the agent, and they sign things using the cryptographic keys. But that's a problem, because someone can feed it bad data, or change the algorithm, or leak the keys. One of the really cool things you can do is an AI model that can prove itself, or whose execution can be proved. It kind of is its own key; it authenticates itself. So I can say, I sign over my power to this model. The model runs, and when it gives an answer, it comes with a signature, a proof, and as long as the receiving smart contract or wallet or whatever honors that proof, we know that the thing ran correctly.
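The provenance idea above can be sketched as a toy chain: the device signs the original image, and every edit appends a record linked to the previous state. In the ZK version Jason describes, each transformation would come with a proof; plain hashes below only show the chain structure, and the camera key is a made-up stand-in.

```python
# Toy provenance chain: a "camera" signs the original bytes, and each
# edit records the new state plus a link to the previous one.
import hashlib
import hmac

CAMERA_KEY = b"demo-camera-secret"   # hypothetical device signing key

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

image = b"raw-pixels"
chain = [{"op": "capture",
          "state": digest(image),
          "sig": hmac.new(CAMERA_KEY, image, hashlib.sha256).hexdigest()}]

# Each edit (crop, denoise, ...) records the new state and links backward.
for op, transform in [("crop", lambda b: b[:5]),
                      ("denoise", lambda b: b + b"-dn")]:
    image = transform(image)
    chain.append({"op": op, "state": digest(image), "prev": chain[-1]["state"]})

def chain_is_valid(chain, original: bytes, key: bytes) -> bool:
    """Check the camera signature and that every record links to the last."""
    sig_ok = hmac.compare_digest(
        chain[0]["sig"], hmac.new(key, original, hashlib.sha256).hexdigest())
    links_ok = all(chain[i]["prev"] == chain[i - 1]["state"]
                   for i in range(1, len(chain)))
    return sig_ok and links_ok

print(chain_is_valid(chain, b"raw-pixels", CAMERA_KEY))  # True
```

Swapping the hash links for zero-knowledge proofs of each transformation is what turns this from "trust whoever holds the chain" into "verify it yourself."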
**Speaker C:**
Yeah. Essentially what you're saying is that today we have EOAs, externally owned accounts, which is essentially what you have with a private key in your MetaMask: you have your private key and you sign stuff, and that's how you authenticate to Ethereum that the thing you want done is authorized by you. And what you're kind of painting is how we can reconstruct that for AI models, to give them similar, if not identical, levels of sovereignty on chain as people have.
**Speaker A:**
They can be their own wallet. The model itself is the wallet.
**Speaker C:**
All right, well, man, I think we're getting too dangerous here. Just to be clear: all hail Skynet. And, you know, things are about to get crazy. But no, man, Jason, I really appreciate this conversation. You've both blown my mind in terms of where we're going, and also shown me the very concrete steps we're going to take to get there, through what you're building with Ezekiel. So I really appreciate the time and the sharing of thoughts. Thank you so much.
**Speaker A:**
Thank you for inviting me.
**Speaker C:**
Of course. Before I let you go, can you just let the audience know where they can find you and how they can learn more about the project?
**Speaker A:**
The best place is ezkl.xyz. All right.
**Speaker C:**
Thank you so much, Jason. Really appreciate it, and have a good rest of your week.
**Speaker A:**
Thank you. You, too, Sam.