**Speaker B:**
Hello and welcome back to the Strange Water Podcast. Thank you for joining us for today's conversation.
I am a huge believer in the modular blockchain thesis. Put simply, the modular blockchain thesis stems from the idea that the most valuable component of a blockchain transaction is settlement, not execution. Or, put another way, it's the result of the transaction, not the computation needed to calculate the result. With that insight, we've realized that we can offload execution into high-performance, low-cost environments, which we now call rollups or L2s, without sacrificing the credible neutrality that makes blockchain valuable. And for the last couple of years, that is basically the entirety of what the modular blockchain thesis meant: rollups. And maybe, if you're deep enough in the weeds, it also meant data availability. But really, the conversation hasn't moved much further than that. It's just rollups and data availability. But like I said, I am a huge believer in the modular blockchain thesis, not just because I believe in the rollup paradigm, but because I believe the modular blockchain thesis is applicable to so much more than just execution. I believe that the modular blockchain thesis, when executed with the magic of zero-knowledge cryptography, is the answer for how we turn these underpowered blockchain computers into modern, globally decentralized supercomputers. Let's just walk through this metaphorically. Let's say a blockchain computer is analogous to a real computer, with a processor, hard drive, etc. The first step of modularization is replacing our processor with a super processor, or a rollup. The next question is the billion-dollar question of the next cycle: what else can we modularize?

Now, this intro is already starting to drag on, so I'll just cut to the chase. Perhaps the biggest difference between a blockchain computer and a regular computer is that a regular computer runs applications that have access to modern databases and data analytics. On blockchain computers, well, it is incredibly expensive to store data, let alone access and analyze it. So expensive that often it's not even possible. And so that's the next system we are going to upgrade to a modular component. We're going to bring big data to the world computer. Which brings us to today's conversation. I am so excited to announce that we have Ismael Hishon-Rezaizadeh, the CEO and co-founder of Lagrange Labs. Lagrange is developing ZK technology to give smart contracts direct access to big data tools by offloading the difficult parts while still maintaining the guarantees of a blockchain environment. Over the next hour, we'll walk through all the implications of bringing data to smart contracts. And soon you'll understand the much more expansive and grand vision of the modular blockchain thesis. For anyone looking to develop modern user-facing applications that leverage blockchain, you're going to want to pay attention. One more thing before we begin: please do not take financial advice from this or any podcast. Ethereum will change the world one day, but you can easily lose all of your money between now and then.

All right, let's bring out Ismael. Ismael, thank you so much for joining us on the Strange Water pod.
**Speaker C:**
Thank you so much, Rex. I appreciate you having me.
**Speaker B:**
Of course, man. I'm so excited to talk about all of the definitely esoteric but super exciting things that you're doing for blockchain. But before we get there, I am a huge believer that the most important part of every conversation is the people in it. So with that as a frame, can you tell us a little bit about yourself, how you found crypto, and, I guess, why you didn't run away?
**Speaker C:**
I love the last part. So my name is Ismael. I'm one of the founders of Lagrange Labs. What got me into crypto originally was this belief that, fundamentally, you could rebuild an entire new financial system from scratch that was more equitable, more internationally distributed, more internationally integrated than what we've traditionally seen in what I guess we'd call TradFi. And that was what, years ago, got me into crypto. And what made me stay was, in fact, how interesting the process of developing and building that is, all the way from the application level, DeFi, down to the core infrastructure, and even lower into the core cryptographic primitives and science that underpin all of this. And that holistically interesting space is what attracted me and what has kept me here.
**Speaker B:**
And I just have to ask, because I always wonder: what were you doing that you even realized that our financial system is broken? Or, in what way were you exposed to crypto that you realized this is solving a problem we had? Which, I think, is the big leap for most people.
**Speaker C:**
Yeah. So fresh out of college, I went to work as a software engineer at a large multinational insurance company, Manulife, which operates in Canada and in the US under the name John Hancock. And this is an entity that provides large amounts of life insurance to large portions of the US insured population. And what I kind of realized there was how much of the infrastructure and practices within a large financial institution could just be disintermediated. You have this large entity that offers ostensibly a very simple product: you pay premiums, and in the event something terrible happens, you receive a payout. And the amount you pay in insurance premiums is going to be based on the risk profile of that payout occurring across the relevant insured population. So essentially this is a very simple risk underwriting and then issuance process. But you're looking at a company that has 50,000 global employees, and you think, okay, well, this entire system has been designed in a way that's not incredibly capital efficient. It's not very technically efficient. It's just developed in layers and layers of legacy systems and legacy technology. And so I was seeing that. And then at Manulife John Hancock, they had a very, very small crypto practice when I got there, which was sort of how I started getting my feet wet in crypto. Eventually part of the crypto team left and I ended up taking over, and I built out a lot of their engineering focus for some of the decentralized insurance products they were working on at the time. And I just found that there was this opportunity to take something that was once incredibly capital inefficient and start making it more efficient.
**Speaker B:**
Yeah, I totally hear you. And I think there is so much interesting stuff in our core principles of decentralization, permissionlessness and all that. But you have to be inside a behemoth like John Hancock, or, in my background, Anheuser-Busch, to see real TradFi at scale, and to really understand that, yeah, we're sitting on top of 80 years of guys layering really bad fixes on top of even worse systems, over and over again. And people ask me, okay, do you believe that, let's say, crypto will take over finance? And I'm like, 100%, not even a question. And they're like, oh, so you think DeFi is going to take over TradFi? That's not at all what I think. What I think is that the most basic parts of what we're building here, the ability to send value, the ability to have an immutable ledger, just these basic things which you'd think would exist in large multinational companies, really, really don't. And at the very base level, forget all of the moon-boy crap about crypto, this is just a better way to move money around.
**Speaker C:**
It is, it fundamentally is. And I have never been a huge believer in the private blockchain space, you know, the JPMorgan Quorums of the world. But I do think that this idea of the public, transparent ledger, which is what a blockchain functionally is, a global state machine, right, an immutable representation of the state of the world computer, is what will fundamentally power an entirely new financial system. And I think the public and transparent nature of that, and the permissionlessness of it, is going to unlock so much global utility.
**Speaker B:**
Yeah, for sure. So why don't you walk us through your story a little bit, from this position at John Hancock running a crypto team in a very non-crypto company, to today, where you're sitting at the helm of, in my opinion, one of the more exciting frontier-tech entities that are making the world computer possible. Can you talk us through literally what your story is, of how you went from being a guy, probably in a suit, taking a paycheck, to rolling your sleeves up and becoming a founder?
**Speaker C:**
That's a fantastic question. So I actually went even worse in the wrong direction: I became a VC. I worked with great people when I was in venture, but it was the wrong direction for me. And what I realized was, one, I loved early-stage companies a lot more than I enjoyed working at a large financial services company. Right. I enjoyed being a VC more than I enjoyed working at a 50,000-person insurance company, wearing a suit, taking a paycheck. But I didn't necessarily like investing in early-stage companies as much as I enjoyed building. Right. So it was a step that was somewhat in the right direction for me, more early-stage, more entrepreneurial, but it wasn't the exact thing I enjoyed doing. The thing I enjoyed doing was working with people at an early-stage company, with a scrappier mantra, and pushing forward on some truly differentiated and innovative technology. And so I was at this early-stage fund, Renegade Partners, a fantastic group of people based out of San Francisco. I was there for a little over a year, and I decided that I wanted to take a big risk and start a company. And so I left. And shortly after I left, crypto fell off a cliff, and we had a number of the very well-publicized negative events of the last market. And what I realized when working and building at that point, and I was mostly writing Solidity contracts in the early days of Lagrange, was that a lot of the ideas we were toying with would all run into the same issues. When I wanted to build an on-chain application, there was a little black box of things I could do. I could build a contract this way, but I couldn't build it any way outside of this box. I couldn't build it the same way I'd build an application in Web2. And this was something I had known for a while, since I'd been working with Solidity for quite some time at this point. But every time I was thinking, oh, what if I build this idea, what if I build that idea, I'd run into something, and I'd run into that same thing. And that was when I thought, well, maybe instead of trying to build an idea, I build the thing that solves the issue I keep running into. And that was when Lagrange in its current form really started to take shape. And we were very, very lucky early on in bringing in some tremendous people with very strong applied cryptography backgrounds, very strong research backgrounds, and very strong engineering skill sets, who were able to take this nebulous idea of these problems that can't be solved, that keep you in this little box, and make the box bigger, so that instead of being able to build 20 things on chain, you can build 200, you can build 400, you can build an order of magnitude more, by up-leveling the infrastructure by a meaningful degree.
**Speaker B:**
Yeah. You know, your story is super interesting, and I think it's kind of the perfect model for what's going on in blockchain, and really probably what's going on in every edge-tech industry. And I've told this story too many times on this podcast, so apologies, audience, but it's the story of Alchemy, right, the RPC provider. Back in like 2015, 2016, around the time of Ethereum, they wanted to make, I believe it was a game, it might have been a DEX, I don't remember. But they spent so much time just building their own RPC infrastructure, so that they could have enough access to the data to create an application, that eventually they realized, oh my God, if everyone is having this problem, we should just solve the problem and make that the company. And that's the story you just told me, except instead of RPC, it's this black box, which we'll get to in a second.
**Speaker C:**
I love the comp to Alchemy. I aspire to be able to run Lagrange the same way Nikil and folks run Alchemy, for sure.
**Speaker B:**
So, last question before we get into Lagrange itself. While I think it's not too uncommon a profile to have people jump back and forth across the VC line, to go from VC to entrepreneur and vice versa, I feel like we don't really hear about the experience that much, and I would love to have you reflect on it. What were the lessons learned by being someone who crossed over that barrier? And do you think that you gained any skills as a VC that made you a better entrepreneur, and vice versa?
**Speaker C:**
One of the beautiful things about working as a venture investor, despite how fun it is to make fun of venture investors, is that you really get exposed to a bunch of fantastic operators who build meaningful technology. And you can watch the things that the founders who succeed do, and you can watch the things that the founders who struggle do. You can see the characteristics that truly make someone a very good leader of an early-stage company, and the characteristics that make someone who might struggle a bit more. And you can start trying to learn from, and be mentored by, in many cases, the more experienced operators who are running their respective businesses. Being in the position of an early-stage investor is one of the things that gave me an idea of what it would mean to run an early-stage business, and gave me confidence that this is something I would really like to do. And so one of the things that I think was very important, that I learned from being an early-stage investor, was the question of where in the value chain you want your business to sit. A lot of people see a problem and build a solution, and that's a very humanistic and very normal thing to do. And it oftentimes works. But oftentimes people don't ask themselves: is the thing I'm building situated at a point in this value chain where value accrues, let's say a trough, or at a point that value flows out of? And if value flows out of it, it's hard to build a business there. Someone will build the trough and vertically integrate, for sure.
**Speaker B:**
You know, recently on the pod we had Brendan Farmer from Polygon, and a comment he made was to the effect of: oh, if all the zkEVM stuff is open source, can't, you know, all the optimistic rollups just take your stuff? And his comment was, yeah, absolutely. But we believe that long term, all of this stuff, execution environments, proving technology, all of it, is going to be commodity stuff. And we don't really think there's a lot of value there. So we're going to create it because we need it. We're going to put it out there because we think other people want it. But that's not what we're going to focus on for capturing value. And I think you just put that in a first-principles way, as opposed to in the Polygon context. And that makes so much sense to me.
**Speaker C:**
I appreciate that. Brendan's a really smart guy. We've had a few calls with him and he's phenomenal. And yeah, that's very much the context as well. You think of the proving stack, you think about where you're going to build: what should be something that a business builds as a public good, and what's something that a business builds as part of its core design. Our team puts out a lot of cryptography. It's the application of that cryptography to our product problem space that captures value. It's the publication of that cryptography that offers value. And that's how we sort of think about it. Ideally, 10 years from now, when a PhD student at a university is writing a new vector commitment, they're going to be citing work that Lagrange did, not because they, for example, might be using our product, but because we've put our research in the public domain where people can read it. And that's something that I think is very important: the publication, the systematization, and then the structuring of research that can be used for posterity.
**Speaker B:**
Man, I love what you just said. That's so cool that you think both in terms of capturing value and of giving value. Both of those are important to growing your enterprise, but only one is about cash flows; the other is really about growth and about giving back. And so I really like that frame: capturing versus giving value.
**Speaker C:**
Very cool, thank you.
**Speaker B:**
So let's go to Lagrange, and we'll go back to the way that you described it. You're hacking around, you're trying to build something, and you realize that you're stuck within this box. So can we start off with: what is this box that, let's say before Lagrange, you're stuck in if you want to develop apps for blockchain?
**Speaker C:**
So I think this box is constrained, from first principles, by what you can do in the state machine on Ethereum, or in any state machine. And when you put that in concrete terms, it is: what are the things I can build? So, a simple example. I want to build a game, and my game wants to give you three points if you own a blue Pudgy Penguin, one point if you own a Pudgy Penguin with a hat, and one more point if you own a penguin with some other trait. But my contract can't actually query the Pudgy Penguins contract and query the metadata. I can't query how many NFTs you own, I can't query which NFTs you own. Right. I can basically check, if you give me an NFT index, whether you're the owner of it. But I can't say select all NFTs owned by Rex. I can't say select all NFTs owned by Rex that you've held for more than 30 days. I can't really use data in the same way that I would if I were building a Web2 application. I functionally have a contract that has a defined set of variables, a defined set of data within it, but I can't loop through my own contract the same way I would if this were a microservice. And I can't build a database or query it the same way I would in Web2. So functionally, I'm forced to structure my application in a way that is constrained by what type of data it can compute over and what type of data it can access.
**Speaker B:**
And I guess, just to really pick this apart. We talk about the EVM as Turing complete. And what Turing complete means is that you can do anything that a computer can do, sort of. But why? Can you just walk through why, let's say, you can't query the Pudgy Penguin smart contract to do this kind of analysis to give the points? Is that a gas constraint? Is that an EVM constraint? Talk through why that's a problem.
**Speaker C:**
It is a gas constraint. You can't really run a query from your contract on a different contract, because you are not going to be able to loop through all the mappings of that contract. Even within the same contract, you can't loop through mappings. There are 10,000, 20,000 penguins in there. I can't figure out which penguins you own versus which penguins someone else owns. It's just going to be way too much to loop through. A select-all query, for example, would have to touch every single element of that mapping, if I were to run a SQL query on it, in order to extract anything meaningful from it. I can't say just show me five NFTs you own, because you might own six. I can't say show me this, show me that, because effectively you have to touch every single element. And this is, in some ways, an intractable issue with any type of data storage on chain, where you can't, because of gas constraints, loop through and touch every element in a very large set, whether that be an array or a mapping. And that's true in a single-contract context, in a multi-contract context, and in a cross-chain context. You just aren't able to do that.
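To put rough numbers on that constraint, here is a back-of-the-envelope sketch in Python. The per-slot gas figure and the two-slots-per-token assumption are mine, not Lagrange's, but they illustrate why an on-chain "select all" over even a mid-sized collection blows past the block gas limit.

```python
# Back-of-the-envelope sketch (assumptions mine, not Lagrange's): why looping over
# every entry of a large mapping on chain is a hard "cannot do", not just "cannot afford".
# Rough figures: ~2,100 gas per cold storage read (EIP-2929), ~30M gas per block.

COLD_SLOAD_GAS = 2_100        # assumed cost of one cold storage read
BLOCK_GAS_LIMIT = 30_000_000  # typical Ethereum block gas limit

def gas_to_scan_collection(num_tokens: int, slots_per_token: int = 2) -> int:
    """Gas just to touch an owner slot and one trait slot per token, ignoring all other overhead."""
    return num_tokens * slots_per_token * COLD_SLOAD_GAS

for n in (10_000, 100_000, 2_000_000):  # Pudgy-sized, mid-sized, ERC-20-holder-sized sets
    gas = gas_to_scan_collection(n)
    print(f"{n:>9,} items -> ~{gas:,} gas ({gas / BLOCK_GAS_LIMIT:.1f}x the block gas limit)")
```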
**Speaker B:**
Yeah, and that's both on the first layer of, okay, maybe you just can't afford the gas to run that kind of computation. But then also, if you're advanced enough, you know there are hard limits to the amount of gas per block, and it sounds like if you build a query with any sort of sophistication, you're probably hitting that 15-to-30-million gas limit. And then it's actually a hard "you cannot do this," not "you cannot afford this."
**Speaker C:**
Right, exactly. It's a "you can't do this." And it's also within the constraints set by your application's features, your application's functions. Oftentimes you find that things that aren't optimized just aren't tenable. You're not going to use a DEX with an additional feature if it costs an additional $40, $50, $100 to swap on it. There are a lot of things that an app developer or dApp developer has to bake into their dApp that are entirely dependent on gas constraints. You just have to accept that within the execution environment there are certain things you can do and certain things you can't do. There are certain design patterns and certain anti-patterns. And what we let you do is take some of the anti-patterns, and now they become reasonable and efficient design patterns.
**Speaker B:**
Awesome. So, perfect segue, I guess, into talking through what Lagrange actually is. I think anyone listening to this podcast can already figure out that, okay, we're going to in some way use ZK to do this computation off chain and then give the result back on chain, and so on. But can you actually just walk through: what is the Lagrange application, or stack, and how does it interact with blockchain?
**Speaker C:**
Yeah. So we are a coprocessor for big data use cases on chain. And what that means is that there are certain types of computation that you can't run on chain, things that are outside of that little black box. And what we let you do is take that computation and run it off chain, and then verify the result of that computation back on chain. And specifically, this is computation that runs over the state that's available on chain. So instead of having some function call or some in-contract requirement that runs directly within the on-chain execution environment, you run that off chain in our system, and then you generate a proof of that computation that you can verify back on chain. And to make it very specific about how we are different from, let's say, a zkVM or zkEVM: we let you do things that are data-centric, things that are data-parallel, in a MapReduce format, things like SQL, Spark RDDs, or MapReduce, and that are generally centered around how do I compute over a large amount of historical on-chain state. Let's say a moving average of a price over 10,000 blocks, changes in contract balances, changing asset balances, volatility.
**Speaker B:**
So, okay, let's talk through an example. Let's say I have my points algorithm for my Pudgy Penguins-based game, and I say, for each of these traits, this is how many points we're going to allocate to each person. And all of this is too expensive to happen on chain, right? So let me just guess how this works, and you tell me where I'm wrong. I think it starts with the Lagrange smart contract, which is on chain. And I say to Lagrange, okay, this is the query I want you to run, and here's the incentive I'll give you to run it. Then, I'm assuming, somebody running Lagrange software somewhere outside of the EVM is watching that contract, sees, okay, someone just put this request and this bounty in; I, off chain, run the request, find the answer, and return to the smart contract the answer and a ZK proof. And if the ZK proof verifies on chain in the Lagrange smart contract, then the smart contract will release the bounty to the off-chain prover, release the answer to me, the Pudgy Penguins on-chain game, and then the transaction's over. Is that correct?
**Speaker C:**
That is it. You understand the whole dynamic. I love this. But there are a couple of additional wrinkles that we let you do, which we really think are going to become increasingly prevalent over the next few years. And one of those is the ability for your contract to define a database. So when we typically think of computing over on-chain state, we think, okay, we have the EVM, we have the block header, and we have the Keccak-hashed and RLP-encoded leaves within the state, storage, receipt, and transaction tries. And what we let you do is basically define some subset of your contract state, or another contract's state, that you want to be preemptively transformed into an optimized database or data structure. Basically: I would like to take my entire Pudgy Penguins contract, and let's say the accompanying off-chain metadata that can be proven as being stored there based on, let's say, the base URI of the metadata, and I want that to be kept in a SNARK-friendly data structure that can be proved as equivalent to what's on chain. And then when you run your queries, you don't run them on the EVM state, you don't run them on Ethereum data, you run them on the optimized structure, which is much faster, much lower latency, and incredibly cheap. You've effectively taken the concept of computing over on-chain state and abstracted it away into building a database of that state that you can then treat as you would treat a Web2 database. So basically, your smart contract now gets a database.
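To ground the request-and-bounty flow just described, here is a minimal Python sketch of the contract-side bookkeeping. The names (CoprocessorGateway, post_query, submit_result, verify_proof) are hypothetical placeholders rather than Lagrange's actual interfaces; the point is only that the bounty is released and the result accepted solely when a proof verifies against the query and a trusted state commitment.

```python
# Minimal sketch of the coprocessor flow described above (hypothetical names, not
# Lagrange's actual contracts or APIs): an app posts a query plus a bounty, an
# off-chain prover returns (result, proof), and the gateway only pays out and
# exposes the result if the proof verifies against the query and a state commitment.

from dataclasses import dataclass, field

@dataclass
class QueryRequest:
    query: str          # e.g. a SQL-like statement over committed contract state
    bounty_wei: int
    state_root: bytes   # block-state commitment the result must be proven against
    result: bytes | None = None
    settled: bool = False

@dataclass
class CoprocessorGateway:
    requests: dict[int, QueryRequest] = field(default_factory=dict)
    next_id: int = 0

    def post_query(self, query: str, bounty_wei: int, state_root: bytes) -> int:
        """App contract escrows a bounty and registers the query it wants answered."""
        rid = self.next_id
        self.requests[rid] = QueryRequest(query, bounty_wei, state_root)
        self.next_id += 1
        return rid

    def submit_result(self, rid: int, result: bytes, proof: bytes) -> int:
        """Prover submits (result, proof); the bounty is released only if the proof checks out."""
        req = self.requests[rid]
        if req.settled:
            raise ValueError("already settled")
        if not verify_proof(proof, req.query, result, req.state_root):
            raise ValueError("invalid proof")
        req.result, req.settled = result, True
        return req.bounty_wei   # paid to the prover

def verify_proof(proof: bytes, query: str, result: bytes, state_root: bytes) -> bool:
    # Stand-in for an on-chain SNARK verifier: it would check that `result` is the
    # output of `query` evaluated over the state committed to by `state_root`.
    raise NotImplementedError("placeholder for a real verifier")
```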
**Speaker B:**
Yeah, no, you've jumped ahead to the punchline I want to get us to, which is, I'll just say it now: I'm a huge believer in the modular blockchain thesis. But what really freaking bothers me about that term is that what it essentially means today is rollups. And if you're really advanced, maybe data availability. But when I think about the modular blockchain future, I think of: what are the next modules? And obviously the next module is going to be allowing smart contracts to have access to databases and big data. So when I look at what you're doing, yes, it's kind of a cool application that's leveraging ZK and blah blah blah, everything you tell your VCs. Right. But what I think is actually cool is that you guys are out in the wilderness, starting to show us that modularity in blockchain doesn't just mean taking inherently blockchain things and rejiggering them around. It is also about bringing in the best of Web2 and the last eight decades of computer research, and then figuring out how we modularize it and allow it to connect to Ethereum.
**Speaker C:**
I love that: the next module. I think that's so apt, because when you're building a modular blockchain, you have all of these different components, all the way from the DA to the sequencing to the execution, to the interop protocols you integrate, to the light client design that you have, which you bundle into this kind of new total package that you can configure to your application's requirements, your application-specific chain. But there's always that question of what's next. And I think people have been toying with this idea of coprocessors, which has become a bit of a meme, where you've just taken an oracle from two years ago and now it's a coprocessor. But there's so much foundational innovation going on there, that us and our competitors, who I won't mention, have all been working on, that I think is really going to shift the space forward meaningfully. The next module really can be something that's built, that's a substantive advancement of what you can do.
**Speaker B:**
Yeah. I really, really believe in the world computer, and I also really believe that Ethereum in 2015 was not the world computer. It was a decentralized version of a computer that was even worse than the one that got us to the moon. Right. It was just bad and slow and not usable. And the question is, okay, how do we get from that to the world computer? And all I can think of is: we upgrade it with better and better modules. And one of the modules we need is access to data, and the ability to analyze and make decisions based on data.
**Speaker C:**
That's exactly it. There's a world that we believe is coming, say in the next 12 to 18 months, where every smart contract, or at least the meaningful ones, and every rollup has databases, and they're inbuilt, baked into the actual chains themselves, and they're verifiable, where you have a coprocessor which is able to functionally provide a database-like experience to any dApp on that chain.
**Speaker B:**
And sorry, last Moonboy comment, then I'll bring us back to Lagrange. But, you know, Vitalik is so smart, because the more you think about even just the name Ethereum, right? You think, okay, what is Ethereum? Okay, it's Geth, maybe, just this one piece of software, and then we created different clients. But okay, because we have these different clients, it's not just Geth; it's really the databases that exist within these clients. But okay, those databases exist across 10,000 computers. So it's not really any specific one, it's just in the ether, right? And as we create these new modules, like you say, every smart contract is going to have a blockchain-native database.
**Speaker C:**
Verifiable database.
**Speaker B:**
Yeah, verifiable database. And my question to you is, oh yeah, well, where is that going to exist? And it's going to exist in the ether, in Ethereum, you know, in no one place. So, yeah, hats off to Vitalik. You go read the Bitcoin white paper or the Ethereum white paper today, and we're just now getting to some of the things he was talking about. And it's just, I don't know, man, the world computer vision has always been here, and it's just so cool to be at this stage where we're transforming it from a crappy blockchain computer into the world computer.
**Speaker C:**
That's exactly it. The space is moving very fast, and I think there is a good tranche of companies that have come out of this last bear market, assuming that we're now accelerating into the bull, that are really poised to make meaningful impacts on what it means to develop an on-chain application.
**Speaker B:**
Okay, all right, we just did our whole 20-minute Moonboy section. We'll pretend that was at the end. Let's go back to Lagrange. So we talked through this Pudgy Penguins example, where I, as the on-chain application, want this done for me, and I put it there, I put a bounty, it comes back, everyone's happy. But let's talk through what's actually happening on the fulfillment, the off-chain side. First of all, who are these entities that are watching the smart contract and waiting for, you know, requests plus bounties to show up?
**Speaker C:**
Yeah, so it's anyone who wants to run one of our provers. The first version is of course going to be us; not me personally, but Lagrange. I might run one too, if I really want to compete with Lagrange on faster proving, but I'm probably going to lose; our team's quite good at that. And eventually it'll be anyone who has restaked, likely restaked on EigenLayer, to be able to run a prover. Obviously we want there to be some type of economic incentive behind running a prover, since effectively that is what can guarantee liveness, that the proofs delivered back are delivered back in a timely fashion. But to really get to the crux of what the off-chain computation is, it's important to touch on something called a Reckle tree, a novel data structure that our team put out, originally as the first form of a vector commitment last summer at SBC, and now in a new paper that we'll be releasing soon, which unveils an optimized way to compute over existing Merkle-like data structures. It basically builds an updatable batch proof on top of a section of the blockchain's state. So let's just think of the storage trie, for example, which is a 16-arity data structure built with Keccak hashing and RLP encoding for each of the branches, where you take a bunch of adjacent nodes, you encode them with RLP encoding, you hash that, and that creates one of the adjacent nodes for the next level, and you append and hash all the way to the root. And so you can prove inclusion in this by just checking the inclusion of a leaf all the way up to the root. But what you can't do, in the current form, is construct a very, very large proof that fits on top of a large portion of that data and that can update as the data changes. So let's say I want to prove the inclusion of 10,000 leaves, the entire Pudgy Penguins contract. Well, the whole tree, or let's say a subset of the tree, is going to change, and therefore the root is going to change, every 12 seconds. So am I going to run 10,000 inclusion proofs every 12 seconds? What if I want more than one contract? What if I want a whole ERC-20 contract, two million leaves? Who's going to pay for all that compute? It's currently, in the present state of ZK, not cost efficient to do that. So what our research team has designed is this data structure, the Reckle tree, which natively superimposes itself on top of the MPT, where you can in essence prove and assert the inclusion of some subtree rooted at every point of this new data structure, and run computation over all of the nodes whose inclusion it's proving, concurrently with asserting their inclusion. And what that lets you do, and what we do, is take that entire portion of the tree, prove that it's equivalent to a new data structure, say our optimized database, and then update that inclusion proof every block, in logarithmic time. So what this lets you do is basically have a node, let's say an off-chain Lagrange node, commit to a portion of the tree, continually compute and update a proof over that tree, and then serve computation over that continually updated structure.
So you can think about this as a very data-centric form of coprocessing, where instead of it being just a stateless prover that you put out a bounty to, the node commits: I'm going to maintain a data structure for this portion of the tree, with this schema, and this is the database I maintain for you. And then you request that node to generate a proof over the optimized structure it built for you.
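For intuition about the underlying mechanics, here is a plain binary Merkle tree with a single-leaf inclusion proof in Python. This is deliberately the simple textbook case, not the Reckle tree construction itself, which maintains updatable batch proofs over many leaves and computes over them while proving inclusion; Ethereum's tries also differ (16-arity branches, RLP encoding, Keccak rather than SHA-256).

```python
# Illustrative only: a plain binary Merkle tree with single-leaf inclusion proofs,
# to ground the "prove a leaf up to the root" idea discussed above. NOT the Reckle
# tree construction, which batches many leaves into one proof and updates it in
# logarithmic time as blocks change.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """Return all levels, leaves first, root last. Assumes len(leaves) is a power of two."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def inclusion_proof(levels: list[list[bytes]], index: int) -> list[bytes]:
    """Sibling hashes from the leaf level up to (but not including) the root."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])  # sibling sits next to us at this level
        index //= 2
    return proof

def verify_inclusion(root: bytes, leaf: bytes, index: int, proof: list[bytes]) -> bool:
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# Usage: prove that leaf 2 is included under the root.
leaves = [b"penguin-0", b"penguin-1", b"penguin-2", b"penguin-3"]
levels = build_tree(leaves)
root = levels[-1][0]
proof = inclusion_proof(levels, 2)
assert verify_inclusion(root, b"penguin-2", 2, proof)
```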
**Speaker B:**
So let's put restaking completely to the side for a moment; that's just going to make things too complicated. But in this structure, I guess first I'll take the role of Pudgy Penguins, whatever company that is, and you can be Lagrange. So first of all, do I, as a company, need to come to the Lagrange protocol first and say, I am going to be running a ton of data across this smart contract, and before I get started, I need someone to commit to hosting this in a consumable form?
**Speaker C:**
Well, somebody does need to commit if you want the optimized path. Someone has to say: I care about this data and I'm going to pay for it to be optimized. You don't have to, but the queries are going to be much cheaper if someone commits, because if someone commits to that contract, then all the queries, both yours and anyone else's who wants to query it, run over the optimized structure. So you can think about this as defining a database, or preloading a public database.
**Speaker B:**
Okay, got it. So in the worst-case scenario, Lagrange can always answer the queries, but if somebody hasn't pre-committed to that specific section of data, then you're essentially just running, for lack of a better metaphor, manual queries over a large data set, as opposed to hyper-focused queries on something that you've structured to answer queries. Right?
**Speaker C:**
Yes. And it's much cheaper from a ZK standpoint, from a proving standpoint, and much faster, to run them over the optimized database.
**Speaker B:**
So in this world, again, I'm Pudgy Penguins and you're Lagrange, and I have not come to you and said I'd like someone to commit to this. You, as the protocol or a participant in the protocol, might notice, oh my God, this query gets hit 60 times a day, every day. It makes sense for me to commit to hosting this data structure, because if I'm the only one hosting it, every one of those requests is going to come to me, and therefore I'll be able to capture the fees from all 60 of those queries every day.
**Speaker C:**
That's actually very true. So it doesn't even have to be a user of Pudgy. It can be anyone who says, I'm going to host and structure this data set and take fees for querying it. And that data set is basically a portion of on-chain state. Maybe ERC-20s, maybe Pudgy, maybe Bored Apes. And frankly, we're just going to commit to a bunch of the major contracts so they're available day one.
**Speaker B:**
Yeah. And conceivably, with the power of Moore's Law and the power of folks like yourself researching ZK, it is possible that the technological lift of processing all of Ethereum into one of these optimized databases could become doable by a single node. And once you reach that point, then it doesn't really matter if people commit to subtrees or not. Is that right?
**Speaker C:**
Well, at that point then you should commit to chains.
**Speaker B:**
Ah, interesting. Okay, we'll leave that aside.
**Speaker C:**
At that point, we go to chains. We're going to have thousands of rollups, then maybe millions. So somebody still has to commit to these rollups.
**Speaker B:**
Yeah, very interesting. All right, well, we'll leave that aside to pick up at the end. But okay, that makes so much sense. Can we talk a little bit about these operators? First of all, what is your term for the people that are processing data queries and returning them back to the Lagrange smart contract? Like, the Lagrange node operators. What do you call them?
**Speaker C:**
Yeah, we should definitely hire a marketing person, since probably the name I'm going to put out is going to frustrate our future marketers.
**Speaker B:**
Yeah. Okay, so we'll say Lagrange node operators.
**Speaker C:**
Yeah, we can think of them as Lagrange node operators or Lagrange provers. It'll likely be Lagrange data provers.
**Speaker B:**
Okay, cool. So, Lagrange data provers. We're going to start off with just the Lagrange foundation, or however you guys are structured, as the first one. And I don't mean to ask you a leading or a trick question here, but is this an opportunity to create a new decentralized network of nodes? And then we can talk about, okay, that requires cryptoeconomic security because you have these new nodes, and then we can go down that whole token route, which I'm not interested in. Or is the insight here that the power of ZK really allows these individual Lagrange nodes to participate in this network without you guys having to worry too much about coordination across the nodes?
**Speaker C:**
Yeah, so that's a good question. It doesn't have to be a decentralized network of nodes, because we're not basing any of the computation on economic security. You're not running a Graph node to extract data from the chain and then staking to say you've extracted it correctly, and getting slashed. It's none of that headache, none of that mess, and none of that potential risk of compromise. It's fundamentally end-to-end zero knowledge, from the construction of the optimized data structure to the computation over that data structure. And the final proof you get back can be verified with respect to the execution environment. There's no need for any external signatures or commitments. But, and I'm sure you know this, and I hope many of the viewers who've been watching you for a while know this too: you get safety from zero-knowledge proofs, but you don't get liveness. You don't get liveness of a prover from a proof. You get the safety, the validity, of the computation. But you can't assert that if I want that computation at 2 o'clock on a Tuesday, that node is going to return the correct thing to me at 2 o'clock on a Tuesday. It could be offline; it could return nothing to me. And so that's where staking, and building a degree of economic guarantees around liveness, is very important. And that can come from proof networks or proof markets, and I'm sure there are plenty of people who can debate both of those approaches for quite a while. But at the end of the day, liveness is an economic problem, and that's where this comes in.
**Speaker B:**
Yeah, no, it makes a lot of sense. When I think about why we need proof of stake for Ethereum, part of that is about making sure all of the state databases on all of the different Ethereum nodes are in sync. And so I was thinking, okay, for Lagrange, when we're committing to this section of the Ethereum state, well, you're kind of creating a database here, so do we need to be keeping all these databases in sync? But then you really don't, because you're creating a database that's just an optimized version of Ethereum, and you don't need to make sure that all of these optimized databases are in sync with each other; you just need to make sure that they match Ethereum.
**Speaker C:**
That's exactly it. And you have the proof that it matches. All you need to make sure of is that if you've committed to the Pudgy Penguins contract to serve proofs on it, you'll be paid for the proofs you serve. So if I send a request out that I want a select-all query for penguins that are blue, because you want some game logic to check for blue penguin ownership, then you can run that query, and the person who serves it to you will take a bounty on that and deliver the proof back to you.
**Speaker B:**
That actually brings up an interesting question, which is: when you're writing queries on the Pudgy Penguin side, how are you communicating them to the Lagrange network? Are they written in a proprietary language? Are they written in SQL? Are they written as just a human request? Talk me through: what is the language of the Lagrange system?
**Speaker C:**
So, to take a step back, we can support anything that's data-parallel. We can support SQL, we can support MapReduce, we can support Spark and RDDs. But what the first version that we're coming out with will support is SQL, and it'll be a subset of SQL statements initially, in the first testnet coming soon, and then eventually the full SQL, where you define your data structure on chain. You define a series of events that correspond to specific SQL statements, with configurable parameters for whatever you want those SQL statements to be, and then when those events are emitted, you'll get the resulting proof back. So you have a Pudgy Penguins contract and you want to support select statements based on owner and based on some subset of the traits, and then you'll be able to have an event that, when emitted, will correspond to that specific statement.
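A sketch of what that event-to-query binding could look like, with hypothetical names and table/column names invented for illustration; this is not Lagrange's actual registration API, just the pattern of pre-registered, parameterized SQL templates triggered by emitted events.

```python
# Hypothetical sketch (not Lagrange's actual API) of the pattern described above:
# a contract registers parameterized SQL templates, and an emitted event carrying
# (template_id, params) is what triggers an off-chain prover to run the query and
# return a proof. Table and column names are invented for illustration.

from string import Template

# SQL templates the app commits to up front; only the listed parameters are configurable.
QUERY_TEMPLATES = {
    "select_by_owner_and_trait": Template(
        "SELECT token_id FROM pudgy_penguins "
        "WHERE owner = '$owner' AND trait = '$trait'"
    ),
    "count_by_owner": Template(
        "SELECT COUNT(*) FROM pudgy_penguins WHERE owner = '$owner'"
    ),
}

def on_query_requested(template_id: str, params: dict[str, str]) -> str:
    """What an off-chain prover would do when it sees the corresponding event:
    expand the pre-registered template with the event's parameters, run it against
    the optimized (proven-equivalent) database, and prove the result."""
    sql = QUERY_TEMPLATES[template_id].substitute(params)
    # ... execute `sql` against the SNARK-friendly copy of the contract's state,
    # generate a proof, and post (result, proof) back to the on-chain verifier.
    return sql

# Example: the event a game contract might emit to ask "which blue penguins does Rex own?"
print(on_query_requested("select_by_owner_and_trait",
                         {"owner": "0xRexAddress", "trait": "blue"}))
```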
**Speaker B:**
And you talked a little bit about the Lagrange data provers, I forgot the term for a second, the Lagrange participants, choosing to commit to a subtree or a subsection of the state. Is there any thought about giving these operators the option of committing to a specific query as well? I don't know if there are really that many performance gains here, but I'm just thinking, okay, if I, as the Pudgy Penguins game, know that I'm going to hit this exact same query 700 times per day, maybe I don't even want to pay the gas costs of the select-star-where-this; I just want to communicate up to Lagrange, hey, run query number seven. Is that at all part of the piece that you're building?
**Speaker C:**
It is, it is, with the first version that we'll be coming out with. We do target, basically, committing to a data structure plus a set of queries, and we typically think of those as tied together, because the most expensive part from an operator standpoint is the data structure. That's the heavy proving, with the SNARK-unfriendly hash functions and the RLP serializations. And the queries are going to be very optimized, because we control what we've proven equivalence to and transformed into. But that's the first version. I do see a world where, if I need to query the Pudgy Penguins contract, let's say 500 times a minute or 10,000 times a minute, and you also have to query it 10,000 times a minute, some operator is going to say, whoa, I can't handle both of you. I can maintain the data structure, but I don't have the bandwidth to support both. So I'll commit to that plus one query, and someone else will commit to the same structure plus another query. But that requires tremendous throughput on the system, I think, to make it an imperative. It's definitely something that we can support with the architecture, but the first version is really going to be committing to a set of queries, and committing to the transformed structure and the specific data schema. Basically, the database.
**Speaker B:**
Yeah, and that's actually a pretty interesting comment you just added in there. It's also about committing to a specific amount of bandwidth, or resources, too. So it's about the schema, maybe, as I said, the queries that you're willing to service, but also the amount of bandwidth you're willing to dedicate to doing this.
**Speaker C:**
Exactly, exactly.
**Speaker B:**
Cool. So I think we've talked through basically all we can about how to use this system to provide data capabilities to smart contracts. What I'm really interested in is the obvious next generation of this, which is: how do we give these databases the ability to access, store, and process data that isn't necessarily related to the EVM? An obvious one: let's say we have tokens that represent wheat futures. If we have that, a smart contract should be able to query weather data from the past to make a prediction about what the wheat token will be worth in the future. So my question to you is, one, will Lagrange eventually support non-blockchain-native data? But two, how do you build a system that supports that? How do you think about that?
**Speaker C:**
So with verifiable computation, the way I always think of it is: where do you inherit the safety of that computation from? Generally, the public statement. And within the public statement, you need a commitment to the data that your proof will compute over, or prove computation over. When you do this within the execution client, you can get that commitment outright. You can get the state committed in the block header at some point in time; let's say I get the most recent block header, and I can verify the database with respect to that block header. When I start thinking about things that are off chain, the question is: where do I get the safety from? And this is the big question with verifiable computation and verifiable databases. If I'm trusting you to supply the data for my wheat futures database, and then I'm doing a verifiable computation over the wheat futures database, it really only inherits the safety of the computation from your initial commitment. If you lied in the initial commitment, the verifiable computation doesn't have the same authenticity; the proof itself can't check whether the data is correct. That's where you need to be very smart with designing systems. What we contemplate, what we build, are systems where we can prove the computation, and the validity of that, insofar as you trust the initial commitment. So if you put a different commitment on chain, we can compute over that. If you have an application that commits data every so often on chain, there's without question a world in the future where we can let you support computation of your type, with our verifiable design, over that data. So if you have a protocol for anchoring data and you want to have a very complicated system that does that, we can do that. But you still need someone else to build, I guess, the other part of the modular stack, which is: how do you originate safe data?
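A toy illustration of that point, with a plain hash standing in for a real commitment scheme and made-up function names: the verifier only ever checks a result relative to a commitment it already trusts, so if the off-chain data behind that commitment was dishonest, a valid proof proves nothing about the real world.

```python
# Toy illustration of the point above: a proof only binds a result to whatever
# commitment appears in the public statement. With on-chain state, that commitment
# comes from the block header; with off-chain data, it's whatever the supplier
# committed to, so a lying supplier poisons everything downstream. A hash stands in
# for a real commitment scheme and there is no actual SNARK here; names are made up.

import hashlib, json

def commit(dataset: list[dict]) -> bytes:
    """Commitment the verifier will trust (on chain: a state root; here: a hash)."""
    return hashlib.sha256(json.dumps(dataset, sort_keys=True).encode()).digest()

def prove_average_temp(dataset: list[dict]) -> tuple[float, bytes]:
    """Prover computes the result; a real system would also emit a ZK proof here."""
    avg = sum(row["temp_c"] for row in dataset) / len(dataset)
    return avg, commit(dataset)

def verify(result: float, claimed_commitment: bytes, trusted_commitment: bytes) -> bool:
    # Stand-in check: in a real system the proof binds `result` to `claimed_commitment`;
    # the point illustrated here is that verification is always relative to a
    # commitment the verifier already trusts.
    return claimed_commitment == trusted_commitment

honest_data = [{"day": 1, "temp_c": 21.0}, {"day": 2, "temp_c": 19.0}]
trusted = commit(honest_data)                  # e.g. anchored on chain beforehand

avg, used = prove_average_temp(honest_data)
assert verify(avg, used, trusted)              # accepted: matches the trusted commitment

forged_data = [{"day": 1, "temp_c": 35.0}, {"day": 2, "temp_c": 35.0}]
avg2, used2 = prove_average_temp(forged_data)
assert not verify(avg2, used2, trusted)        # rejected: commitment doesn't match
```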
**Speaker B:**
Yeah. So it sounds like what you're saying is that everything is on the table with Lagrange, but in order to maintain the security, or the magic, that ZK grants you, Lagrange's position is: we are only ever going to hold data to the fidelity of what appeared on chain. And so if you want to run queries over arbitrary data, totally fine, just put it on chain first.
**Speaker C:**
Exactly. Otherwise, you know, I could load data into the database, but if someone's going to query it, they have to trust me, the one who loaded it. In which case we don't need the verifiability; they should just trust me to give them the result of any query.
**Speaker B:**
Yeah. And this question isn't centered in the world that exists today, but let's imagine the world computer: Ethereum mainnet has ossified and we're closer to steady state. Do you foresee there being maybe a dedicated Lagrange data chain? Basically, at the end of the day, your answer makes so much sense to me, which is: we're only going to prove against what is put on chain. And it's like, okay, then we just have to put stuff on chain. But we're also saying that block space is going to become scarce; putting things on chain is going to be expensive. So I guess what I'm trying to get at is: how do you reconcile, in that future state, this world where everything needs to go on chain in order to be verifiable, when getting it on chain was the problem in the first place?
**Speaker C:**
That's a great question. So the way we think of it, within the scope of how I see Lagrange now: we have no interest in building an L1, or a data chain, or a data rollup, or anything like that. I think there are companies already starting to take steps to build something like this, Chronicle, or I think Witness is a new one that announced a fundraise very recently, and they're a tremendous group. And I think this is kind of up their alley: how do we take data and timestamp it on chain? And I think you can start building a whole bunch of really, really interesting designs that take off-chain data and put it on chain. That should be, in my view, its own kind of cycle to be iterated on. I don't think the company that solves truly verifiable computation over data is the same company that solves data writes on chain, or how do I put data on chain. I think that's a question of network design, incentive design, blockchain design. What we do is a question of verifiable computation, SNARK design. And in the spirit of modular: different modules.
**Speaker B:**
Yeah, no, very cool. Very, very good answer. And I think that's what's exciting about being in this space: what you're doing is building the tools that could be on the path to the future I just described, but also aren't necessarily. And the reality is we need to build these tools in order to figure out which path we're on. Cool. All right, so with the last, less than 10 minutes here, I would be remiss if we didn't spin a little bit around restaking and around this EigenLayer moment. And I think there is so much to be said about it, both positive and negative, so I'll try to focus us a little. I think the two areas worth talking about are, one, and you pointed at it already: how would Lagrange itself leverage, what does it call them, actively validated services, in order to become more secure? And the other thing, and feel free to answer or go down whatever road you want: what I have been noticing so much on crypto Twitter, and why I'm so excited to have you on this podcast, is that I just keep seeing more and more announcements of use cases or ideas or applications that are leveraging Lagrange plus EigenLayer. So, yes, I'm very interested to hear how Lagrange can use EigenLayer, but I think maybe more interesting is: once you have these two primitives, what are the types of things we're unlocking?
**Speaker C:**
So I think the stuff that we do and the stuff that EigenLayer does is naturally synergistic. What we do is entirely ZK verifiable computation. What EigenLayer does is provide an entirely new mechanism for applications to derive security from Ethereum validators, from Ethereum stake. And I think these two things will unlock a new range of applications that we're frankly quite excited about. You can start thinking: okay, once you're able to prove computation, and once you have new safety guarantees around your validator sets, what are you going to build? So, theoretically, anything that's an AVS will likely have read access to Ethereum. If you build a rollup on an AVS, if you build a new L1 or L2 on an AVS, whatever you build on an AVS, you're going to likely have read access to Ethereum state and the Ethereum block header. And now you can verify proofs with respect to that. If you have read access to multi-chain block headers, because you're using Ethereum as this mechanism to intermediate cross-chain messages, then you can run verifiable computation over those commitments. You don't need any type of complicated infrastructure to cache historical block headers; you can just natively run over them. And so that's some of the stuff that we're frankly quite excited about, because you have a whole range of applications that will keep inheriting safety from this base layer, that will keep writing state roots, writing commitments to their own state, onto this L1, that we can then structure computation over verifiably, and allow you to manipulate and, in essence, run ETL on, to get these optimized databases and query the optimized databases. And that's what we think can unlock a lot. And restaking broadly, as a primitive, is also something that we're going to consume, because the cost of capital, the safety guarantees, and just end to end what it is, is inherently much, much better than the status quo, where you're trying to bootstrap a network on your own token.
**Speaker B:**
Yeah, and look, I definitely, as it's played out, have my criticisms of EigenLayer, but the thing that will always be the magic insight of EigenLayer is: look, we keep building these systems that are predicated on "and then we'll have hundreds of thousands of real people running real computers that participate in this network." And okay, I believe in Ethereum, I believe that's going to happen. But then we have thousands of L2s, and each one of them needs tens of thousands of people. And then we have thousands of oracle networks, and each of those needs... at a certain point, the number of nerds willing to take up space in their wife's closet to run Ethereum nodes, sorry, to run distributed computing software, is just going to run out. And I think what is magic about EigenLayer is this idea of, okay, how do we just build one network of computers and one network of trust? The computers are the Ethereum nodes, and the trust is the Ethereum stake. And then the question is, what do we use it for? And I think, if you're not living and breathing this stuff, it's a little hard to understand why EigenLayer, this thing that has nothing at all to do with ZK, was the atomic bomb that set off the ZK gold rush. But I think it's just this Venn diagram of computation and trust, and right in the middle is: how does that happen? That's Lagrange protocol. Or, and I don't mean to take anything away, any of the other 30 companies that just raised capital in the last year to leverage this absolute freaking magic technology to create the world computer.
**Speaker C:**
It is, it is. I mean, there's a phenomenal group of projects building on EigenLayer. They're an ecosystem that we believe in very strongly, and one that I think will be around for the duration of my career in crypto. And, you know, I think you place your bets on the people who you think have a true first-principles understanding of the space, and then you stick with those bets. EigenLayer is a team that we've believed in for a very long time, and one that we continue to believe in very strongly. And what we build, we think, has a good opportunity to overlap very closely with the types of things you can build on there. Especially because, as we started with, talking about this little black box of what you can do on chain, there are teams who have for the longest time been bumping into the edge of that black box, trying to build new things that are just not possible in the current state of the art. And those are the teams, the ones that we know about: they're the rollups trying to build complicated fraud-proof mechanisms, they're the restaking protocols, the LRT tokens, the complicated DeFi primitives. The things that push the state of the art and capture our imagination are the things that are hitting hardest into these walls of what can be done today on chain. And that's why we've historically gotten along so well with these teams: they run into the issues that we try to solve, and they build the things that we try to enable.
**Speaker B:**
Yeah. And again, I think ZK is the magic key that brings everything together. Because I really believe that ZK, when all is said and done, is about taking the most crazy computation and projecting it into the most resource-constrained environment possible. And that's exactly what you're doing here: it's so resource-intensive to do big data analysis on data sets, but what ZK does is allow us to project that back into smart contracts. And that is what the ZK gold rush is: figuring out, well man, what else can we compress and put in there?
**Speaker C:**
That's exactly it. That is exactly it.
**Speaker B:**
Cool, man. Well, as you can tell, this is exactly what I think is so interesting, and I could keep us going down this road; we could keep talking for hours and hours. But for the sake of everyone, we'll try to wrap it up here. So first of all, Ismael, thank you so much. I learned so much, one, about what you guys are building, but two, through what you guys are building, about how all of this stuff is going to work. So incredibly excited, so incredibly thankful, and just thank you.
**Speaker C:**
Thank you so much, Rex. I really appreciate you having me, and I appreciate the level of interest you've shown in what is a very esoteric subject, but one that we expect and hope to be very impactful in the space, for sure.
**Speaker B:**
For sure. No, I appreciate it. And before I let you go, can you just tell the audience where they can find you, where they can find Lagrange, and, if they're getting inspired about what it means to have a smart contract with access to data, what's the best way to get started?
**Speaker C:**
Yeah, so I think the best way to get started would be, ideally, to use our testnet that should be coming out in late March, or to reach out to me directly if you want to build a little bit closer with our team. You can find me on Telegram and on Twitter at ismailhr, you can find Lagrange on Twitter at @lagrangedev, one word, or you can go to our website, lagrange.dev. We have good resources there to help people who are interested in this stuff dig in more and get a deeper understanding of some of the things that our team's very excited about and will be showcasing to the market very soon.
**Speaker B:**
All right. Awesome, man. Well, I'll make sure that your Twitter is in the show notes and everyone can find you. But once again, just thank you so much and have a great rest of your day.
**Speaker C:**
Likewise.