**Speaker B:**
Hello and welcome back to the Strange Water Podcast. Thank you for joining us for today's conversation. At the end of 2022, Vitalik was on an episode of the Bankless podcast in which Ryan Sean Adams asked, what is your assessment of crypto after this crazy year? Now remember, 2022 was the year where the moon crashed into the earth and where the Adderall-induced mania finally fell apart under the Bahamian sun. So suffice it to say that there was.
**Speaker C:**
A lot to say, but Vitalik gave.
**Speaker B:**
One answer that really helped me understand the moment that we were living through.
**Speaker D:**
2022 was also the year of ZKEVMs, right? So five years ago the zeitgeist basically was that, oh, there is this fancy thing called ZK-SNARK technology, and maybe theoretically it could make abstract proofs, and you can turn things into polynomials and put stuff on top of stuff, and maybe eventually verify an Ethereum block. But come on, it's so high-overhead, it's going to take literally four weeks to make a proof and four years to actually audit the code to do any of that. But fast forward to 2022, and we have multiple ZKEVM implementations that are all promising some kind of mainnet launch next year. This is amazing, right? Like, ZKEVMs have just gone from being a nonexistent pipe dream to being the, I think, clear and manifest long-term, and possibly even medium-term, future of scaling Ethereum.
**Speaker C:**
So here we are, about a year.
**Speaker B:**
Later, multiple ZKEVM implementations did come to mainnet. So much so that the ZKEVM itself isn't a particularly sexy problem anymore. In fact, the technology and the narrative have moved so far in such a short amount of time that much of today's conversation would be mostly unintelligible back then.
**Speaker C:**
What's your data availability strategy?
**Speaker B:**
Have you considered an alternate execution environment to the EVM?
**Speaker C:**
What pieces of your stack are going.
**Speaker B:**
To leverage restaking, and on and on and on and on. And even a year ago we could all see the big hairy problem that was beginning to emerge. Yes, the rollup-centric roadmap seems great in theory, but in practice, if it requires a massive fragmentation of liquidity and composability, does the rollup-centric roadmap destroy everything that makes crypto amazing? Well, today I am so excited to announce that we have the perfect guest to answer all of these questions and to show us the future of blockchain scaling: Brendan Farmer, a co-founder of Polygon and a co-leader of Polygon Zero. Over the next roughly 70 minutes, you will learn so much about the future of Polygon and blockchain scaling and of Ethereum, with a particularly deep dive into the new and very exciting AggLayer, a first-of-its-kind solution to an increasingly fragmented, rollup-focused world. Of all the conversations we've had on Strange Water, this might be one of my favorites. And I hope you take away just as much from Brendan as I did. One more thing before we begin: please do not take financial advice from this or any podcast. Ethereum will change the world one day, but you can easily lose all of your money between now and then. Okay, let's bring out Brendan.
**Speaker C:**
Brendan, thank you so much for joining us on the Strange Water podcast.
**Speaker A:**
Yeah, thanks for having me, Rex.
**Speaker C:**
Of course. So before we get into all the incredible stuff that's happening with Polygon and with the AggLayer, and just, like, the revolution that you're bringing to us: I'm a big believer that the most important part of every conversation is the people in it. So with that as a frame, can you tell us a little bit about yourself, how you found crypto, and eventually how you made your way over to probably the most important infrastructure entity in this industry?
**Speaker A:**
Yeah, I mean, Rex, you're hyping this up a ton. I hope I'm going to be able to live up to this. But yeah, so I think for me, I studied math in college, math and philosophy. And toward the end of my time in undergrad, I started to get into cryptography, just encryption, as a result of the Snowden leaks and just sort of worrying about privacy concerns. And so I was kind of tiptoeing around that for a couple of years. And then Ethereum launched, and I was living in San Francisco and started going to Ethereum meetups, I'm sure like a lot of people, and just kind of became really interested in cryptocurrency and Ethereum and just what you could do. And so I think for me, the journey into crypto really started accelerating when Vitalik published the Plasma paper. This was, I think, 2017, and there was this line in Plasma where it was like, okay, we provide security and safety with this cool game-theoretic mechanism using fraud proofs or whatever sort of design, and they referenced, you know, that you could also use zero-knowledge proofs or SNARKs to kind of make all of this better. And I had sort of vaguely heard about what zero-knowledge proofs were, but I didn't know if the math would be accessible or how to get into it. But that really stayed with me for a while, and I started getting more into zero-knowledge proofs. And at that time there weren't a ton of easily accessible entry-level tutorials, and so it was pretty rough. But I found myself at Zcon0. So Zcash was nice enough to give me a grant so I could attend Zcon0, and I met Daniel Lubarov, and the two of us started talking. I had just left a job at this pretty boring startup that wasn't going anywhere, Daniel had just left Google, and we sort of wanted to do similar things. And we kept talking, and, you know, after a few months we were like, you know, maybe we should work on a project together. 
And that project became Mir, which was a ZK L1 that was trying to use zero-knowledge proofs for privacy and scalability. And so we got that off the ground in 2019, and we raised some money at the end of 2019, and then immediately there was a global pandemic. And that was a really fun time to be trying to get a startup off the ground. But we managed to hire some people. We built a really good team sort of by accident, honestly. Like, we randomly came into contact with really, really talented people, and this journey sort of brought us to being acquired by Polygon. So Polygon, in kind of mid-2021, was looking to pivot into building ZK technology. So they had the Polygon PoS sidechain, and they knew that, like, in the future that wouldn't be good enough to be competitive in the L2 landscape. And to their credit, they decided to make a huge bet on ZK technology. And so they acquired three ZK teams, and we were one. And yeah, so that kind of brings us to where we are now. I work within Polygon on ZK R&D, so proving systems like Plonky2 and Plonky3, which are state-of-the-art zero-knowledge proving systems, and a Type 1 zkEVM, so a prover that allows you to take any EVM chain and immediately upgrade it into a ZK L2. And so that's what I'm focused on, and it's, yeah, really interesting.
**Speaker C:**
Awesome. So before we get to kind of modern day and what we're working on now, I would be remiss if I did not ask one of, honestly, the godfathers of the ZK revolution that we're going through. So let me put it this way. Last year the meta in the ZK world was like, let's build tools that make ZK accessible, so that application developers have access to these tools and can focus on building things that users do, and not worry about PhD-level math. And that became very obvious after we had a couple big wins in the ZK world, and then people went to go replicate them and realized, like, oh my God, exactly what you ran into in 2019. And so I would just love to take a moment to reflect on this kind of five-year evolution of ZK. And, you know, I think I just said kind of the basic part, but as someone who was there and really helped shepherd us forward, what was it like during this period where ZK went from this math that has been around for decades, that literally no one outside of the widest, tallest ivory tower has ever heard of, to something that has just joined the background lexicon of our vocabulary and knowledge?
**Speaker A:**
Yeah, I think that's a really good question. I will say, for honesty, I probably am not the one shepherding ZK technology into the future, but I get to work alongside the people that are doing that directly. So that's a nice thing. But yeah, I mean, the way that I look at it is, I was really fortunate to join the ZK space at a time when I think it was just entering a parabolic exponential growth curve. And so if you think about when I joined the space, the state of the art was sort of the Groth16 proving system. So you needed a trusted setup; the performance was good but not great. And it was inconceivable that you could build a zkVM where you're proving the validity of Ethereum transactions. That was something that was seen as a decade away, so far in the future that it was not really conceivable. And I think that we saw, and to be honest, a lot of this was catalyzed by crypto, because I think it was obvious to enough people early on that this technology was basically the perfect fit for crypto: privacy and scalability. Like, you know, from the scalability perspective, zero-knowledge proofs allow us to compress the computational load of validating a bunch of transactions into validating a single tiny proof. And so it's a complete game changer in that respect. And from the privacy perspective, it allows us to have anonymity on public ledgers. And so to me, it was very obvious early on that this was an extremely compelling technology for crypto, but it was a really, really uncomfortable position to be in at the time, to be starting a company and raising money, given the enormous amount of technical uncertainty that existed around the technology. And so I guess by 2019 we had proving systems that were transparent, like STARKs, or that only required a universal rather than a per-circuit trusted setup, like Marlin and PLONK. 
And so you could see how the restrictions and the limitations on the technology were beginning to be lifted. And I think that, honestly, our team did a lot to create this feedback loop between the sort of academic R&D and the engineering of these proving systems. And I think that we really focused on changing the discourse from a primarily academic one, where people would look at papers and they would say, okay, we're going to measure the efficiency of this proving system in terms of the asymptotic complexity of the number of field operations that it's doing. But for us, we looked at questions like, okay, we have a 256-bit field and we have a 64-bit field. We know that with the 64-bit field, each field element is going to fit in a single word of a CPU, and so it's going to be way, way faster in practice on CPUs and on hardware than a 256-bit field. And so, like, let's look at these areas where we can squeeze out as much efficiency and performance out of every component of the prover, so that we can really push this technology forward. And I think that if you look at the progression from 2019 to the current day, we've seen a huge leap forward in what's possible, both from our work and from the work of others working on binary fields and other really interesting areas.
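A toy sketch of the word-size point being made here (my own illustration, not Plonky2 code): with a 64-bit modulus like the Goldilocks prime that Plonky2 uses, every field element fits in one machine word and a raw product fits in a single 128-bit widening multiply, whereas a 256-bit modulus (secp256k1's base field below, chosen only as a familiar example) forces four-limb big-integer arithmetic on every operation.

```python
# Toy illustration of 64-bit vs 256-bit field arithmetic cost on a CPU.
GOLDILOCKS = 2**64 - 2**32 + 1    # Plonky2's 64-bit "Goldilocks" field modulus
SECP256K1 = 2**256 - 2**32 - 977  # a familiar 256-bit field modulus, as a contrast

def mul_mod(a: int, b: int, p: int) -> int:
    """Field multiplication; Python's big ints hide the limb work a CPU must do."""
    return (a * b) % p

def limbs_needed(p: int, word_bits: int = 64) -> int:
    """How many 64-bit machine words one field element occupies."""
    return (p.bit_length() + word_bits - 1) // word_bits

assert limbs_needed(GOLDILOCKS) == 1  # one word per element
assert limbs_needed(SECP256K1) == 4   # four words per element

# A raw 64-bit product fits in 128 bits, i.e. a single widening multiply,
# while a 256-bit product needs schoolbook multiplication over 4 limbs.
a, b = GOLDILOCKS - 1, GOLDILOCKS - 2
assert (a * b).bit_length() <= 128
print(mul_mod(a, b, GOLDILOCKS))  # (p-1)(p-2) ≡ 2 (mod p), so prints 2
```

The asymptotic operation counts are the same either way; the practical speedup comes entirely from each multiply mapping to native hardware instructions.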
**Speaker C:**
Yeah. So last summer I had the great pleasure to sit down one-on-one for 30 minutes with Professor Dan Boneh at SBC at Stanford. The thing is, both of us were just so awestruck by him. And I never want to put myself on the same level as Professor Boneh, but what is special about crypto is this interface between academia and entrepreneurialism. And we've seen this in tech, especially Internet tech, for the last 30 years. But there was just something so special about sitting in that conference room at Stanford, watching grad students and researchers from Microsoft, and just people that live and breathe in academia, come out, give a presentation on their paper, and then watching, like, 4,000 kids writing down, this is exactly how I'm going to turn it into a startup. And again, just to add a little more onto that: so I actually went to Stanford, I took Intro to Number Theory from Professor Boneh 11 years ago, then left the space entirely. And actually during that time, the example he gave at the end was about quantum cryptography, nothing to do with crypto at all, even though this was 2012. And so I asked him, talk to me a little bit about the transition for you personally, how it was going from quantum to crypto. And he's like, somewhere around Ethereum time I wrote a research paper, I published it, and then two months later somebody had raised millions of dollars on that paper. And I thought, it happens, it's a fluke, that was pretty cool. But it is what it is. And then, like, two months later the exact same thing happened. And that's when he realized, like, oh my God, there's something different happening here. And everything that you just said kind of speaks to that so loudly. And I guess why I'm again going to call you a godfather of the ZK revolution is not because, like, the important part is the research and the mathematics; that's been going on for 50 years. 
The important part is what's happening now, which is us figuring out that this really, really weird idea of hiding numbers behind an elliptic curve has real applications for things that are transforming the world right now. And, you know, we're still early in that process, we still get to watch it, but I just, I can't even imagine what it was like in 2019.
**Speaker A:**
Yeah, I mean, to be honest, like I said, I think it was really nerve-wracking, because we raised a lot of money and we didn't know that the proving system was actually going to be efficient enough for what we wanted to build. And so, yeah, I mean, it was this uncomfortable position where, I think, ordinarily when you think about launching a company, you don't think about using your seed round to invest in core R&D for the technology that you hope to build on. And that's exactly what we had to do. And fortunately it worked out. But yeah, I think I'm glad that we sort of entered on the right position of that parabolic curve of development, because, yeah, there are definitely a lot of elements of a crypto startup that are a little bit different than traditional tech.
**Speaker C:**
Well, I want to meet and, like, thank your VCs, because we need some people that are willing to just throw money at technology that might work.
**Speaker A:**
Yeah, yeah, yeah. Shout out to Olaf from Polychain for being willing to make those bets.
**Speaker C:**
Cool. So again, last question before we get into what's happening now. I would love to hear a little bit about your reflections on this time post-acquisition by Polygon. You mentioned that Polygon acquired three teams in this ZK space. And so I would love to hear a little bit from your perspective what that was like. Did it feel like a competition? Did it feel like, oh my God, the two people that we were competing with are now our best friends? Like, talk to me a little bit about the dynamic of being an acquired founder in a space where conceivably you could have, like, rivals.
**Speaker A:**
Yeah, yeah. I think that there was definitely a world in which acquiring three separate ZK teams that were all, like, continuing to operate autonomously was not a good idea or a good thing. I think fortunately, in this world, it ended up being a very good thing, because all three teams were sort of focused on different things. So we were all working on ZK tech, but at sort of different stages of the development process. So my team, Polygon Zero, has always been laser-focused on core proving system R&D. So Plonky2, Plonky3, just, like, pushing the underlying tech forward. And Hermez, by contrast, is really, really focused on shipping and getting something to market. And so it actually ended up being really, really complementary, because we could take the proving system advancements that we'd made in Plonky2 and transfer them to Hermez. And that was what allowed Hermez to launch a Type 2 or Type 3 zkEVM and be really fast to market. And so the third team, Miden, is focused on sort of alternate ZK VM design, so different execution environments beyond the EVM, and how we can add privacy and scalability and client-side proving. And so I think that there's a really, really nice interplay and, like, a really sort of fertile collaboration and exchange of ideas. And I think it's been a really good thing for us and for Polygon.
**Speaker C:**
Sounds like the lesson there is that it's easy to set up this situation and for it to actually be a disaster. But I think maybe it's just a testament to, you know, the M&A team at Polygon to realize that it's possible to acquire three ZK companies in a way that's complementary and about fitting a puzzle together, and not, like, let's just go suck up the market and hope we get some of the winners and then hope that plays out internally.
**Speaker A:**
Yeah, yeah, I think that's completely right. And I do think the perception at the time was, Polygon is just turning a fire hose of money on the space, trying to buy technology. And that really wasn't the case. And I think, to their credit, the Polygon team did a really good job of recognizing that we were all very internally motivated, and we were founders, and we were able to operate autonomously in a way that was a lot more productive than sort of acqui-hiring a bunch of people and squishing them all together in a single team and hoping that that would work. And so I think it was a really good thing.
**Speaker C:**
Yeah, I mean, like, the proof is in the pudding; here in February 2024 it seems to be working out.
**Speaker A:**
Yeah, yeah, yeah. So far, so good.
**Speaker C:**
Cool. So I'm having a little trouble coming up with a transition here, so I'll lean on you a little bit about this. What we need to start talking about is Polygon zkEVM, and that kind of gets us into the AggLayer. And I think it makes sense to kind of just skip over Polygon PoS, and then maybe at the end we can kind of wrap around and talk about how we get from Polygon PoS, which is a lot of TVL today, to, like, a future where... Does Polygon PoS still exist? Is it migrated over? But let's just fast forward for now straight to Polygon zkEVM. So I think anyone listening to this podcast understands most of the modular blockchain thesis. They understand the idea of layer twos and rollups and even ZK layer twos. But without just, like, doing an explain-like-I'm-five, can you talk a little bit about what the Polygon zkEVM is? And for crypto natives who kind of understand in general what we're doing, what are the important things to know about Polygon zkEVM?
**Speaker A:**
Yeah, sure, that's a good question. So the Polygon zkEVM is a new chain, and it's a chain that is, I'll say, EVM-compatible or EVM-equivalent, I'm not sure what the accepted terminology is, but it allows any developer to take any existing EVM smart contract code and deploy it seamlessly on this new chain. And it just works, same with all existing wallets, with existing developer tools. And so for developers and users, it presents an environment that's identical to Ethereum. And this is a really, really important thing, because say what you will about the EVM and its sort of drawbacks, its pros and cons, but the vast majority of crypto activity currently exists on the EVM, and, you know, the vast majority of smart contract developers and written smart contracts. And so it's really important, at least in the short term, to preserve compatibility with the EVM, and to allow all of this accumulated knowledge and tooling and applications to be migrated to a new chain where transaction fees are much lower than on Ethereum, where we can sort of optimize for these different things like latency and cross-chain composability. And so that's sort of the vision for the zkEVM. And so the zkEVM chain that Polygon released launched last March, and we actually have an update that's coming soon that we announced, I guess, last week. I don't know when this will air, but sometime within the last few weeks. And so it's an upgrade, and it's a much more performant zkEVM prover, and it exposes a new mode which we call the Type 1 mode. And so what the Type 1 allows you to do is, previously with the zkEVM, you needed to start up a new chain to have this environment that was sort of EVM-compatible or EVM-equivalent. And with the Type 1, you can actually take any existing EVM chain, whether it's an optimistic rollup, a sidechain like the Polygon PoS, an alt L1, and you can just start generating proofs for it. 
You don't have to migrate anything, you don't need to change the client. And so it's this really powerful tool that allows us to take all of this liquidity and usage and state that's fragmented across all of these different EVM chains and upgrade those chains with ZK, so that they can ideally connect to the AggLayer and enter this environment of unified scalability, or sort of composability, liquidity and state.
**Speaker C:**
And so I guess just for a quick clarification, when you talk about Type 1, you're referring to, like, Vitalik's framework of zkEVM types.
**Speaker A:**
Yeah, sorry, I should have clarified.
**Speaker C:**
No, no, I know what you're saying, but I can't even, like, come up with what Types 2 through 5 even are.
**Speaker A:**
So Type 2, it's basically, the way that Ethereum represents state, like the state trie, is really inefficient for ZK proofs. Like, it uses a hash function, Keccak, that's expensive to compute, the Merkle Patricia trie structure is expensive, it uses RLP encoding. And so for a Type 2, the idea is, let's take that state data structure and let's replace it with something that's more ZK-friendly. And so we can expose the same environment to users and developers, but on the back end it's just easier to generate zero-knowledge proofs. With the Type 1, the idea is, we need to be able to generate proofs for existing chains that are running, without any migrations. And so we just have to accept and sort of optimize around the cost of using the existing state trie and the EVM.
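To make the Type 2 idea concrete, here is a toy sketch (hypothetical code, not Polygon's implementation) of a binary Merkle commitment with a pluggable hash. Ethereum's state trie commits with Keccak-256, which is expensive to prove in-circuit; a Type 2 zkEVM keeps the user-facing EVM identical but re-commits state under a ZK-friendlier hash. The two hashes below (SHA3-256 and SHA-256 from Python's standard library, neither of which is Ethereum's actual Keccak-256 or a real ZK-friendly hash) stand in purely to show that it's the commitment scheme, not the execution environment, that changes.

```python
import hashlib
from typing import Callable, List

def merkle_root(leaves: List[bytes], h: Callable[[bytes], bytes]) -> bytes:
    """Fold leaves up to a single root using the supplied hash function `h`."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Two stand-in hash choices; a real Type 2 swap is Keccak-256 -> e.g. Poseidon.
expensive_hash = lambda b: hashlib.sha3_256(b).digest()  # "circuit-expensive" stand-in
friendly_hash = lambda b: hashlib.sha256(b).digest()     # "ZK-friendly" stand-in

accounts = [b"alice:100", b"bob:42", b"carol:7"]

# Same accounts, same tree shape: only the commitment (the root) differs.
# This is why a Type 2 chain can look identical to users and developers
# while its back-end proofs get much cheaper to generate.
assert merkle_root(accounts, expensive_hash) != merkle_root(accounts, friendly_hash)
assert merkle_root(accounts, friendly_hash) == merkle_root(accounts, friendly_hash)
```

A Type 1 prover, by contrast, has no freedom to swap the hash: it must prove the existing Keccak-based trie as-is, which is exactly the cost Brendan describes accepting and optimizing around.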
**Speaker C:**
And then as you go further out, like Type 3, Type 4, the idea is that things get more and more distant from what the Ethereum base chain looks like. But at the end of the day you're still settling on the main layer, and you're still giving the ability to, I like to say rage quit, but to withdraw on mainnet.
**Speaker A:**
Yeah, exactly. So, like, Type 3, you know, maybe you don't have access to all of the precompiles that are available, or maybe you're missing some opcodes. And then Type 4, I'm not sure if there is a Type 5, but with Type 4, it's like, okay, you're no longer in the EVM, you're in some different zkVM, but you're able to compile Solidity contracts to run on this zkVM. And there might be some compatibility issues, there might be some issues with dev tooling, but maybe you're getting, you know, an improvement in performance for...
**Speaker C:**
Generating proofs. And, you know, I can guess, but rather than that, I would love to hear just from you: when you sat down with the greater Polygon ZK team for the first time, why was it so critical to chase after the Type 1 style zkEVM, with the caveat that the typing system didn't even exist when you were doing this?
**Speaker A:**
Yeah, yeah, good point. That's a good way to put it. So the really critical thing was, and you alluded to this earlier, we have the Polygon PoS chain that has a couple billion dollars bridged to it, has a lot of usage, and so there's all of this value that's currently on that chain. And so the thought is, if we can upgrade that chain to be ZK-proven, we can connect it to the zkEVM, to other chains in the Polygon ecosystem, to chains run by third parties like OKX or Manta or Astar, and we can actually share liquidity and usage between those chains. And so, like, for us it was really about unlocking all of this latent value in the PoS chain and bringing that to the rest of the Polygon ZK ecosystem.
**Speaker C:**
Let me ask you this leading question. I very much believe that the Polygon team is incredibly Ethereum-aligned, and, just like all really open source projects, you look at the things you're building also as giving back to the greater knowledge set. And I guess the question is, how much, when you guys were originally architecting this, and even now when you're actually building it and deploying it, how much do you think this technology might end up being the ZK that enters, like, mainnet and enters the base chain once we're ready to make that upgrade?
**Speaker A:**
Yeah, so I didn't want to be presumptuous and say that earlier, but I think that that's a huge part of it. Right? Like, you know, I entered crypto through the Ethereum ecosystem. We really enjoy being within and contributing to the Ethereum ecosystem. And so the idea that we could develop technology and be the first to generate proofs for real mainnet Ethereum blocks at a very, very low cost was an incredibly exciting thing for us, because we sort of envision this future in which the mainnet Ethereum chain is ZK-proven. So you might be using some application on the web that depends on state that's on Ethereum, or depends on the ability to send transactions via a blockchain. And, you know, in order for you to be able to do that trustlessly, you can either run an Ethereum full node, which you cannot do in the browser, unfortunately, or someone can ZK-prove Ethereum and provide a proof that allows you to have full-node security guarantees in a very, very lightweight way. And so this is, I think, a really important property for the future of Ethereum, and it's something that we're excited to contribute to.
**Speaker C:**
Awesome. Well, yeah, again, I really see that Ethereum at the base layer has to go ZK for so many reasons, but at the end of the day it's just, like, if we don't, then we have Infura and Alchemy, and fine, we'll give it next cycle, let's say we have one more, and then we're basically stuck with AWS. I mean, once you re-centralize in that way, it's like, what are we even doing? And so ZK is the solution to that. And I think it makes so much sense from a business standpoint, from a branding standpoint, but also just from, like, a true open source Ethereum standpoint to be like, this is a problem that needs to be solved and we think we can contribute to it. And if we're going to build our projects with that same technology beforehand, that should only be viewed as, like, us putting our money where our mouth is, that the technology actually works. And so I think this is the vision to the... sorry, this is the path to the world computer. And I hear you that you don't want to sound presumptuous, but when we look back, this is how this story happened.
**Speaker A:**
Yeah, I mean for us, we want to be viewed as being Ethereum aligned and being positive contributors to the ecosystem. And we think that you should look at what people do and not what they say. And we think this is a really meaningful action to support the continued growth and development of Ethereum.
**Speaker C:**
All right, so we're altruists, we're fixing Ethereum. Good. Let's go back to Polygon. So we have the zkEVM, which pre this new launch was just, like, the first EVM... I think you guys are EVM-equivalent.
**Speaker A:**
Yeah, I think so.
**Speaker C:**
So you're the first EVM-equivalent L2, and that's awesome. So first, why don't you talk through, outside of this new ability to deploy Type 1 zkEVMs, what are the upgrades under the hood that have happened just to, like, the... I hate that this is the monolithic part of the zkEVM.
**Speaker A:**
Yeah. So there have been a lot of upgrades that have affected performance. They've been mostly incremental up to this point. So performance upgrades, adding precompiles like pairings and different hash functions. And so there's been a lot of incremental progress so far. I think with this new Type 1 upgrade you're going to see really, really significant improvements, because on the client side we're able to use existing EVM clients; we use Erigon. And, like, one thing that's held back the sort of growth and adoption of the zkEVM so far is that we made the decision to rely on a custom EVM client. And I think other optimistic rollups have also discovered that you should actually never try to build a custom EVM client. And so we're really excited to switch that out and to be able to support a much, much higher level of throughput and scalability on the zkEVM.
**Speaker C:**
Okay, got it. So I heard you, like, nothing worth pointing out individually, but the culmination of all these upgrades is just a straight-up huge performance boost as of this new release. So let's talk about the Type 1 deployment. So first of all, let's just walk through a user journey. Let's say I work at Anheuser-Busch and I realize that I want to do this new, like, NFT giveaway that involves tickets, and I realize there's an opportunity to use blockchain and deploy my own... Okay, this is way too advanced for a beer company, that would never happen. But let's just say I am at this stage where I realize my choice is to build a new L1, to build an L2 from scratch, or to use, like, the zkEVM. Talk me through a little bit about, if I am interested in using the zkEVM, what choice am I making? And what is the lift to get from, like, hey Brendan, I want to do this, to, we've got something running?
**Speaker A:**
Yeah, sure. So I think the nice thing about using the zkEVM is that it presents the same developer environment as on Ethereum. And again, say what you will about the EVM, there's been, I think, billions of dollars invested in improving the developer experience and the developer tooling on Ethereum and on the EVM. And so I think that the nice thing for developers is, you don't need to chew glass. There are a lot of resources, there's a lot of hand-holding that can help you do things like minting NFTs in an EVM environment. And so you can very, very easily transfer that and deploy that on the zkEVM. I guess we can get to this maybe a little bit later with the AggLayer, but the bet from Polygon is, right now the L2 landscape is fragmented. It's really difficult to move state and liquidity and assets between L2 chains, at least in a trustless way. And so the problem that we are trying to fix is, how do you give Anheuser-Busch the ability to set up their own environment, where maybe they are running in a super centralized way because they just care about performance and they want to give a really, really good user experience, but they also want to be able to plug into existing sources of liquidity and pools to allow their users to actually, you know, access liquidity for these assets that they're getting. How do we take these two things that are sort of in tension, which is the ability to have a lot of control over your environment, the ability to maybe operate in a centralized way, and at the same time access the reason that we care about crypto, which is the ability to operate on shared state and access sort of public sources of capital that exist, permissionlessly. And so that, for us, that's sort of the bet on the Polygon ecosystem: how do we build foundational technology that allows you to do that?
**Speaker C:**
Yeah. So, okay, I'm still going to push off the AggLayer for one more segment. But when we're talking about, let's say, Anheuser-Busch or any corporation: done, Brendan, you've sold me. I want to do the zkEVM, I want my own chain. You keep talking about sovereignty and my ability to make choices. Can you help me understand how, at the same time, the point of this is that it's within the EVM, you don't have to make any changes, it's beautiful and perfect, but also you have control and can make whatever changes you want? Talk to me a little bit about where the space for customization is within the zkEVM framework.
**Speaker A:**
Yeah, sure. So I might give a rambling answer here, so stop me if I'm going in a million different directions. But I think one way to look at it is that you have a lot of choice at different layers. You can use the EVM as your execution environment, and you can also use Miden or other execution environments that are more tuned for your specific use case. If you only want to issue NFTs, there are probably better ways to do that than using the EVM. But let's say you want to use the EVM. Even though you're using the EVM, you still have sovereignty and choice over how you sequence transactions. So you could run your chain on a single server. Obviously that's a terrible model for running Ethereum, or for running a public chain with a lot of DeFi activity, where there are huge trust assumptions. But for something like an NFT mint that's run by a centralized party anyway, it might actually be a very good thing for providing a good user experience. And so the point is that you as the developer should be able to make these choices, because presumably you're the person best positioned to determine what's best for your application and your users.
**Speaker C:**
Yeah, so I was working my way into this question, but I think you got us there perfectly. Because what I find so cool about the era we're entering right now is that we're realizing, through the power of ZK, that it's a mistake for anyone to shit on the EVM. Yes, I understand that it basically came out of a couple of guys jamming something out over a weekend, and it wasn't really thought through. And that's fine. But literally, have you tried using JavaScript? It's exactly the same thing. The point is more about the number of people involved and the number of people trying to make it better. That being said, the EVM does suck. And what we're finding through the power of ZK is that we can have alternate computing environments that are then projected back into the EVM through ZK. One that's somewhat interesting to me is the Solana VM and what something like Eclipse is doing. Much more interesting to me is what RISC Zero or Succinct Labs are doing, which are completely unrelated to blockchain environments at all, but they just have the ZK proof. And so with this as background, my question to you is: when we're talking about the whole structure, including the AggLayer, including all the things you're building together, if I want to build within the ZK ecosystem, sorry, the zkEVM ecosystem, am I tied to the zkEVM as my execution environment? Or is what you're building really the scaffolding, so that whatever you're building will settle regularly, and with composability, back down to mainnet?
**Speaker A:**
Yeah, so definitely the latter. And I really like the way you put it, where ZK allows you to take all of these different execution environments and project them back onto the EVM. That's a really evocative metaphor, and it's exactly how we think about it. You can imagine someone who wants to basically replace centralized crypto exchanges, but wants to do it in a non-custodial way. And probably accepting the overhead of the EVM, or even the Solana VM, is not a trade-off they want to make, because they're so performance-sensitive. So in that case, being able, through RISC Zero or through Succinct Labs' project, which I think looks amazing, to develop custom execution environments, but still plug into all of the liquidity and state and users that exist on the EVM, that's a huge, transformative thing. And I think enabling composability between different execution environments is essential for crypto to really grow past its current stage.
**Speaker C:**
So I think this is now the perfect opportunity to transition over to the AggLayer. But in order to transition, I think it would be helpful: we've talked about two terms, the zkEVM and the AggLayer, and there's another one we haven't touched, which is CDK, or Chain Development Kit. Can you talk a little bit about whether these are three different concepts or one concept? How should we think about these three things?
**Speaker A:**
Yeah, sure. So the way I would put it is that the CDK is a software development stack that allows you to launch new chains in the Polygon ecosystem. It has a client, and it has a choice of execution environments: you could use a zkEVM, you could use Miden, and in the future there will be a lot more choices. So the CDK is the building block, and you can opt for the zkEVM in a CDK chain. In fact, a lot of the chains launching as CDK chains are opting for zkEVMs. But I would look at the CDK as the unit or building block of the Polygon ecosystem, and the aggregation layer as the thing that stitches all of these CDK chains together into a single environment that feels like using a single chain, where you have unified composability and liquidity and state, and users don't have to think about bridging or trust assumptions or moving between chains to access liquidity as they use the Polygon ecosystem.
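The layering Brendan describes, a CDK chain as the building block with a choice of execution environment, can be pictured as a small chain descriptor. This is purely an illustrative sketch: the type, field names, and values are hypothetical, not the real CDK configuration schema.

```python
from dataclasses import dataclass

# Hypothetical descriptor for a CDK chain. Field names are illustrative,
# not the actual CDK config format.
@dataclass(frozen=True)
class CDKChain:
    name: str
    execution_env: str    # the developer's choice, e.g. "zkEVM" or "Miden"
    sequencer: str        # e.g. "single-server" for a centralized app chain
    joins_agglayer: bool  # opting into the shared aggregation layer

# An app chain like the NFT example: centralized sequencing for UX,
# zkEVM for tooling compatibility, plugged into the AggLayer for liquidity.
nft_chain = CDKChain(
    name="nft-mint-chain",
    execution_env="zkEVM",
    sequencer="single-server",
    joins_agglayer=True,
)

def is_zkevm_chain(chain: CDKChain) -> bool:
    """Every zkEVM chain here is a CDK chain, but not every CDK chain is a zkEVM chain."""
    return chain.execution_env == "zkEVM"
```

The point of the sketch is the direction of the relationship: the execution environment is one field among several, which is exactly the "everything is a CDK chain, but not everything is a zkEVM chain" framing that follows.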
**Speaker C:**
Got it. So everything is a CDK chain, but not everything is necessarily a zkEVM chain.
**Speaker A:**
Exactly.
**Speaker C:**
Got it. So that brings us to the AggLayer, and I think we've touched on enough of it that if you're paying attention and listening to this podcast, you probably already know what it is. But just for the sake of completeness, would you give us a definition and the overall vision of what the AggLayer is?
**Speaker A:**
Yeah, sure. So AggLayer is short for aggregation layer. And fundamentally it does a really simple thing: it provides safety for super-low-latency cross-chain interaction and composability, both asynchronous and synchronous. It allows chains to interact very quickly while maintaining safety. This is really important, because think about the way cross-chain composability works in the L2 ecosystem today. What does it take to move trustlessly between, say, the Polygon zkEVM and Starknet? You have to take your funds on Polygon zkEVM and withdraw them. So we have to generate a proof for your withdrawal, and that might take two to five minutes. Then that proof is submitted to Ethereum every 30 minutes. Then that block on Ethereum must be finalized, which takes 12 to 19 minutes. Then you have to transfer to Starknet, which takes another 12 to 19 minutes. And finally you have your funds, an hour later, on Starknet. So if we think about what's required for unified composability and liquidity, I think it's two things. First is super-low-latency composability between chains, and second is fungibility of assets. Right now we have this problem with bridges where, if you want to bridge directly between Polygon zkEVM and Starknet, you're accepting a trust assumption: you have to trust some property of a third-party bridge. But you also run into liquidity issues. If I'm using Wormhole to bridge between Polygon and Starknet, I'm taking ETH on Polygon zkEVM and trying to bridge it, but I'm not getting ETH on Starknet, I'm getting a wrapped synthetic version of ETH. There might be liquidity for that token, there might not be, but that's imposing a cost on the user, both in terms of complexity and in terms of accessing liquidity to convert from a wrapped synthetic to the native version of the token.
And so both from a usability perspective and from a cost perspective, that's just a very bad thing. The AggLayer fundamentally solves these two issues. It provides a foundation for safety and low-latency cross-chain composability, and it provides a unified shared bridge so that we can achieve fungibility of assets. So within the Polygon ecosystem, if you have ETH on the zkEVM, you can transfer it to POS or to OKX's chain and you'll just get ETH; you won't get a wrapped synthetic version of the token.
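The worst-case latency of the fully trustless path Brendan walks through is just a sum of waits. The figures below are the rough numbers he quotes in conversation, taking the upper end of each range; they are not measured values.

```python
# Rough upper-bound latency for a trustless L2 -> L2 transfer today,
# using the figures quoted in the conversation (not measured values).
steps_minutes = {
    "generate withdrawal proof on the source zkEVM": 5,    # "two to five minutes"
    "wait for the next proof submission to Ethereum": 30,  # "submitted every 30 minutes"
    "Ethereum finality for that block": 19,                # "12 to 19 minutes"
    "transfer into the destination chain": 19,             # another finality-bound wait
}

total = sum(steps_minutes.values())
print(f"worst case: ~{total} minutes")  # prints: worst case: ~73 minutes
```

Which is roughly the "your funds, an hour later, on Starknet" that motivates the whole design.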
**Speaker C:**
And I think the first negative-Nancy reaction to what you just said would be: yeah, okay, you don't have to withdraw down to mainnet and move back over, and we're creating more ZK bridges that can see what's going on in the zkEVM and know that eventually the ETH you send into that bridge will be able to move over to Starknet, so they can immediately give you the ETH on Starknet. But at the end of the day, the big insight is that if you ever want to transfer ETH from one chain to another, eventually someone has to do the reconciliation on mainnet, and every different project has a different person actually paying for that. I think the big insight of the AggLayer, to put it almost crassly, is: how do we create a system where we just don't do anything on mainnet?
**Speaker A:**
Yeah, yeah.
**Speaker C:**
And so with that as the preface, why don't we walk through literally what the AggLayer is. And because that's an open-ended question, I'll give you some structure: talk us through the architecture. What exists on mainnet? If I'm deploying a new chain using the CDK, whether or not it's EVM-equivalent, how does it interact with whatever is on mainnet? Just talk us through what is actually being built here.
**Speaker A:**
So the AggLayer is a decentralized service, and it does something very simple. It accepts chain states from new chains, or from chains in the Polygon ecosystem; it accepts proofs showing that those chain states are valid; and it allows those chains to declare dependencies on other chains, or on atomic bundles. This enables asynchronous and synchronous composability. The idea is, if I have chain A and I want to send ETH from chain A to chain B, chain A doesn't need to generate a proof, it doesn't need to post anything to Ethereum, it can just send a message to chain B. And chain B can say: okay, I've received this message from chain A. I haven't received a proof from chain A, so I don't know that this state is valid; the operators of chain A could be trying to rug me, they could try to double-spend. But I'm going to submit a new state to the AggLayer, and I'm going to include this dependency on chain A. And so I know that the only way my updated chain state can be posted to Ethereum is if the state of chain A that I'm depending on is valid and also posted to Ethereum. We can generalize this to atomic bundles. If I have a bundle of transactions that I want to execute on chains A, B, and C, I can guarantee via the AggLayer that the bundle will only be executed on each of those chains if all transactions execute successfully across all of them. So I can't have a mint on chain C if I don't burn the equivalent amount of tokens on chain A. And the really nice thing about the AggLayer is that it's agnostic to the mechanism by which chains coordinate directly. Those chains could use a shared sequencer; they could use what Justin Drake calls super builders, heavily centralized, professionalized block builders that execute across many, many chains; they could use relays. Chain B could just decide to trust chain A.
And so there's a varying degree of risk that chains accept by participating in cross-chain composability, because it is possible that chain A could generate an invalid block, that chain B could depend on that invalid block, and that the finalization of chain B to Ethereum would be delayed. That's a liveness failure. But fundamentally we know that we need super-low-latency cross-chain composability, and we believe the best way to enable that is to let chains navigate and negotiate the trade-offs between liveness and low latency for themselves.
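The dependency mechanism just described, where chain B's new state can only finalize if the chain A state it depends on is itself proven valid, can be modeled as a toy settlement check. Everything here, the names and the data structure, is an illustrative simplification of the idea, not the actual AggLayer protocol.

```python
# Toy model of the AggLayer dependency idea: a chain's state update only
# finalizes if every state it declared a dependency on is itself proven
# valid. Purely illustrative -- not the real protocol.

states = {}  # (chain, height) -> {"valid": bool, "deps": [(chain, height), ...]}

def submit_state(chain, height, valid, deps=()):
    states[(chain, height)] = {"valid": valid, "deps": list(deps)}

def can_finalize(chain, height):
    st = states[(chain, height)]
    # The state itself must carry a valid proof, and so must everything it
    # depends on, recursively. An atomic bundle is the same check applied
    # across several chains at once.
    return st["valid"] and all(can_finalize(c, h) for c, h in st["deps"])

# Chain A sends a message before proving its own state; B declares a dependency.
submit_state("A", 10, valid=True)
submit_state("B", 7, valid=True, deps=[("A", 10)])  # finalizes normally
submit_state("A", 11, valid=False)                  # e.g. an attempted double-spend
submit_state("B", 8, valid=True, deps=[("A", 11)])  # blocked: finalization delayed
```

Note what the last case illustrates: depending on an invalid block delays chain B's finalization (a liveness cost B chose to risk), but it never lets the invalid state settle to Ethereum (safety is preserved).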
**Speaker C:**
So going back to the last thing I said, the insight here is that we want to avoid touching mainnet as much as possible. Do you think this is a fair characterization of how the AggLayer works: we put all of the assets for all AggLayer chains in one smart contract, and then, using the magic of ZK, we create a sort of virtual accounting that says we're going to do all the transfers, and we're going to be assured through ZK that they're valid, but we're not actually going to move any of the assets on L1. So first of all, is that fair?
**Speaker A:**
Yeah. And if you want to be the new Polygon head of marketing... well, no, we have a very good head of marketing now, so I'll retract that. But yeah, that's the perfect way to put it. There's no on-chain footprint, or rather there's a constant-size on-chain footprint, because periodically we submit a proof to Ethereum that finalizes the states of all of the chains in the Polygon ecosystem. But this is fundamentally a constant-size proof, regardless of the number of transfers between chains or the number of chains in the ecosystem. So from Ethereum's perspective it's a really scalable system, where we're not doing a ton of L1 interaction.
**Speaker C:**
Yeah. And just to hammer the point home, these proofs that are going down to Ethereum mainnet to be verified are happening anyway; that is the concept of ZK L2s. I'm sure someone's sitting there saying, okay, so you're doing fewer proofs, but you're still doing proofs. That's not the point. The point is that the regular proofs that are just part of how L2s work get, through the magic of ZK, this virtual accounting system that is the magic of the AggLayer.
**Speaker A:**
Yeah. And the nice thing too is that we can do proof aggregation. So rather than all of these chains paying for Ethereum to verify their proofs individually, they can aggregate those proofs off-chain and then pay Ethereum to verify one proof. There could be a million chains in the ecosystem and we'd still be paying 200k gas every five minutes. And that's a really nice thing from a cost perspective.
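The economics of proof aggregation are easy to sketch: one on-chain verification costs the same no matter how many chains' proofs were folded into it, so the per-chain cost shrinks as the ecosystem grows. The 200k-gas figure is the rough number quoted above; the rest is illustrative.

```python
# Per-chain cost of verifying one aggregated proof on Ethereum, using the
# rough 200k-gas figure quoted in the conversation. Illustrative only.
VERIFY_GAS = 200_000  # constant, independent of how many proofs were aggregated

def per_chain_gas(num_chains: int) -> float:
    # Aggregation amortizes a single on-chain verification across every
    # participating chain, instead of each chain paying VERIFY_GAS alone.
    return VERIFY_GAS / num_chains

for n in (1, 100, 1_000_000):
    print(f"{n} chains -> {per_chain_gas(n)} gas per chain")
```

With a million chains, the per-chain share of each verification rounds to a fraction of a single gas unit, which is the "really nice thing from a cost perspective".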
**Speaker C:**
So something I want to key in on in what you said is the interaction between multiple AggLayer chains. With the caveat that everything is opt-in, and if you opt in and interact with a bad chain, that's on you, so it is what it is. But in this world we have chain A and chain B. Chain B gets a message from chain A, accepts the message, integrates it into the commitment of its transactions, sends it down to the base layer, and it doesn't verify. That creates a situation where, in my head, chain A could just start griefing chain B with a bunch of things that aren't going to verify, and the party paying that cost is chain B, who just has to keep resubmitting proofs until one finally verifies. So I guess my question to you is, how is that avoided?
**Speaker A:**
Yeah. So again, the cool thing about this design is that the trust assumptions are negotiated between the chains themselves. The AggLayer doesn't make chains opt into anything. If you want to accept super high latency and only interact with chains that have finalized to Ethereum, you can do that. It will be an extremely poor user experience for everyone using your chain, but there's nothing preventing you from doing it. But there are a bunch of different ways to avoid chain B constantly reverting or having its finalization delayed. Running a full node for chain A is an easy one: even without a proof, at super low latency, chain B could see that the block it's depending on from chain A is valid. They could run within a shared sequencer, where the shared sequencer runs full nodes for A and B; this is really easy to do, it can be done in parallel across multiple machines, and there's no scalability limitation in that model. Or they could use a relay. The two chains could interoperate in a manner where they only accept cross-chain interactions once a proof has already been generated. They could rely on super builders who are constantly executing transactions synchronously across many chains, where those builders provide proofs that their transactions are valid, and the AggLayer accepts those blocks once they're proven. And I think the really important thing to emphasize, and how we look at it, is that the AggLayer is foundational infrastructure that will let emergent coordination mechanisms arise on top. We don't know what the best way to enable asynchronous or synchronous composability is. But we do know that it's a core and required ingredient to have safety in any of these mechanisms.
You can't have a situation where chains are trusting a shared sequencer and the shared sequencer can rug chains or steal funds from different chains. That just doesn't work. So fundamentally, we believe that if we provide this foundation of safety, then different companies and projects and individuals can find and optimize the best coordination mechanisms, navigating the trade-offs between liveness and low latency.
**Speaker C:**
Yeah. So I want to shift gears a little bit and ask this question more from a business development standpoint than a technology standpoint. When you're looking forward, let's say five years from now: the last bull cycle brought enough awareness to this space that companies just understand this is where you go when you want to do specific things. Let's say there are 5,000 CDK chains, and I'm Anheuser-Busch, and I come to you and say, we want to do this specific thing. Again, from a business development standpoint, not a technology standpoint: how do you see the N+1th entrant to this ecosystem making decisions about the other chains? Which ones do I want to connect to? Which ones are trustworthy? How do you see that all playing out in this world of permissionlessness and inspiration and creativity and all that stuff?
**Speaker A:**
Yeah, sure. So I think the really important thing to emphasize is that, from a crypto-economic perspective, it's actually very bad for you to create liveness faults in other chains, because there's no payoff for you. You might delay the finalization of a chain by 30 seconds or a minute, but you can't really steal from that chain, and you're jeopardizing your ability to interoperate with other chains, because chains can make blacklists or whitelists of other chains. So my assumption is that the vast majority of chains will never create liveness failures, or misbehave, or engage in activity that might delay finalization of other chains. The way I look at it is: we have this ecosystem with thousands of CDK chains, and all of them are engaging in what's fundamentally a positive-sum, win-win relationship. As Polygon, we don't impose any restrictions or requirements besides the requirement to generate a proof to join the AggLayer. You can use your own sequencer, you can use your own token for gas fees or for staking, we don't do revenue share; there are no restrictions or requirements on chains. So fundamentally, the bet is that if we have this ecosystem, this economy, that has value, where there's liquidity and there are users and there's state, then every marginal chain that brings positive economic value to the ecosystem, whether through an NFT mint or some new DeFi primitive or a game or whatever, is fundamentally a win-win. It's a win from the ecosystem's perspective, because everyone in the Polygon ecosystem can access that NFT or that DeFi primitive or that game, and vice versa: anyone minting an NFT can access a huge amount of liquidity within the Polygon ecosystem. We think this will be a self-reinforcing mechanism, where it just makes sense to join the AggLayer.
You're not giving up anything. The only change is that you're paying less to settle to Ethereum, and you're able to access this tremendous amount of value and users and liquidity.
**Speaker C:**
Yeah. You know what's funny is, as we progress down the Ethereum roadmap, I find the answer to a lot of questions kind of is: we're going to build a reputation system. I'm especially seeing this with restaking and the actively validated services. And there's something totally normal about that, and crypto is perfect for it, because all of our reputation can be built off of the immutable ledger. So that's awesome. But there's something just slightly uncomfortable for me in that reputation means trust, and we're saying we're going to solve our trustless system by just having a trust system.
**Speaker A:**
Yeah. Although I do think in this case there are ways you can get around that. If you don't trust a chain, you can trust a relay or a shared sequencer or a builder, so you don't have to maintain or track the reputations of other chains. And I will say, I think it's hard to explain in great detail on a podcast, but there are a lot of mechanisms we can use to scale and enable a lot of chain-to-chain interaction in a way that's fundamentally trustless. There's a lot more to write about and explain and get into. But in this approach, where we provide safety as the foundational thing, you're only accepting these compromises for liveness. And you know this is inescapable, because fundamentally your users are not going to be willing to accept super high latency. If you want to enable unified liquidity, you need to do it at very low latency. And I think that once you have safety, a huge design space opens up for finding mechanisms and ways to coordinate that offer optimal trade-offs for different cases.
**Speaker C:**
Yeah, I like that. You know what, maybe you're right about my reputation-system comment, but at the end of the day, what we're not sacrificing is safety, and that's what's important here: you'll never sacrifice on that. So Brendan, I know we're running a little low on time, but I can't let you go without talking a bit about Polygon POS and the transition of Polygon POS onto the technology that's so exciting and being deployed right now. I don't want to ask for any sensitive alpha or secrets, but can you just talk through, long term, five-plus, ten-plus years, what is the role of Polygon POS, and how do you see it interacting with, and I don't mean to throw punches here, this much more exciting technology that you're putting out there?
**Speaker A:**
Yeah. So I think the simplest way to put it is that Polygon POS will become a CDK chain, and it will become sort of the flagship validium. And I think there's a ton of room in the market for a chain that has super low cost, is still ZK-proven, and already has a ton of usage, with exchanges onboarded to it, and users and wallets and tools. So I think, at least in the short and medium term, it's really exciting that Polygon POS can join this ecosystem.
**Speaker C:**
Well, that's super interesting. So can you talk through a little more what that transition will look like? I'm interested for Polygon POS, but the real reason I'm interested is that if it's possible to migrate a chain into the CDK and into the AggLayer, then a lot of the questions I have around optimistic rollups specifically, and how they're going to compete when they're just objectively a worse technology, start to get answered. So can you talk through what that transition would look like? I'm assuming the mainnet TVL moves over to a different contract, maybe the proving system changes; just talk through what a transition looks like.
**Speaker A:**
Yeah, that's a really good question. So the first thing is that right now we're not using ZK at all for the POS; we're verifying consensus votes on Ethereum, and this is pretty expensive and annoying. So the first step is to move that to ZK, to verify consensus with ZK. The second step is to verify everything, meaning state validity, with our Type 1 prover. So every transaction that happens in the POS gets verified, and we submit a single proof with every checkpoint showing that everything that happened in the POS was valid. The third point, which you correctly identified, and which I think is an area of undersold complexity, is migrating the bridge over to the unified LxLy bridge that runs with the AggLayer. We have some roadmaps for how we'll do that, which we'll share in the future, but it's something we want to ensure is done very safely, so that no funds are at risk. You alluded to optimistic rollups upgrading to ZK rollups with the Type 1 prover, and that's something I'm personally very excited about. There's this question that comes up: you're working on ZK tech, you have this fundamental technical advantage over optimistic rollups, and you're just going to give it to them in the form of the Type 1 prover, because you've open-sourced the prover, and now every optimistic rollup will be able to take it. I think that misses the beauty of the AggLayer, which is that fundamentally we believe all of this execution-environment tech will become commoditized. It's all open source, anyone will be able to use it, and any advantage will be competed away over time.
But fundamentally, what we believe you should be building is the AggLayer, because we believe no single chain is going to be able to accommodate all of the demand for scalability that will come when crypto reaches mainstream adoption. So from my perspective, I would love it if optimistic rollups upgraded to become ZK rollups and joined the AggLayer. That, again, is a win-win for them: they can use their own sequencers, their own tokens, their own shared sequencers, and make their superchains within the AggLayer. But fundamentally the AggLayer is this required and necessary component for interoperability, and it's something we believe every chain will need to use.
**Speaker C:**
Yeah, and it sounds like if you're starting from scratch, just use the AggLayer, use the CDK, it's fine. But if you're thinking of migrating over, the questions are always about trade-offs, right? If I move into the AggLayer, I understand what I get: composability, liquidity, all this stuff we've been talking about for the last hour. And then the question is, okay, what am I giving up? And I'll put you on the hot seat here: literally the only thing I can think of that you're giving up is that you no longer own the private keys to your L1 smart contract with all the assets, which you probably shouldn't own in the first place. But can you think of any other thing you're giving up by moving over to the AggLayer?
**Speaker A:**
No. I mean, I think maybe there's a narrative cost, but that's purely a short-term thing, and I think it will become completely insignificant as the Polygon ecosystem evolves into being more than just Polygon. I think there are five or ten chains currently queued up to deploy on the AggLayer, and so in the near future it's not even going to be a Polygon ecosystem; it will be this shared ecosystem for Ethereum. But to me, I really struggle to think of another downside besides the one you identified. And very correctly, you said that we shouldn't be relying on multisig security anyway. And truly, if a chain wants to exit the AggLayer, there's nothing preventing them from doing that. Just as they migrated their assets on, it's possible to migrate your assets off. So for us, it's purely a positive-sum, win-win engagement for any chain that wants to join the AggLayer.
**Speaker C:**
Incredible. Well, Brendan, this is the hardest part of my day. I would literally love to keep you here for another two hours and just go through the details of what we're talking about, but for your schedule and the sake of everyone's attention span, I'll cut us off here. So first of all, thank you: thank you for this podcast, and thank you for the transformation you're bringing to Ethereum. We're at the stage right now where you look to your left and you see Ethereum, the mediocre blockchain, and you look to your right and you see the world computer. And when I look right in front of me, there are maybe eight entities that are part of that transformation, and it would be so wrong not to recognize Polygon as one of those eight, if not the head of the table. So thank you on behalf of Etherians, and thank you on behalf of the world computer.
**Speaker A:**
Yeah, thanks Rex. It was really fun.
**Speaker C:**
Of course. Before I let you go, can you just let the audience know where they can find you and where they can find Polygon? And if, after this conversation, they're interested in either deploying a CDK chain or migrating over, how do they start learning about that process?
**Speaker A:**
Yeah, sure. So I think our website has a ton of resources. You can follow us on Twitter at 0xPolygon; I'm, I think, _bfarmer on Twitter. And yeah, reach out to anyone on the team and we'd be really happy to talk about either starting a CDK chain or upgrading an existing EVM chain.
**Speaker C:**
Incredible, man. Well, I'm so excited. I can't wait to see what's coming up in the next year and beyond. And again, man, thank you so much.
**Speaker A:**
Yeah, thanks so much Rex.