**Speaker B:**
Hello, and welcome back to the Strange Water Podcast. Thank you for joining us for another episode. Let's begin with a question: how can participants in a blockchain network be sure that all the data of a newly proposed block is available? First, let's start with why this even matters. If the data is not available, a block might contain malicious transactions which are being hidden by the block producer. For example, without having access to all of the data, a malicious proposer could transfer all of the ETH from the burn address or the deposit contract into his personal wallet, and the Ethereum network would be forced to accept it without even understanding what it was accepting. Under normal circumstances, this isn't a particularly challenging problem. A block producer is required to publish all of the data within the block, and so every other participant can simply download the whole block and verify every transaction manually. While this paradigm works, it quickly runs into scaling challenges. Right now, according to Etherscan, there are roughly 6,000 individual computers participating in the Ethereum network. We can't know that for sure; it's just an estimate. But every 12 seconds a new block is produced, and this new block needs to be sent to every single node on the network. Let's say a block is roughly 1.5 megabytes. So every 12 seconds, roughly 9 gigabytes need to flow through the Ethereum network. Furthermore, each node needs to be able to download, process, and verify each block every 12 seconds, and so block size has direct implications for the minimum requirements of the computers participating in the network. But what if we started to get a little more sophisticated? What if, instead of requiring every node to receive and process every block, we spread this data out? This would not only free up a ton of bandwidth, but would lighten each node's individual load. The result? We can both increase block size, which means throughput, and decrease minimum requirements, which means decentralization. And so, with all that background out of the way, I am so excited to announce today's guest, Prabal Banerjee, the co-founder of the Avail project. We began this intro with a question; Avail was built to answer that question, which we call the data availability problem. During the next hour, you'll get an incredibly deep understanding of how Prabal and the Avail team are building the next generation of data availability infrastructure. And by the end of this episode, you'll begin to see what I saw: that data availability is just the first foothold that Avail will use to unify Web3. One more thing before we begin. Please do not take financial advice from this or any podcast. Ethereum will change the world one day, but you can easily lose all of your money between now and then. And now, Mr. Prabal Banerjee. Prabal, thank you so much for joining us on the Strange Water podcast.
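The back-of-envelope math in that intro can be sketched in a few lines of Python; the node count and block size are the host's rough estimates, not measured protocol constants:

```python
# Rough back-of-envelope for the intro's numbers (host's estimates, not
# measured values): in the naive model, every node downloads every block.
BLOCK_SIZE_MB = 1.5      # assumed average block size
NODE_COUNT = 6_000       # rough node-count estimate quoted above
SLOT_SECONDS = 12        # Ethereum slot time

network_total_mb = BLOCK_SIZE_MB * NODE_COUNT        # data moved per slot, network-wide
per_node_mbps = BLOCK_SIZE_MB * 8 / SLOT_SECONDS     # sustained download per node

print(f"~{network_total_mb / 1000:.1f} GB pushed across the network every slot")
print(f"~{per_node_mbps:.2f} Mbit/s sustained download per node")
```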
**Speaker A:**
Thanks a lot for having me.
**Speaker B:**
Of course. Man, I'm so excited. Right now is this incredible data availability moment, and you kind of can't have that conversation without talking about Avail. So perfect timing and perfect conversation. But man, before we get into it, I'm a huge believer that the most important part of every conversation is the people in it. So with that as a frame, can you tell us a little bit about yourself? How did you find crypto, and what ultimately got you passionate enough about blockchain scaling and data availability that it led you to Avail?
**Speaker A:**
Yeah, I mean, in general I had a pretty academic background, so I was studying computer science, and cryptography was always something which fascinated me. So nothing crazy, I just went ahead and studied more of it. I went into a PhD program, and in that I encountered blockchains. It seemed like the perfect place for me to explore a lot of different fields inside computer science and beyond, including economics, game theory, those kinds of things which always fascinated me in general. So it only seemed wise to pursue blockchains. So I did four years of research there, got completely blue pilled by it; the different facets of it, the trade-offs, completely fascinated me. Then I was not getting the thrill just doing academic research, so I dropped out of the PhD program to join Polygon, which was Matic Network back then. I led research there and then finally spun Avail out of Polygon; it started within Polygon as well. One of the first things that I started working on was data availability. We always knew that DA was going to be one of the most groundbreaking innovations that is going to happen in the blockchain space, and in general we knew that execution scaling needs to go hand in hand with data availability as well. So that was one of the things, and as you know, Polygon was already working on execution scaling; they wanted to scale Ethereum. So yeah, that's how we started this journey. And then it slowly became clear that we needed to make Avail credibly neutral, so we spun things off to create a separate entity with my fellow co-founder Anurag, who is also a co-founder of Polygon. So that is my journey in a nutshell.
**Speaker B:**
And I guess before we really get to Avail, the company, in this moment that we're in, I'd like to talk a little bit about this Polygon time. So we as a blockchain community and as an Ethereum community, this whole strategy towards scaling blockchains has gone through a lot of iterations. Originally we thought we were going to do execution sharding, and when that became a mess we came up with this idea of state channels, which turned into Plasma, which turned into rollups. And only once you get to rollups do you start to understand, oh, there might be a role for something called data availability. So let's rewind back to you as head of research at Polygon, just doing your regular job. Can you help us understand what it was like within Polygon to realize, okay, we're changing paradigms, and that has major implications for Polygon proof of stake? And I guess the question I'm ultimately trying to get to is: why did you, slash Polygon, decide we need to create a data availability solution ourselves, instead of thinking, hey, Danksharding is coming, that's the data availability solution, we should be pushing forward the native Ethereum solution as opposed to creating our own?
**Speaker A:**
Yeah, I think I would like to roll it back a bit more and try to explain my journey more from my research perspective. So first of all, in 2016 when I started looking into this field, the main thing that we were already seeing was the huge transaction fees, right? So that has been a big challenge. And of course one of the major things that we were looking at, even while looking into Bitcoin, was how to scale it, how to make it affordable to normal people in order to achieve the goals of decentralization en masse. Otherwise technology in its nascent form, if it remains a niche, then it's not good enough for every person to use, because without the users, technology does not survive, right? In that regard we were already looking, at least I personally was looking, into lightning channels and such. And there were many different attacks that we were looking at, and how to make it better, make it more secure, more dependable, cheaper. So these kinds of things were already at the top of our mind. Fast forward through a few years of research into these kinds of scaling solutions and fairness and making scalable distributed systems. When I joined Polygon, I wouldn't like to take the credit for this entire shift, because they had already done that journey from 2018 to, let's say, 2020 when I joined: they had tried to make Plasma, they realized that Plasma by itself was going to be hard, and they had to do a PoS. So when I joined it was a Plasma POS chain, so you had both, you had the hybrid nature of it. And that's when I came to see the hard implementational nuances of what a Plasma solution needs to go through. So at that time, Polygon and people like Anurag and JD and others at Polygon were already convinced that it has to be rollup-centric. The only way to scale is through off-chain execution. That was very clear, right? And that is the fulcrum upon which Plasma, Polygon PoS, and Polygon rollups all stand. So all that journey was already there; the thoughts were already there. The point is that if you want to take a roadmap which is off-chain execution and rollup-centric, what do you need to build to be able to support this roadmap? It was pretty clear that rollups were one of the things that we wanted to build. But rollups by themselves will not give us the ability to expand beyond what is already available, and that is why we started the journey with Avail and so on. On the other hand, the Ethereum roadmap, of course, there were different stages of it, again depending on the timelines, and I'm not good with dates, but in general there was, as you said, execution sharding, and then there was the rollup-centric roadmap. Within the rollup-centric roadmap it has also gone through various changes. Even in the Danksharding roadmap, the number of different shards and such has been changed, the different specs have been changed, as should be the case, right? Research evolves, technical specifications evolve, and so on and so forth. So at that time we knew that in the end it would be Ethereum Danksharding, but that it might take a lot of time, because Ethereum is securing a lot of funds and it's very natural for them to take so long to make such a big change. Because if you sit down and look at the amount of change that is needed for Danksharding, it's immense. It touches all the different parts of Ethereum today, across all the clients.
And at that time even client diversity was lower. But at that time it was already clear that it was going to be a big challenge. And that's why, if you want to be at the top of your game, you need to lead the innovation and not just wait for something to happen, right? And Polygon has always kind of tried to do that, to show by shipping a solution. That's mainly, I would say, what the theory was, what the spirit was within the team at that point in time. The best-in-class solution which was there on paper at that point in time was the LazyLedger paper, from the Celestia guys. So, I mean, huge credit to them for speccing that out so well. At that point in time we already knew as a company that we wanted to heavily bet on validity proofs and not fraud proofs. So it became very clear that we don't want a DA solution which is fraud proof secured while we are looking at execution solutions which are secured by validity proofs. It's a mismatch. Like, why should I have execution which is secured by validity proofs, but have the DA which I have to wait some time for? That is one of the few reasons why we were thinking of building a solution of our own. Of course it started as a PoC, right? It even started before that, just as a scribbling on a page. And then we were like, okay, can I build a PoC? Is it hard? How complicated is it? Is it even possible to do? Because the validity proof solutions can be computationally heavy, and we were seeing immense improvements in the ZK space, which made us realize that it is possible. So from there on we slowly, slowly started building more and more of it, and that's when the conviction really hit us.
**Speaker B:**
Got it. So just to summarize what you said, and tell me if I got the vibe right: essentially, even before you arrived at Polygon, it was clear that we were going down, if not the rollup-centric roadmap, a roadmap that scales by dissociating execution from settlement, basically dissociating settlement from the results of settlement. And what you guys realized is: this is the way Ethereum is going. Yes, we understand this is going to be built into the core protocol at some point, but if we wait around for that, our lunch is going to be eaten not only by other companies that are already starting this, but the technology, the understanding of the technology, might change right out from under you in the time that you're waiting for it to happen. And so you guys just decided: if we believe in this, the responsible thing to do is to start building it.
**Speaker A:**
Absolutely, absolutely. And that is exactly the kind of thing where you can see that today a lot of the EVM-centric boundary pushing has already been done by Polygon PoS. The activity that it has seen, the amount of state growth that it has seen, the amount of traction it still gets, and the sheer number of transactions that it processes even today is something admirable. Right, so that has always been the spirit.
**Speaker B:**
So let's talk about what Avail is. And I think that anyone who has found this podcast doesn't need a 101 on what a rollup is and how data availability folds into that. But maybe you could give just two sentences on what the problem statement is, and then I would love to get into how we can understand Avail in contrast to, I'll call it the public option, right? Like Danksharding, as well as some of the other competitors that are coming up. And we don't need to be specific, but just for the audience's knowledge, what I'm kind of referring to here is Celestia or EigenDA or the other projects that are coming down the pipeline. And I'm not going to ask Prabal to directly do a competitive analysis, but can you just help us understand what makes Avail different from just the generic data availability layer?
**Speaker A:**
Yeah. So I think our goal is to unify Web3, right? There has been so much segmentation in this space because there has been so much activity, and some of it has been in silos. So the goal has been to unify Web3. And if we think about what it means to do that, what it means to go ahead and unify something, we have to understand the forces under it, right? The forces right now are that there is a lot of demand and the base layer supply is not enough. That has led to huge congestion, which leads to rollups. And all of us know that rollups are the designated way to scale blockchains. Under that thesis, what we need is abundant blob space, and that is why Avail DA is what we are building first. We need blob space today in order to support all the rollups that are getting built right now, and all the application-specific chains and the explosion of them that we are going to see in the near future. So that's what Avail DA does. But we were also thinking very hard about what it means to give this abundant blob space, how it looks, how the ecosystem looks at that point in time. And that's where we realized we have to solve the fragmentation. The thousands of rollups cannot operate by themselves, right? They cannot be standalone systems. That's not how your typical Web2 works. That's not the experience that we want to give the users, where there is so much headache when you want to do anything beyond a single blockchain. And that's why we are doing Avail Nexus. Avail Nexus is essentially a proof composition system that we are building on top of Avail DA, which any of the rollups can use to seamlessly interoperate between them. So that is what we are trying to do with Avail Nexus. And then we were thinking: yes, all of this builds a very seamlessly integrated system, but we also need a very secure system. And that is why we are building Avail Fusion, which brings in external assets like Bitcoin or Ethereum into the Avail ecosystem so as to significantly improve the security of the system and the entire ecosystem built around it. So that's Avail. I know it's a mouthful, and that's why we can keep talking about it in bits and pieces. But I would like to go directly into the second part of your question, which is: what about other people in this space? The first thing to say is that we are very, very glad that all these players are in the space. That is because, as I just mentioned, all of them inspire us, all of them push us to learn and be better. At the same time, there are significant differences in terms of trade-offs. It's not about who is good and who is bad; there are no absolutes in technology. None of the systems or the people building them are fools to take a bad choice. Everyone is picking a different point in that spectrum, and that is why it's important to understand the trade-offs. One of the primary trade-offs between, let's say, a fraud proof secured system and a validity proof secured system is the user experience, the latency, the verifiability of it. That is where we decided, as I discussed a while back, from the very beginning, that we would go with a validity proof secured system rather than a fraud proof secured system. It just didn't make sense for us to use the other one.
And after that we knew that we had to make a decentralized system, because if we are going to build a very secure base layer, a base layer needs to be heavily decentralized. That is where we made some of the design choices that we have, like a Phragmén election system and nominated proof of stake. Those kinds of things allow us to have, let's say, 1,000 validators today and then maybe tens of thousands of validators in the future, and that too without centralizing the entire stake or the control of the system in the hands of a few; we can keep the system highly decentralized and still be efficient with a large validator set. So those are the kinds of things we were trying to do. And at the same time we also knew that ordering and polynomial commitments and such need to be an enshrined part of the protocol, because if we offload these essential parts off chain, then it creates more problems, because a base layer needs to do sequencing, since that's where the crypto-economic security comes from. So those kinds of things inspired us to choose the trade-off choices that we did in this space. And that's how we are very, very different on the DA layer. Apart from the DA layer, as I say, there are many other things which we want to involve, and that's a discussion that we can get into.
**Speaker B:**
Yeah. So again, just to summarize, and tell me if I got the message of this right: what you're pointing out is that in order to really understand the differences between different layers, you have to really think about the very technical and very specific trade-offs that you're making. And part of that is about technical reasons and performance and whatever. But really what it comes down to is that those choices, which don't sound interesting, like fraud proofs versus validity proofs, ladder up into real user experience. And what you're kind of saying is that Avail is taking the approach of: let's start with the user and then figure out which technology choices we need to make in order to get there, as opposed to, not picking on any names at all, somebody else who might just say: we understand what the technology needs to do, let's build it, and then let's figure out how to make it more compatible for users.
**Speaker A:**
Yeah, absolutely. I mean, that has been the journey from the very beginning, right? It has to be user-centric, and if we do not put the users at the core of this entire technology, then we will keep being a niche, right? One which has all the technical jargon, but doesn't have the essential part of it, which is the user. So of course they were the center point around which we have built the technology. And that is why we are not only building the DA layer. Because if you go and ask a user what a DA layer does, they shouldn't care. Whether they know it or not doesn't matter. They should not care about what the DA layer is or what the security guarantees are, because they need to be abstracted away from it. The essential reason why we are building Avail as a whole, as an ecosystem, is to make it possible for the user to be ignorant about these and still have a meaningful experience.
**Speaker B:**
So you're on crypto Twitter, so I know that you have seen the argument that I'm about to bring up. But the thing I've noticed in the last few weeks is this whole conversation about Solana versus Ethereum. And I've heard the Solana guys say, Solana is focused on user experience and Ethereum is focused on technology. And again, these are Solana maxis saying things about Solana. But in that debate, the question that I kind of want to yell at the podcast that I'm listening to is: man, when you say users, who are you talking about? Because it's easy to say, I want my user to be this guy who's right about to get into meme coin trading and cares about how it feels to sign a transaction, all that stuff. But I don't think that any real humans will ever be touching any of these core protocols directly. And so it begs the question: well, if your users are other protocols, maybe the right thing to be doing is optimizing for protocols and not for human, retail use cases. And so I bring this up to ask you: when you say that Avail is optimizing for users, what does that mean to you?
**Speaker A:**
No, I think you bring in a very pertinent question, right? So there are many different takes to talk about here; we could have maybe a full podcast on just this topic. But let me try to enumerate it a bit. The first thing to say is that any kind of user is a user, right? I don't care whether they are a meme coin trader, whether they are a bot, whether they are extremely crypto savvy, whether they have never touched a computer system. They are all different users. Maybe not my target user, but they are genuine users in this space. A bot is also a genuine user in this space, because if you do not consider bots, you would probably have nothing in the space. And that's the reality, whether you like it or not. And this is in general, right? This is not just about the crypto space, it's also about the trading space in general: you will see a lot of it is bots and automation and things like that. And that's part of it; you have to build for these kinds of systems as well. So all of them matter. The next thing is that it's not about user experience at the protocol level, it's about the user guarantees at the protocol level. Protocols give you guarantees. Users are interacting with applications which abstract these technical guarantees, right? At least that's how I would like to think of it right now. What do I mean? I mean that, for example, when you go to Google, you do not care about the correctness of the auction that is happening when you hit the search term, right? There is an auction happening; you do not care. You might not have studied game theory, you might not have studied auction mechanisms and so on and so forth, but it doesn't matter. It shouldn't matter. What you care about is: okay, if I use Google, I get better search results, let's say, than any other search solution that I use. Or it's the de facto standard; I have never used anything other than Google at all, ever in my life. Whatever the reason, they are all legitimate users, and these users are being secured not by the protocols themselves, but by the app developer who is using various sets of protocols in order to give their users a particular service, right? And that's the world that we want to go into with app chains: we want to see a world where applications are nothing but application-specific chains which are custom tuned to the service that they are offering to their user. And for Avail DA and Avail Nexus, these are the customers we are trying to serve. Our users are actually these app chains, which abstract us away to give the final user an experience that they would want. So again, there is a plethora of hierarchical systems here at play. But I would say that no matter whom we are building it for, we need to give verifiability, because that is the reason why we are building Web3. Otherwise Web2 was good enough. We didn't need another huge investment, both in terms of money and in terms of brain power by the best people in this world, into a separate space if we already had it for 20 years. And that's why verifiability is the key; the kind of assurances you are trying to give to the user is the key. Otherwise Web2 systems were good enough.
**Speaker B:**
Yeah man, for sure. Everything you said just resonated so much. And you're right, we could just do a whole podcast on this. So I will just take the turn back to Avail. But just quickly: thank you, that's such a good answer. And I think, to bring it back to what I said about Solana versus Ethereum: I think for builders specifically, at the end of the day all of our end users are real humans that want to get something done. But I caution anyone, when they're building core protocols that are supposed to be used by other protocols, you shouldn't be thinking about the experience of: okay, then MetaMask needs to pop up, then I need to read through the meta, because that's just not the end game state. And nobody thinks that we're building this so that grandma can be doing direct trades on Ethereum L1 in Uniswap.
**Speaker A:**
Yeah, I mean, that is generally true, but at the same time there are nuances there as well, and again we have to cover both sides of the story. So for example, when we talk about EIPs, there are many EIPs which would definitely improve the ability for these wallets to abstract stuff out, right? And I mean, I was just having this discussion today, and this has been happening so often: the experience of transacting on any blockchain is just not good enough, even today. No matter what kind of tools you use, if you want to do things very securely, then it comes down to a UX versus security trade-off at the very end. And that's why some of the improvements at the protocol level also need to enable applications to be built on top that can help the end user whom they are trying to reach. A Web2 example here would be: yes, you can build a service which just exposes APIs, but if you do not have a rich enough set of APIs, then no good application is going to get built on top, because of the lack of APIs.
**Speaker B:**
No, no, totally, totally fair. And again, we could keep going; I'm just going to bring us back to Avail. So back to Avail: we talked about basically three pillars of the business that you're building: Avail DA, Avail Nexus and Avail Forge. Right, Fusion. Oh, Fusion. Fusion, yeah. Okay, so let's walk through each one. And I think that, again, we've gestured enough at data availability; people understand what data availability is. Hopefully. I'll put some links in the show notes in case you don't. But can you talk to me a little bit about the trade-offs, the specific trade-offs an app chain is making when they're choosing to use Avail as a data availability layer as opposed to the blobs that are within Ethereum itself? And I think that is the important question: we have the public option that is actually built into the crypto-economic security of Ethereum. And so my question to you is: if I'm an app chain and I want to deploy this new system that leverages data availability, how would you sell me on choosing Avail as opposed to the built-in option of Ethereum?
**Speaker A:**
Yeah, I think again it comes down to the guarantees that you would want to give and the features that you would want to expose. So one of the first things that I keep on saying is verifiability, and that is not because of what we have been building and so on and so forth, but because it's one of the most important parts around which we built the entire ecosystem. By verifiability I mean that is where the trust minimized interactions come from, and without trust minimization we have almost nothing in this field. Now if you think about using Ethereum versus using Avail for data availability: right now, as it stands with proto-danksharding already implemented, you would have to rely on the Ethereum validator set in order to know that the data was actually kept available, or you can run a full node of your own to know that. So either you have to host a full node or rely on the economic guarantee of Ethereum. On Avail we already have a working light client network which you can use to sample and effectively know, by yourself, whether the data is available or not. And data availability sampling is the fulcrum on which all of the ecosystem that we are trying to build on Avail rests. So what does it mean? It means that you, on your smartwatch, can essentially go and run a program which samples and knows whether the data is actually available or not, and you don't have to rely on anyone else, no matter who, in order to know whether the data is available. So that's one thing. And then the second thing is, again, we already use the technology that Danksharding is going to use in the future, and we do even the last parts which are still optional in the Danksharding roadmap, right? So the things that we have built over the last two, three years are essentially very crucial: the peer-to-peer network of light clients that you need, which allows you to effectively sample and keep the data published even if a supermajority of the validators tries to hold it back in the end. So those kinds of guarantees are going to be there on Avail. Then there are of course things about size and cost and things like that. Right now, 128 KB blobs have to be used all at once, right? You have to use the entirety of it. Even if you want to use, let's say, 1 KB of DA, you still have to use 128 KB and you still pay for it. And there are only three blobs available on average, six at maximum, which means that the effective limit you will have is something like 350 to 400 KB. On the other hand, at Avail we have a 2 MB block size today, and we can increase it, because we are empowered by data availability sampling; we can increase the block size to a much, much larger size. We have tested up to 128 MB blocks, and that works without any problems with our optimizations. So those kinds of things immediately improve what we can offer. And then, not to speak of Avail Nexus, where you can talk to other rollups which use Avail and beyond, to seamlessly talk to one another, do message passing in a trust minimized manner and so on and so forth, and also have very high security. So that's the overall pitch that I would say.
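As a rough sketch in Python, using the round numbers quoted above (the speaker's figures, treated here as approximations rather than exact protocol constants), the blob-space comparison looks like this:

```python
# Back-of-envelope comparison of blob space per block, using the figures
# quoted in the conversation above (approximate, not exact protocol constants).
BLOB_SIZE_KB = 128          # EIP-4844 blob size
TARGET_BLOBS = 3            # target blobs per block
MAX_BLOBS = 6               # maximum blobs per block
AVAIL_BLOCK_KB = 2 * 1024   # ~2 MB Avail block size mentioned above

eth_target_kb = BLOB_SIZE_KB * TARGET_BLOBS   # ~384 KB per block on average
eth_max_kb = BLOB_SIZE_KB * MAX_BLOBS         # ~768 KB per block at the cap

print(f"4844 target blob space per block: ~{eth_target_kb} KB (max ~{eth_max_kb} KB)")
print(f"Avail block size quoted here:     ~{AVAIL_BLOCK_KB} KB")
```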
**Speaker B:**
Awesome. So I want to get to Nexus in a moment, but while we're still on just the straight-up data availability side. First of all, let me just make sure I understand why data availability sampling matters. It's essentially because without it, in the proto-danksharding that we have today in Ethereum, in order to verify that these blobs are available you, as you said, either need to download them yourself, like run a full node, which I do in my fiancée's closet. Let me tell you, everyone out there, it's a pain in the ass; it's a big deal, right? So you either need to do that, or you need to trust someone like Infura and Alchemy, and then on top of that, they have the whole blob and they need to send you the whole blob, and that's how you verify that it's there. And what data availability sampling says is: I know that every computer in this network is supposed to have this blob, but I don't need to see the whole blob to believe that. Let's say there's a thousand pieces; I'm just going to call like 50 random people and ask each for one of those thousand pieces. And if all 50 come back and verify, I can assume the entire blob, the entire thousand, is there. By leveraging the power of probability, you're able to, like I said, check 50 instead of a thousand, and that's your scaling factor. So first of all, is that correct?
**Speaker A:**
Yeah, absolutely, absolutely. You put it really well, I think, better than me.
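A minimal sketch of that probability argument, with illustrative numbers rather than Avail's actual sampling parameters:

```python
# Illustrative model of the sampling argument above (not Avail's exact
# parameters): if a block is split into `total` erasure-coded chunks and a
# malicious producer withholds `hidden` of them, what is the chance that
# `samples` random queries all happen to hit available chunks?
from math import comb

def miss_probability(total: int, hidden: int, samples: int) -> float:
    """Probability that `samples` distinct random chunks see no withheld chunk."""
    return comb(total - hidden, samples) / comb(total, samples)

# 1,000 chunks, half withheld (roughly what erasure coding forces an attacker
# to withhold before the block becomes unrecoverable), 50 samples:
p = miss_probability(total=1000, hidden=500, samples=50)
print(f"chance the withholding goes undetected: {p:.2e}")  # on the order of 1e-16
```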
**Speaker B:**
Well, thank you, thank you. And so what you're saying is, you're correctly pointing out that that's in the roadmap, and a little bit of a pipe dream right now, for Danksharding. But Avail, with the power of a small team that moves quick and the expertise that you're bringing, is able to fast forward to the end of the Danksharding roadmap, then go two chapters further and think about what other features could be built on top of this thing, and then make that available today for developers to build on today, as opposed to Danksharding, which hopefully we'll have within the decade.
**Speaker A:**
Yeah, absolutely. And again, as I said, Ethereum is a highly secure network, right? And we want to do our job in order to assist it, in order to show what the bottlenecks can be. And as you know, it's extremely heartwarming to see people talk about blobs, which we have been trying to push for so long, because blobs are the essential data structure that we use inside Avail for data publishing. So it's very good to see that. And that is exactly the reason why we are trying to push for it and show the world that there is so much power in this technology that can be built on, and that's why it will unlock the real power of blockchain systems.
**Speaker B:**
So, last question on straight-up data availability. We have spent the last 40 minutes essentially talking about how the value of crypto is this credibly neutral space, and how do we get this credibly neutral space? It's through these technologies that provide trust minimization. And I am so totally on board for the rollup-centric roadmap, and how do we supercharge a rollup-centric roadmap? Data availability. The thing that gets me a little bit caught up, like what are we doing here, is the data availability paradigm. I don't know what your specific configuration is, but Avail is not supposed to hold these blobs in perpetuity. They're supposed to hold them long enough so that they're available to indexers and long-term storage, so that someone can come and pick them up and hold them in perpetuity. And again, this makes sense from a scaling standpoint. But what really starts to freak me out is that Ethereum has now transformed into something where all of the necessary components to operate it are not internal, and they're not under its control, and they're not even really affected by its incentives or slashing or anything. And I've heard a thousand people give a thousand different answers, which basically boil down to: well, Etherscan is just going to suck up all these blobs and that's where they'll live. But I would love to hear from someone actually building these systems: how do you think about, not the data availability side, but the long-term data storage side, and how to build that in a way that's congruent with the trust minimization of Ethereum?
**Speaker A:**
Yeah, I think that's a very fair point. So there are, again, two ways to answer it. First I will give the popular answer, which you will not like, and then the second one would be a less popular answer which would probably be a bit more technical. The first thing is, essentially, what you are saying is that Ethereum has network effects which don't allow it to arbitrarily do something which breaks all the tooling that has been built around it. And on the other hand, the flip side of that is that the tooling is so mature that it can support the network. The first way to answer it is that we are aiming to have that network effect, so that the tooling around it becomes so mature that you can rely on that tooling to keep the data available and you do not need to yourself. The tooling includes archive nodes and block explorers and so on and so forth. The second way to answer it is that the rollups are the ones who should keep the data available for a long time. That's one way to talk about it: whose data is it? What is the user interacting with? And that is the reason why the applications need to keep the data and not the DA layer. The DA layer is essentially doing only one job, which is publishing and making sure that the author or the publisher of the data is not hiding significant pieces of it, or any piece of it. So that's the role of the DA layer, right? These two are the popular answers, which might convince a few and might not convince others. The other way that we are trying to address this problem is that we cannot just assume that the tooling will be built, right? Because we are so new to this space, we have to actively work on making sure that the tooling is there. So there are two things that we are actively doing. First, for a significant time, while the tooling doesn't arise, we will keep the data ourselves: we will run archive nodes just like any protocol does. And this is across all protocols; even Ethereum themselves run archive nodes, no matter how much tooling is out there. So we will keep on doing that. That's one. The second thing we are doing is actively working with projects like Filecoin and Arweave in order to take that published data and put it into storage, putting it with some of these storage providers for long-term storage. The other thing that we are also doing here is the trust minimization aspect. The trust minimized way of doing this is essentially: what is the digest against which you are going to query, and where are you getting this digest from? And that is where our headers contain the commitments to the data. Any blockchain is defined by its headers; you can never delete the headers and say, oh, I have forgotten the headers, and still have any confidence in the system. And that is why, if you have the headers, you always have a commitment to the data for which the block was built, and the publication guarantees that it had. Hence, after you pull the data back, you can verify exactly that the data that was published is the data that you get back. And these are the same guarantees that we use in the light client network, as you explained yourself. I, as a light client, pull in 50 different samples from others and then verify. How do I verify? I do not verify by downloading the entire block and checking against the entire block.
I verify using a proof of inclusion, and the same thing can be done even if I have the data in long-term storage which is then retrieved back later and checked. So again, I'm giving a very, very long answer, because this actually is a real problem, because we are so early in this space. And Ethereum, I mean, many people still don't know that Ethereum is going to delete blob data after 18 days, right? And they always get amazed. And that's how it's going to look: you will probably not even know, because the tooling around it is going to keep it secure. But we are trying to actively find the right balance between what we are doing and how the ecosystem matures.
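The "keep only the header, verify anything you pull back later" pattern can be sketched with a plain Merkle tree; this is only an illustration of the shape of the check (as mentioned earlier in the conversation, Avail's headers actually carry polynomial commitments, not this toy construction):

```python
# Toy illustration: a header commits to a data root; any chunk retrieved later
# from long-term storage can be checked against that root with a short proof.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes from leaf to root; the bool marks a right-hand sibling."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, proof: list[tuple[bytes, bool]]) -> bool:
    node = h(leaf)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

chunks = [b"chunk-0", b"chunk-1", b"chunk-2", b"chunk-3"]
root = merkle_root(chunks)                 # what the header would commit to
proof = inclusion_proof(chunks, 2)
print(verify(root, b"chunk-2", proof))     # True: retrieved data matches the header commitment
```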
**Speaker B:**
So everything that you said about the technical solutions and kind of the big-picture strategic things that the Avail foundation can do, like working with someone like an Arweave or a Filecoin, something like that, makes so much sense. But I want to highlight what you said about maybe pushing this onto the applications: it's their data, they're the ones responsible for it. Because I just want to highlight this, you're so right, right? Like today, Ethereum, you know, with some caveats, is trust minimized. But every single time you go on a website, the website is acting as the middleman between you and Ethereum, and they're just sending you a transaction that basically none of us know how to read, and then we click the button that says sign and it goes through. And so there's this layer between us, called the applications, that none of us really acknowledge as being fully trusted. Even the most decentralized Uniswap front ends, GMX front ends, everything. And kind of what you're saying is: hey, we're already trusting those, and they're the ones generating the data. What it means to be trust minimized is that the base protocol can verify whatever these applications are saying, but it's still on them to hold it.
**Speaker A:**
Yeah, absolutely correct. I just want to add one subtle point here, because you already touched upon it: when these front ends load the data, when they query the data, they query through centralized endpoints, which means they almost always don't even verify that the responses they are getting are actually included in the blockchain. So it's not that Infura or Alchemy would, but if they chose to go on a different fork, all the user applications would be on that fork, because they do not check the finality guarantees of Ethereum. On the other hand, our light client interactions are built in a way where every interaction that you do is actually checked against a finalized header. That is the commitment against which you check. And that is why it is much, much more trust minimized than the experiences that you are getting today, without even going through the centralization risks.
**Speaker B:**
Yeah. Yeah man, awesome. So again, we could continue on this for another whole podcast. And I'm just going to be honest with you, I don't even think we're going to make it to Fusion. But let's move to Nexus right now. And I think what is so interesting, and push back if you don't like this term, about building an interoperability solution within a data availability network is that data availability is inherently about creating records of transactions that are verifiable, right? And so I would love to hear from you a little bit about how Avail Nexus works, and I really want to pick apart: is this just kind of a bridge, in which we're using the state of each chain to verify certain things, and once we're able to verify, we just release assets from a smart contract? Or is this really about looking at all of the blob data that comes in and then doing the actual analysis to ensure that these things are valid, and once you can see that they're valid, like fraud proofs, validity proofs, who cares, that data is good and so you can facilitate that transaction? So I guess the question is: can you explain to us how Avail Nexus works?
**Speaker A:**
I have found the perfect answer, because you have basically given the two ends of the spectrum, and my answer is: it's the best of both worlds, and I will try to explain why. So the essential problem with bridges is that one chain needs to trust the other chain in order to talk to it, and that is why there have been so many problems with bridges, right? That is because one chain has no view of the other chain and has to trust the supermajority of the other chain. There are many bridges which are way inferior to this setup, but let's at least take the status quo, which is running light clients of one another. Now the problem is that there are security deltas here. What I mean by that is: take two Cosmos chains with very different security guarantees, because of the very different amounts of stake that they hold. They wouldn't necessarily want to talk to each other, because there is such a gulf in security, which means that an asset, when bridged to the other chain, which has a much, much smaller economic guarantee, can get manipulated, because it's so easy to overtake that chain, manipulate assets inside, and then bridge them back. So that's the tension when you are doing bridging between two different chains. On the other hand, what you said also makes sense: because you have the data publication in the same layer, you can look into the transactional data, see whether the transactions were done correctly, and then infer on top of it. What you were essentially trying to say is that they have shared security, because they actually share the security layer. But when I say it is the best of both worlds, I want to emphasize the point that they need not look at the entire transactional data. They can take only the validity proof and the commitment to the transactional data, and that is good enough. And that is the superpower: you do not need to download all the transactional data. You just need to do data availability sampling on top, to know that all the data that was committed to was available, and then you need a very succinct validity proof that the execution was done correctly. And that is powerful enough. This gives you the entire view of the other chain which is using the same DA layer, because they share the same security zone, so to say. So that is essentially how Nexus works. When two rollups using Avail for DA go through Nexus, one chain can verify the entirety of the other chain by verifying a single validity proof, and vice versa, without security trade-offs. So that is essentially the proof composition of Nexus here. Does that make some sense?
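A purely conceptual sketch of that check, with every name and structure hypothetical rather than Avail's actual API: chain A accepts chain B's new state if B's data is sampled as available on the shared DA layer and a single succinct validity proof ties B's new state root to that committed data.

```python
# Conceptual sketch only; names, shapes, and the placeholder checks below are
# hypothetical, not Avail's real interfaces.
from dataclasses import dataclass

@dataclass
class FinalizedDAHeader:        # header of the shared DA layer (hypothetical shape)
    block_number: int
    data_root: bytes            # commitment to the blob data in this block

@dataclass
class RollupClaim:              # what the counterparty rollup asserts (hypothetical)
    prev_state_root: bytes
    new_state_root: bytes
    validity_proof: bytes       # succinct proof that execution was done correctly

def data_is_available(data_root: bytes) -> bool:
    # Placeholder for the light-client sampling check discussed earlier.
    return True

def verify_validity_proof(proof: bytes, public_inputs: tuple) -> bool:
    # Placeholder for a succinct proof verifier (SNARK/STARK).
    return True

def accept_counterparty_state(header: FinalizedDAHeader, claim: RollupClaim) -> bool:
    """Accept the other rollup's state without downloading its transactions."""
    if not data_is_available(header.data_root):
        return False
    return verify_validity_proof(
        claim.validity_proof,
        public_inputs=(header.data_root, claim.prev_state_root, claim.new_state_root),
    )
```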
**Speaker B:**
No, no, that makes so much sense. And I want to find the most delicate way to ask this question, and sorry to ask it, because I know that you guys are completely Polygon aligned and believers, obviously. But can you just help me understand: we had Brendan Farmer on this podcast and he talked to us about the AggLayer. So can you help me understand how these two products interact? Are they serving the same purpose? Or, what it sounds like is that maybe they're kind of completely different technologies, where the AggLayer is really focused on sharing value on Ethereum L1 specifically, whereas Avail kind of has the same functionality if you're just looking at Ethereum, but the point of Avail is to expand that far beyond Ethereum.
**Speaker A:**
I would say a yes and a no. Yes, the scope is much larger, and the no part is that it's basically complementary. So they are not completely different systems that are going to sit outside each other or not talk to each other. The idea here is essentially that when you look at a Polygon AggLayer, or StarkWare fractal scaling, or Arbitrum Orbit, or the Optimism sidechain, Superchain, sorry, and these kinds of entities which are coming up, zkSync's Hyperchains, all of them talk about similar operations, right? They talk about composing things, either between the rollups which use their stack or beyond. But the fact of the matter is that this leads to fragmentation in that space. And that is where we think we can act as complementary to these solutions, because we are credibly neutral. We can take those proofs, compose them, and make them talk to each other irrespective of what ecosystem they belong to, whether that be Ethereum L2s or beyond. So that's the main power of Nexus. And at the same time, we can also take, let's say, an intermediate proof from an AggLayer and then work on top of it, because that's the only way to grow, right? And that is the reason why we want to focus on the unification aspect of it, because that is extremely important for us.
**Speaker B:**
Okay, perfect. I think I very much understand, because when we were talking to Brendan, and we've talked to a few interop-dedicated companies, my question always boils down to: it seems very clear that in order to make this multichain future, we just need to minimize the amount of transactions on Ethereum L1. And how are we going to do that? Basically, every well funded, huge company is going to create the shared value contract and then use something like ZK to create an accounting system on top of it. And my question for that was always: okay, so is the way that DeFi or crypto is going to look essentially that we have like 8 giant honeypots on Ethereum L1 that don't talk to each other? And if you want to move from an Arbitrum Orbit chain to a StarkWare, Starknet chain, you have to go back through L1. And I think what you're talking about is that, because in all of these superchain ecosystems there can be rollups that opt into Avail, that kind of gives you these hooks into each of these giant pools of capital, which will allow you to uniquely transfer value between them in a way that somebody like Polygon directly would never be able to, because StarkWare would be like: no, why would we just let you be the winner when we want to be the winner?
**Speaker A:**
Yeah, I mean, and we have had this conversation, for example, with all these ecosystems when we were within Polygon: will you use us as a DA layer? The answer was no. Right. So it was very clear, and the direction is pretty clear there. On the other hand, I would also want to say that we can, because the DA layer has to be part of the security guarantee. A guarantee cannot be given by just the validity proof itself. A validity proof is not a silver bullet that solves all scaling problems. A validity proof needs to be checked against a commitment, and the commitment comes from the DA layer. Hence, to give complete security to the end user, you need to have those proofs checked against the commitments, which only a DA layer can provide. And that's why it is not only complementary, it's actually a lot more than what they can offer across all these ecosystems, because they either have to rely on the DA guarantee given by the base layer, Ethereum, or they have to rely on an economic guarantee given by some other DA layer, but they cannot give it in a trust minimized manner.
**Speaker B:**
Yeah. And I think, just to take a step back and tie this whole conversation together. And again, you are so interesting and your project's so amazing, I'm annoyed that we didn't even get the time to cover all of it. But just to take a step back, the thing that I'm really taking away from this conversation: coming into it, I'm not going to lie to you, my question was, okay, if you're going to use data availability you're supposed to use Danksharding, and everything else is "what if we took this thing that's part of the trust minimized system, and then a bunch of private people made the same thing and said choose us." And what I'm realizing from this conversation is that data availability is this kind of interesting layer of infrastructure which, as you build out just the basic functionality, puts you in a position to tie together so many of the other big hairy problems that are in front of us. So yes, you need to say data availability, because that's the project you've been working on for four years, and that's the one that you can get VC funding for, and that's the one that people understand now. But after this conversation, it wouldn't surprise me if in 10 years we look back at Avail and the story will be: yeah, they started with data availability, but that's not what they are. And I guess I would love to leave you with just any final words. Does that resonate with you? When you think of the endgame of Avail, what are you envisioning?
**Speaker A:**
As I began by saying, we want to unify Web3, and that is the goal. We have been fighting amongst each other, we have been trying to push the boundaries within those silos, and it's time that we get mainstream adoption by being more collaborative, by being a more united force. And that's what we are trying to build here. And yeah, I would just like to say that our incentivized testnet is done. We probably will have, by the time you publish this, a few good announcements to make. But at the same time our mainnet is launching very, very, very soon, and we hope to see you all there using it.
**Speaker B:**
Yes, well, you cut out in front of me a little bit, and I'm pretty sure there are going to be no major announcements by the time this is released, because this will be released tomorrow. But with that: everyone listening right now, you need to get on Twitter and follow Avail and understand that big things are coming. So with that, Prabal, can you just help the audience understand where they can find you on Twitter, where they can find Avail, and, if they're excited about building in this modular, truly integrated Web3 world, what's the best way to get started?
**Speaker A:**
Yeah, I would say, again, I'm Prabal Banerjee on Twitter. Avail Project is the official Twitter handle; go follow us. The best way to get started is just to head over to our Discord and start asking questions. You can even DM me or anyone on our team and we'll be glad to talk it through and help you in whatever way we can. And at the same time, if you are excited about building in this space, anything which is rollup-centric or application-specific, anything where we can try to help, we would be happy to engage with you. So that's us, and we hope to see you guys on Discord and on Twitter, of course.
**Speaker B:**
Man, this is so awesome. And let's hang up right now so that we can start talking about scheduling to come back on and hit all the things we didn't get to talk about, and so many more questions that opened up. So, Prabal, thank you, and have a good rest of your day.
**Speaker A:**
Thanks a lot for having me. It was really fun. Thank you again, and have a good day, Sam.