**Speaker B:**
Hello and welcome back to the Strange Water Podcast. Thank you for joining us for another episode. At this point in the history of the development of Ethereum, we're pretty confident about certain things. For example, we can be pretty assured that Ethereum will scale by following the rollup-centric roadmap, and that the Ethereum endgame will have many different blockchains somehow attached to the core backbone of Ethereum mainnet. And while we can already foresee that this paradigm is very powerful, it introduces some of its own problems. Namely, now we've got to communicate and coordinate amongst all of these L2s. In most circles, the solution to the L2 communication and coordination problem has already been solved: we've got oracles. Using a combination of off-chain compute and on-chain contracts, oracles are able to pass messages as well as assets across and through blockchains. It's just a matter of building out this layer of middleware as we get more and more L2s. Right? Let's zoom out for a minute and really think about what's special about crypto. Here's a quote from the Bitcoin white paper: "What is needed is an electronic payment system based on cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party." Well, here's a question for you. Does an oracle-based solution really sound that congruent with the principles Satoshi left us with? We've baked cryptography directly into the system in order to give us trustlessness. Why do we need oracles to pass around the truth when we should be able to access and verify it directly? Today we have the perfect guest to answer these questions and more: Marcelo Bardas, co-founder of Herodotus. Herodotus is a service that provides developers and dapps with the tools they need to access and verify blockchain state directly.
As we discussed in this episode, imagine that Herodotus provides a queryable database for L2s to directly access Ethereum mainnet as well as other L2s. This episode is jam-packed with huge ideas about where this industry is going, from how this technology enables a new class of applications to how the inevitable prover markets are going to evolve over time. Anyone interested in cryptography and distributed computing will take away so much from this episode. One more thing before we begin: please do not take financial advice from this or any podcast. Ethereum will change the world one day, but you can easily lose all of your money between now and then. Okay, let's get to the show. So before we get into Herodotus and, you know, proof land, I'm a huge believer that the most important part of every conversation is the people in it. So with that as kind of a starter, can you give us a little bit about who you are, how you found crypto, and, I guess, why you didn't run away when you found crypto?
**Speaker A:**
Sure. So, hey, I'm Marcelo. I think I got into crypto probably 2016, 2015 actually, and surprisingly from the tech side, which is pretty weird, especially back then. I was initially, of course, pretty interested in bitcoin and how it actually works. I was looking for some libraries in C, and there you go: I found Bitcoin Core, and that's when I saw what this bitcoin thing is about. Then I got into the Ethereum rabbit hole, and here I am.
**Speaker B:**
Yeah, you actually are kind of a rare breed here because most of us had our brains broken by finance, realized that this was a better way, and then just like fell further and further down the rabbit hole. So I guess like going back to that time when you first heard about bitcoin, what was special about it?
**Speaker A:**
So initially, when I first heard about bitcoin as, like, digital money, I was like, okay, how does it work? When you hear about something very exciting, you tend to make some assumptions, and my assumption was that someone just exposed a web server and is adding records to a database. I wasn't super interested because of that. And then, when I actually understood how it works, I was like: you can have consensus, you can have redundancy, it actually works, and you can have a system that's completely independent and doesn't have a single operator. That was completely fascinating to me, and I felt super ignorant for just not getting that idea before.
**Speaker B:**
So can you really briefly walk us through, let's say from the time you joined crypto and found Bitcoin Core to, I don't know, the peak of the last cycle: how did you really start to get involved, and eventually how did you figure out that, you know, storage proofs and validity proofs are the next big unlock?
**Speaker A:**
So I got pretty deep into the Ethereum rabbit hole and, of course, after some time realized that Ethereum is great but doesn't really scale. Back then everyone was talking about rollups, layer twos, Plasma, et cetera, and I got pretty curious about how these systems could work. It seemed like optimistic rollups were the way to go, but just a few people were talking about ZK rollups, and that caught my attention. Like, what even is ZK? In 2018 I saw that, okay, there's this crazy idea: you can prove some execution. Awesome. Then I was just reading about it, trying to understand even the idea behind it. And then those systems got closer to production. "Production" is a big word, but as users we had this vision that maybe we'd have cheap fees. I think it was the end of 2019, maybe early 2020, when I said, okay, amazing, we have these L2s on the horizon, but what is an L2, really? Okay, pretty controversial question, but I was like, okay, it doesn't really extend Ethereum. You just create a new chain that inherits the security of Ethereum, which is great, but still, if you deploy an application on an L2, it's a siloed ecosystem. You cannot really access L1, you cannot access other L2s, and so on. Then I thought, okay, but hold on a second. Ethereum is constructed in such a way that if we can enforce messages between the L1 and the L2 in a valid way, we can use the fact that everything on Ethereum is committed to in a cryptographic manner. That also means the L2 inherits the ability to read that state. And that's how I got into storage proofs and realized: wait, maybe we can even take it a step further and allow accessing arbitrary historical data, so that on these L2s, which provide a lot of compute, you can read the state of L1 and other L2s, access the historical state, and even go further and access not only the state but also transaction receipts, block headers, et cetera.
**Speaker B:**
So I guess we'll unpack everything you just said as we talk about what Herodotus is. But just to make sure I understand, the thought you were having is: Ethereum is awesome, but it's slow and barely works and is expensive. We have this idea around L2 and rollup-based scaling, but we're kind of just relying on inheriting the security of ETH and then walking away intellectually from the problem. And your thought was: even if we have this security, we're still creating all these silos. Your aha moment was that we have all these constructions and all these pieces, but someone needs to figure out how to connect them, so that the L2 vision isn't just the L1 multi-chain vision with Ethereum branding. Is that right? Yeah. Cool. All right, so let's talk about that. I guess I'll put this in your hands a little bit to help the audience really understand what you're building. What makes more sense: should we talk about what storage proofs are, or should we talk about what Herodotus is?
**Speaker A:**
Maybe let's start with Herodotus. So Herodotus is the company we're building. We're basically building infrastructure and tooling around storage proofs. That's our main mission, because storage proofs can get quite complex, and using them at scale requires infrastructure; and you want them to be used at scale, because then they get substantially cheaper. So now, what are storage proofs? Storage proofs, we call them a weird combination of validity proofs and, in most cases, Merkle inclusion proofs, that basically allow you to attest to the validity of some data and cryptographically prove that something ever existed in a database. Because the blockchain, in reality, is just a database in some sense. So by executing a specific workflow and adding a lot of validity proofs and other proofs to it, you end up with a storage proof.
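To make the inclusion-proof half of that combination concrete, here is a minimal sketch in Python. It is an illustration only: it uses SHA-256 as a stand-in for Ethereum's keccak256 and a plain binary Merkle tree rather than the Merkle-Patricia trie Ethereum actually uses; all names are made up for the example.

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in hash; Ethereum actually uses keccak256.
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the root of a simple binary Merkle tree."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to prove leaves[index] is included."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1                # sibling flips the lowest bit
        proof.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(root, leaf, proof):
    """Re-hash from the leaf up; inclusion holds iff we reach the committed root."""
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root
```

The verifier only needs the root (a single 32-byte commitment, like a state root posted on L1) and a logarithmic-size proof, which is why commitments alone are enough to prove that data existed.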
**Speaker B:**
So I guess if you have maybe a layman's understanding of how Ethereum works, and how the EVM and the whole blockchain system work, your first thought is: I kind of thought the whole point of the block headers and the Merkle trees that exist directly in the blockchain was to give us these proving systems inherent in Ethereum. So can you talk a little bit about why, even though we do have block headers as part of the protocol, that is not sufficient to get, let's say, ZK- or cryptography-level assurances when you're operating in environments outside of the L1?
**Speaker A:**
Sure. So maybe first off, what is the block header? The block header is basically, to put it simply, a list that contains the most important fields from a specific block, such as the block number, the state root, the timestamp, et cetera. In our case, we mostly care about the state root. But of course, the state root alone is not enough to access any piece of data, because you need to first prove that an account exists, and then within this account you need to prove that a specific storage slot exists. But again, a storage slot is just a storage slot; you probably want to access some variables, so there is another layer of abstraction that you need to add on top of that. And then if you want to access historical data, it implies that you have to access historical block headers. And because a blockchain is a linked list, you basically have to traverse every single block in between to reach whatever information you want, and that can get pretty expensive on the EVM. That's where ZK steps in, offloading the computation off-chain. That's why you should care.
**Speaker B:**
And then, so how does Herodotus work, I guess in the base case, before we talk about bells and whistles? Are you building an off-chain prover that is watching the Ethereum L1 progress, and then each time there's a new block it's generating a separate set of proofs that are easier to work with? Talk to me a little bit about what Herodotus is actually doing.
**Speaker A:**
Okay, so at Herodotus we expose, like I said, a set of tools and infrastructure where developers can submit so-called queries, saying: hey, I'm on, for example, zkSync or Starknet, and I want to access this information from Ethereum; or I'm on Ethereum and I want to access this information from Ethereum. They define a query saying, hey, I want this information from this block, et cetera. They send us such a query through the API, we process it, execute the workflow, generate the proofs, et cetera, and notify them when the information is on-chain and they can access it. So that's how it works. But of course, an API is not enough, because as a regular Solidity developer or Cairo developer, you don't want to think about a system where you have a user, this user interacts with the contract, but before he interacts with the contract you first need to hit an API, et cetera. For that reason we're building something called Herodotus Turbo, which is a fully on-chain interface that feels synchronous. In reality everything is asynchronous under the hood, but it makes you feel like you have an enriched version of the SLOAD opcode that allows you to access not only the current state, but also the state of other rollups, other domains; and you also have this additional parameter, the block number. So that's what Herodotus is.
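A rough sketch of the query lifecycle described here, with an in-memory stub standing in for the real API, prover, and chains (every class, method, and status name is hypothetical, not the actual Herodotus interface):

```python
import uuid
from enum import Enum

class Status(Enum):
    PENDING = "pending"        # query accepted, proof not yet generated
    DELIVERED = "delivered"    # result and proof settled on the destination chain

class StorageProofService:
    """In-memory stand-in for the submit/prove/notify workflow."""
    def __init__(self, l1_state):
        # l1_state maps (address, slot, block_number) -> value
        self._l1 = l1_state
        self._queries = {}

    def submit_query(self, dest_chain, address, slot, block_number):
        qid = str(uuid.uuid4())
        self._queries[qid] = {
            "dest": dest_chain,
            "key": (address, slot, block_number),
            "status": Status.PENDING,
            "result": None,
        }
        return qid

    def process(self, qid):
        # Stands in for proof generation plus on-chain settlement.
        q = self._queries[qid]
        q["result"] = self._l1[q["key"]]
        q["status"] = Status.DELIVERED

    def status(self, qid):
        return self._queries[qid]["status"]

    def result(self, qid):
        q = self._queries[qid]
        assert q["status"] is Status.DELIVERED, "proof not on-chain yet"
        return q["result"]
```

The two-phase shape (submit, then poll until delivered) is exactly the friction the Turbo interface is described as hiding behind a synchronous-feeling read.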
**Speaker B:**
Okay, got it. So what you're saying is that, as it works today, an off-chain system on your side is watching all of the chains and building these proofs. On the developer side, it's a completely off-chain API: they send a request, you process the request and get the information they need, and then you feed them back, off-chain, the proof and the data they asked for, so that they can verify that the data matches up with the Ethereum L1 they have in front of them. Right? And then phase two. What you're saying is: while this construction is totally valid, this is the magic of ZK. It breaks down a lot of the paradigms we have gotten used to, like "if you're within the EVM, don't leave the EVM." And so you're building Herodotus Turbo, which is, I think, the exact same API but as a smart contract interface. Is that correct?
**Speaker A:**
In some sense, because there are multiple ways you can bind together the on-chain world with the off-chain world, for example by just making asynchronous calls. So you can call a contract and emit an event; you catch that event off-chain and start the query, and after some time the result is on-chain. But imagine you're building an application based on such infrastructure. That implies the user basically has to interact twice with your system, because first he needs to request the information that will eventually be proven, and then, once it's on-chain, he has to somehow consume it. So you can use Herodotus that way, but Herodotus Turbo is something that completely abstracts that away: the user just clicks, and all of a sudden the transaction already has this data injected.
**Speaker B:**
When you think about this construction at a super high level, where the developer is responsible for accessing Herodotus off-chain (we're talking pre-Turbo here), and then using the data that comes from Herodotus and folding it into whatever's happening on-chain: have you found that developers are a little confused or squeamish about this? To me this seems a little bit like how, let's say, Infura or the RPC construction works, in which, yes, we understand that we can access the EVM directly, but no one really actually does that; it's all through these API-like services. So I guess my question to you is: is this a parallel construction, or is this something where you really have to educate people on a whole new construct?
**Speaker A:**
Yeah. Initially we thought this approach would work: we just educate developers and say, hey, you have an API, you just hit it, you have your request ID, and you keep asking, "hey, is the proof ready?" until we tell you, "hey, the proof is ready." We give you the proof, you construct the calldata, you call the contract, and that's it. We thought that this approach would work, but it requires really a lot of education. It's not that simple. Even though this API abstracts a lot of things, like generating all the proofs, executing the workflow, batching multiple things together, it's simple, but not simple enough. And that's why we are building Turbo.
**Speaker B:**
That makes a lot of sense. So since we're talking about developers building on Herodotus right now, before we talk about how you convince developers that they need to build these features in, can you maybe give an example or two of the types of systems that become much more powerful when you have the infrastructure Herodotus provides? Who is the ideal customer on the development side for this type of technology?
**Speaker A:**
Sure. So we clearly see that the compute problem of Ethereum is getting solved today, especially by ZK rollups, which offer incomparably more computation than layer one, which is already great. But there is another dimension: okay, I can have a lot of computing power, but what do I use this compute for? I usually need some data, and data is actually the problem; it seems like it's the bottleneck. So if you want to access data without sending messages back and forth from L1, which can be very costly, you can use, for example, Herodotus and just access data at scale. And if you have data and compute, you can build super powerful applications. So that's one thing we are particularly excited about. But there is another dimension. We're all building on L2s in a certain way nowadays, and you very likely want to access data from L1, for example a token balance for a specific user, and based on that token balance you authorize an action. How would you do that today? Well, you probably need to send a message from layer one, and the user, in order to send that message, would have to interact with layer one. So imagine how bad the UX would be: you first connect your MetaMask, you invoke a contract on layer one, and you wait until the transaction is confirmed. Then on the same UI you're asked to change the network, and you make sure the message is delivered. So you're waiting, waiting, waiting, and finally you can make another transaction. It's just not a great UX. And with storage proofs this problem does not exist, because in some sense the L2 where Herodotus is deployed inherits not only the security, but also all the data present on Ethereum L1. And maybe to add on top of that: because all layer 2s commit back to L1, it also implies that, as those commitments are available on L1, you can access them from another L2. So from one L2 you can access other L2s, which is pretty nice.
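The balance-gated authorization flow described above can be sketched like this. Purely illustrative: the addresses, block number, threshold, and the `proven_facts` mapping are all made up, with the mapping standing in for storage-proof results already verified on the L2, so no L1 transaction from the user is needed.

```python
MIN_BALANCE = 100 * 10**18   # hypothetical threshold: 100 tokens at the snapshot block

def authorize(proven_facts, user, token, snapshot_block):
    """Gate an L2 action on the user's L1 token balance at a past block.
    proven_facts maps (token, user, block) -> balance, as attested by
    already-verified storage proofs; unknown users default to zero."""
    balance = proven_facts.get((token, user, snapshot_block), 0)
    return balance >= MIN_BALANCE

# Example facts a storage-proof service might have settled on-chain:
proven_facts = {
    ("0xToken", "0xAlice", 18_000_000): 250 * 10**18,
    ("0xToken", "0xBob", 18_000_000): 5 * 10**18,
}
```

With this shape, the user's only transaction is the L2 action itself; the balance check rides on proven data instead of an L1-to-L2 message round-trip.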
**Speaker B:**
So I'll take this on a slight tangent here. Something about the promise of rollups, which are a specific type of L2 in which not only the state root but a record of every single transaction is posted back to mainnet: we like to think that this paradigm gives us full sovereign control, on Ethereum mainnet, of the rollup and our assets on the rollup. At least that's the idea. And then, when it comes down to it, basically no rollup has permissionless withdrawals; everything is gated off while we're all in these growth phases. So it's fine, whatever. But my question to you, as a builder and as someone who's building tools to ingest the data on the L1: are you really saying that all of the state changes and the transactions and the computation happening on L2s are actually accessible on the L1 today, or is that more of a pipe dream we're working towards?
**Speaker A:**
As long as the commitments are available, we can prove it, right? It's more a question of data availability: who keeps the data.
**Speaker B:**
So let me be really specific with this, and I'm not trying to pick on anyone, I'm just pulling a rollup off the top of my head. Let's say I want you to recreate the Arbitrum state, or let's say my account balance today on Arbitrum, and you can only use the Ethereum L1. Is the construction at the point where, sure, it might take some time and effort, but you could not only do that, you could use Herodotus to prove it? Or are we still a few steps away from that dream?
**Speaker A:**
Yeah, absolutely, you could do it, because Arbitrum posts its state roots. Maybe not immediately finalized ones, because you need to wait until they're finalized, but you can access them on L1. And if you can access them on L1, you can, for example, access them on Starknet. If you can access the state roots, you can prove everything that was ever on Arbitrum against those state roots.
**Speaker B:**
So I guess that's today, right, where Arbitrum and all rollups are posting a copy of their transactions into the EVM, into the calldata. But we all know the roadmap of Ethereum is proto-danksharding and then danksharding, where all of this L2 transaction data gets moved out of the EVM. So can you talk a little bit about whether danksharding and the blob structure change what you're able to do with Herodotus, since that transaction data is not accessible within the EVM anymore?
**Speaker A:**
In some sense it changes, but in most cases it's not a concern as of today. Why? Because what always gets posted is the state root, and the state root is usually enough for us to prove basically anything. What gets posted on most rollups is either transactions or state differences. Do you really need the transactions to access the state? Well, not really. We have the state root, right? So as long as there is a party, such as us or anybody else, who's just running a node for, say, Optimism, and we have access to the state, we can generate inclusion proofs. The state root is already on L1, so we can prove everything against it. So it's already quite good. But with proto-danksharding, yeah, of course, if you want to prove that a specific transaction was included, that might get tricky. But again, as you have access to the historical data on L1, you can access historical block headers, so you can also access the blob commitments once they're available.
**Speaker B:**
So that makes a lot of sense. And a question I have for you, I guess less for Herodotus and more for the Ethereum community: in this proto-danksharding and danksharding world in which blobs do expire, do you imagine that you're going to be copying all of this data and providing it to users past expiry? Which types of entities do you see filling the role that's going to be needed once danksharding gives us expiring data?
**Speaker A:**
That's actually a very good and interesting question. For sure Herodotus would be such an actor. As long as users want to access these historical blobs, someone has to keep them. And as we provide these proving services and can prove anything specific (I don't want to use the word state, but storage, whatever was ever present on the chain), we of course have to keep the data. So I think actors such as Herodotus, block explorers, Infura, Alchemy, et cetera: there's definitely interest for some external actors to keep this data.
**Speaker B:**
Yeah, and I think we'll see whole new business models come out of this. I mean, Herodotus is a whole new business model that came up because of crypto itself. As we keep maturing these systems, we'll figure out that there's a need, and if there's a need, then there's an opportunity to build a business. Makes a lot of sense. So now that we're talking about the business you're building: to me, this sounds like the hard part computationally is, again, getting all this data from the L1 and then making it accessible and queryable in an API. First and foremost, this sounds like a big centralized compute problem, and I guess the way to solve it is with big centralized compute. So one, can you talk a little bit about whether that is how Herodotus is being built, or are you building it more distributed? And then second, do you think the things we get with distributed systems, censorship resistance, the inability for a single actor to change things, are important for proving systems like Herodotus? Or do you rely so much on ZK to give you these properties that you don't really have to worry about it?
**Speaker A:**
That's a very good question. So as of today, we are mostly thinking about how we provide soundness, because decentralizing the prover, I think, is still a pretty open question, and mostly the companies building rollups are trying to solve it. As we are focused on providing data to rollups, we are kind of waiting for them to solve this problem first. Like I said, soundness is more of a problem, and this is what we're focused on today: making sure that every piece of data we claim is proven is actually proven, and that there is no way to break the cryptographic system involved. In the future, do we have plans? Yeah, definitely we have plans. Actually, one of the future components of Turbo is: how do we solve liveness issues, how do we solve censorship, et cetera. But as of today, end of 2023, it's not really a major concern yet.
**Speaker B:**
No. And I think that's what we do here in crypto: you build the technology, and then once you know it works, you start to worry about decentralizing it. And I think what we're going to find, as we get more and more comfortable with ZK, is that there are other tools to solve these problems that are not just "put it on as many computers as possible, give them a token, and hope it kind of works." So, for sure. Cool. So, yeah, as you said, there are a lot of people working on this shared sequencer stuff, but I think the other thing that's super interesting and at the cutting edge of distributed computing right now is, we'll call it EigenLayer and restaking: this whole concept of using cryptoeconomic forces to achieve the same effects of decentralization without worrying about a bunch of different computers. Do you have any plans on building Herodotus into a world of restaking? Or is that, again, so far down the roadmap that you'll worry about it when you're worried about decentralization?
**Speaker A:**
We only want to use cryptographic systems. We are not planning to have any economics or incentives involved in our system to secure the data feeds. Why? Because I think it might be a problem for modularity. What do I mean by modularity? If you think about it, supply chain attacks can be problematic. Imagine that you have an application that integrates Herodotus, but some specific part of Herodotus relies, for example, on economic incentives; and then this application is integrated into something else, and this something else into something else. Suddenly, if one component of the system is weak, the entire system is as weak as the weakest component. You really want to avoid this type of situation. For that reason, we're not using economic incentives in our system in any way. Just cryptography.
**Speaker B:**
Yeah, I mean, I have so much respect for you on that one, because I think that in this space of crypto, which includes both deep cryptography and the token projects, it's tempting to just slap a token on everything and say: any problem we have, we'll fix it by paying out the people causing us problems with this fake money. And I think the right answer is always: you start with the technology, you start with the cryptography, you start with building a good product, and then down the line you can do whatever you want. Build a product, don't build a token.
**Speaker A:**
Yeah, yeah. That summarizes pretty much everything.
**Speaker B:**
Yeah. Cool. So let's start to shift this conversation a little bit into what a world looks like with Herodotus fully out there and enabled. Right before we started recording, you mentioned that we're in the testnet phase right now. So let's start there. First, what is this testnet, and what are the types of stuff you can do with it? I guess, for the developers listening, how do you access it?
**Speaker A:**
Sure. So, what is the testnet? The testnet is Goerli. What you can do, for example, is build nice applications that can access historical state on Goerli. But you can also deploy an application on Optimism Goerli or zkSync Goerli or Starknet Goerli and access the state of layer one, or the state of, say, Optimism. And whenever I say state, I also mean the historical state. So you can build pretty nice applications that you probably never thought about before, because some limitations of the EVM simply fall away thanks to this technology.
**Speaker B:**
That's super interesting. Can you expand on that a little bit? What are the big limitations of the EVM that fall away because of this technology?
**Speaker A:**
Sure. Maybe let's start from Ethereum. I hope I'm not saying anything controversial, but Ethereum is a database; it's just a key-value store, and the EVM is pretty much limited to this key-value store. You can access a specific key right now, and that's it. And you can access that key only within your own smart contract, where the code is being executed. Which is pretty bad, because, first off, if you're a smart contract and you want to access some arbitrary data from another smart contract that is not necessarily exposed through a getter function, you really can't. If you want to access some data from one block ago, two blocks ago, a million blocks ago, you can't. It's impossible. You would have to use some other solution like oracles, or just pass this data in calldata and assume it's valid, which does not sound like a good idea. So that's already a problem. And another problem is that today, in a world where we have a lot of L2s and many people talking about L3s, are you going to pass messages around all the time and just create a mess? Probably not. Storage proofs sound like a very elegant way to solve this. So yeah, we invite developers to experiment with the stack and build nice applications.
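The contrast between the plain SLOAD opcode and the enriched read that storage proofs enable can be sketched with a toy key-value chain. All of this is illustrative: the function signatures and the `Chain` model are made up for the example, not an actual opcode or API.

```python
class Chain:
    """Toy chain keeping full history: history[block] maps (address, slot) -> value."""
    def __init__(self):
        self.history = [{}]                      # block 0: empty state

    def mine(self, writes):
        new_state = {**self.history[-1], **writes}
        self.history.append(new_state)
        return len(self.history) - 1             # new block number

def sload(chain, me, slot):
    """Standard SLOAD: your own contract's storage, latest block only."""
    return chain.history[-1].get((me, slot))

def extended_sload(chains, chain_id, address, slot, block_number):
    """The enriched read a storage proof enables (hypothetical signature):
    any chain, any contract's storage, any historical block."""
    return chains[chain_id].history[block_number].get((address, slot))
```

A contract using `sload` is locked to the last row of its own column of the database; the extended read opens up the other contracts, the other chains, and the whole history, which is exactly the limitation being described.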
**Speaker B:**
Yeah. The way you just framed it gave me a really good image in my head, because I think the big question for people who aren't in proof world is: I thought we figured all this stuff out with oracles. I don't understand why we need Herodotus; I thought this is what oracles are for. And I think what you just said is exactly correct: in this future where we have hundreds of L2s and thousands of L3s, achieving anything through oracles requires these cascades of messages through different systems. I'm sure it's possible; I'm sure Chainlink will remain one of the top ten most important protocols. But everything gets exponentially more complicated as we add more and more entities. And what Herodotus is saying is: let's not mess around with messages or pathways or intermediaries or any of this stuff. The whole point of crypto is that it's all open and out there and queryable. Let's just provide that functionality back to the developers, even though all the users already have it on Etherscan.
**Speaker A:**
That's basically it. I think it's a very valid point. The way the EVM, this virtual machine running on blockchains, sees the blockchain is completely different from the way a human being sees the blockchain. And I think this is pretty much what we're trying to solve.
**Speaker B:**
Yeah. Cool. Okay, so we're in testnet right now. Do we know when we're going live?
**Speaker A:**
Actually, very soon. But we are first going live on Starknet, and soon after we will go live on EVMs, such as layer one, Optimism, zkSync.
**Speaker B:**
And so when you say you're going live on Starknet, does that mean that on Starknet you'll be able to prove Ethereum L1 data, or does it mean you'll be able to prove Starknet L2 data?
**Speaker A:**
Ethereum L1.
**Speaker B:**
Okay, cool, cool. So what you're just talking about is the API endpoint that allows you to access it. But at the end of the day, we're always talking about proving stuff on Ethereum L1.
**Speaker A:**
Yep.
**Speaker B:**
Cool. And then I guess with that, my immediate second question is: what do you think about alternate L1s? Let's just forget about resources for a minute. If you have unlimited resources, time, and devs, do you believe that it's important to build these same kinds of tools for alternate L1s? Or is your thesis about this space really that Ethereum is the backbone, Ethereum is the settlement layer, and what's important is to support L2s on Ethereum and go down that route?
**Speaker A:**
So we feel like Ethereum nowadays is clearly going in the direction of modularity, basically moving all the applications to layer twos, which clearly creates a need for solutions such as Herodotus. And when it comes to alternative layer ones, such as, for example, Solana: Solana is monolithic. Do you need solutions like this there? Maybe, to access historical data. But are those solutions as needed on Solana as on Ethereum? No, probably not. That's why we're focused on Ethereum.
**Speaker B:**
Well, what about a world in which, let's say, somebody on Solana wants to take out a debt on whatever Solana's version of Aave is, using assets on Ethereum L1? Is there a world in which Herodotus can facilitate that kind of... now that I say it out loud, it doesn't really make a lot of sense. But do you think that Herodotus has a role to play bridging the communication between Ethereum and non-Ethereum L1s?
**Speaker A:**
Technically, this is possible. However, one of the reasons why we are not doing this today is because we don't prove consensus, we just prove that the state is indeed integral and data integrity is preserved. So, yeah, as of today, it's not possible. In the future, maybe, but as of today, it's not a priority at all.
**Speaker B:**
No. And I'll just be frank with you. I think the answer is: don't worry about Solana. I think the endgame for Solana is either it just doesn't survive or it shifts to becoming an Ethereum L2. It's the perfect execution engine. To me, every time we get into L1 wars, it blows my mind a little bit, because I thought the point of crypto was that we have one credibly neutral space in which we can all compute. And I don't really understand why we keep doing these Ethereum killers. This is the same as Ethereum, but faster. This is Ethereum, but cheaper. And it's like: no, the point was we have one space. One.
**Speaker A:**
Completely agree. Personally, I'm an Ethereum maxi, yes.
**Speaker B:**
I hate the term Ethereum maxi because it makes it feel tribal, but it's just like. No, I just thought that's what we were doing.
**Speaker A:**
Yeah, let's keep redundancy on one state machine.
**Speaker B:**
Yeah, yeah, cool. So, okay, as you said, we're in testnet right now, mainnet coming soon. We're going to start on Starknet, and then you listed a few other chains.
**Speaker A:**
Optimism, zksync and Mainnet. Ethereum Mainnet.
**Speaker B:**
So let's fast-forward to that world in which we have multiple chains with access to live Herodotus. Do you have any cool concepts or ideas for an app that would be truly cross-chain, that might leverage Starknet for this part and Optimism for that part, while using data from L1? What does a truly multi-chain world look like now that we have this kind of webbing infrastructure?
**Speaker A:**
Sure, that's a great question. So maybe I'll actually give a few examples. I'll start with something we are working on with Snapshot. Snapshot is building Snapshot X, which allows users to vote directly on chain, not off chain. But the problem is that gas fees on L1 are pretty high, so you don't want to vote directly on L1. So what do we do? We just deploy Snapshot X on Starknet, where it's much cheaper. Cool. But the problem is that all the tokens that we use to vote, which actually give me the right to interact with a certain contract or change the state somehow, are on L1. So are we going to force the users to bridge them? Not really, because then they have to interact with L1, etc. Are we going to force them to send a message? No, because it's the same as bridging them and then bridging back; it's just a mess. So, simple solution: just provide the users with a proof, and they can prove to the contract, hey, I'm indeed the rightful owner of those assets, let me vote. So that's one example. We can obviously take a step further and prove that, hey, I have this token on Optimism, this token on Ethereum L1, and this token on zksync, and based on that I can do some action, without sending messages around, sending a message first from Optimism to mainnet, from zksync to mainnet, and then from mainnet to Starknet. That would be just terrible. So that's for sure one place where Herodotus can help. Another use case I'm particularly excited about: how can we make Ethereum feel like one single place and incentivize users to actually go where, for example, compute is cheaper? So imagine an application on zksync where, let's say, a protocol such as Uniswap decides: okay, all the users who actually spent more than $1,000 in gas fees on our platform can now trade gaslessly on zksync. But first off, how do we prove it? Well, we can prove it by just going through all the receipts, and the user can provide those proofs and show, hey, I spent more than $1,000 in gas fees in order to interact with Uniswap. And because we have native account abstraction on zksync, I can have a paymaster who will pay for that. And the paymaster can be subsidized by, for example, Uniswap. So this is something that's pretty cool, right? I interacted with Ethereum back then, I spent a lot of gas, and someone, for that reason, will just pay for it on zksync. Pretty cool. It's all provable, so I don't have to trust anyone. So this is one example. Another example is, like, reputation systems. Again, I can prove that I did this action, this action, this action, etc. And another use case we're exploring internally is actually building some sort of Lightning Network-like solution on top of rollups. So imagine that, for example, we have just a server and a user making some interactions, and those interactions are somehow conditioned by the state of L1. They just keep interacting together. The user is happy because he's paying no gas fees at all.
And then, let's say after two weeks, three weeks, or even one year, they decide: okay, it's time to settle, and we settle all the transactions. On the layer 2, calldata is super cheap, because maybe we don't post much on chain at all. But because we have Herodotus and we can access the historical state, we can say: okay, this transaction, or more precisely, this interaction we entered into, like, a year ago, is indeed valid, and you have to pay me, like, $10 for whatever reason. We can build this type of system. So really, thanks to storage proofs and this ability to access whatever piece of state at whatever point in time, it truly enables a completely new dimension of applications.
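A hedged sketch of the Snapshot-style flow described above: the L2 voting contract stores only the L1 snapshot root, and voters attach a proof of their balance instead of bridging tokens. Here `verify_storage_proof` is a hypothetical stand-in for a real Merkle-Patricia proof verifier, and the class and names are illustrative, not Herodotus's or Snapshot X's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class SnapshotVote:
    """Proof-based voting sketch: no bridge, no cross-chain message.
    The contract holds only the L1 state root for the snapshot block."""
    snapshot_root: bytes
    tallies: dict = field(default_factory=dict)
    voted: set = field(default_factory=set)

    def vote(self, voter, choice, claimed_balance, proof, verify_storage_proof):
        if voter in self.voted:
            raise ValueError("already voted")
        # The proof ties (voter, balance) to the L1 snapshot root;
        # verification replaces bridging or messaging entirely.
        if not verify_storage_proof(self.snapshot_root, voter,
                                    claimed_balance, proof):
            raise ValueError("invalid storage proof")
        self.voted.add(voter)
        self.tallies[choice] = self.tallies.get(choice, 0) + claimed_balance
```

The same shape extends to the multi-chain case: a voter could attach one proof per chain (Optimism, L1, zksync) against each chain's root, with no messages in flight.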
**Speaker B:**
Yeah, man, my mind is so exploded by what you just said. The most profound thing, I think, was in your first example: for Snapshot voting, we could just send a message down to L1 to see how many tokens there are. But what you said, which is blowing my mind, is that if you're sending a message, that's basically the same thing as bridging anyway, so you might as well just bridge. I can barely wrap my head around it. Can you unpack that one comment a little bit more? Because I think that is the crucial piece to understanding why Herodotus is so powerful and such an unlock on top of oracles. Why is messaging the same as bridging?
**Speaker A:**
Sure. So maybe let's talk about what a bridge really is and how it works. A bridge is a system that allows me to move tokens from one place to another place. But as some people say, layer twos do not exist; a layer two is just, like, a server running somewhere, and we just put proofs on top of that. So how do we build bridges? We implement a contract on layer one which accepts your tokens, locks them, and sends a message, or maybe just writes to the state on L1, saying: hey, this guy wants to move these tokens to this chain ID. Amazing. We have some guy in the middle just watching for those messages and making a claim: hey, yeah, indeed, this user deposited and locked the tokens. Okay, cool. So we can mint them on the other side. That's basically a bridge. So of course it requires sending messages. That's how it works. And with storage proofs, we don't have that burden.
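The lock-and-mint flow just described can be sketched in a few lines of Python; all the names here are illustrative, not any real bridge's API. Note that every value transfer depends on the relayed message, which is exactly the trusted hop that a storage proof removes.

```python
class LockAndMintBridge:
    """Minimal lock-and-mint bridge sketch, mirroring the flow above:
    lock on L1, relay a message, mint a wrapped token on the destination."""

    def __init__(self):
        self.l1_locked = {}   # user -> amount held by the L1 contract
        self.messages = []    # message/event queue the relayer watches
        self.l2_minted = {}   # user -> wrapped balance on the L2

    def lock_on_l1(self, user, amount):
        """The L1 contract accepts tokens, locks them, and logs a message."""
        self.l1_locked[user] = self.l1_locked.get(user, 0) + amount
        self.messages.append(("deposit", user, amount))

    def relay(self):
        """The 'guy in the middle': consumes L1 messages and mints on L2.
        Until this trusted step runs, the user's funds are stuck."""
        while self.messages:
            kind, user, amount = self.messages.pop(0)
            if kind == "deposit":
                self.l2_minted[user] = self.l2_minted.get(user, 0) + amount
```

With a storage proof, the destination chain could instead verify the L1 lock directly against an L1 state root, and the `relay` step, and the trust it carries, disappears.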
**Speaker B:**
Yeah, and I think, to me, every day I'm more excited about Ethereum, and this is one of those moments. Because, my friend, you entered this space through the tech and not the finance, so I'm jealous of you. But the more you're able to step away from these, like, totally bizarre concepts of hard money and financial systems and all this stuff, and really look at Ethereum for what it is, which is a paradigm of computing... Once you're able to pull away all this intellectual overhead about finance and just start to understand Ethereum as a computer, just a slow computer in which, again, Ethereum doesn't have the ability to access Ethereum like we do. There's no Etherscan internally, nothing like that. And so, given what's pretty amazing about what you're building, my question for you is: are you sure it should be a company and not an EIP?
**Speaker A:**
That's actually an interesting question. I think it should be a company. Why? Because the best way to generate proofs is to not have to generate them at all, and that's something our infrastructure achieves. So basically, we receive all these requests, intents, however you call them, and we translate them into a set of actions we need to take. And then we just see: okay, this action required to execute this query is redundant, because look, there is another query that also requires it, or maybe we have already proven this thing in the past. So maybe we can just do a lookup in our database and see: hey, we already have this proof, so it doesn't need to be generated, and that's what we provide. And this can drive the cost significantly lower, and also the latency, because we have this infrastructure and a lot of batching, et cetera. And an EIP, I think, is not really a good idea, right? Because someone has to keep the data at some point. Do you really want to force all the validators, or potentially block builders, to run this infrastructure? Probably not. It's just better to have someone dedicated to that and keep the system more modular.
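The deduplication argument can be sketched as a simple memoizing planner; this is purely illustrative, not Turbo's actual design. Queries decompose into proof tasks, and a task is only sent to the prover if no prior query has already produced it.

```python
class ProofPlanner:
    """Sketch of proof dedup: serve repeated proof tasks from a lookup
    instead of re-running the (expensive) prover for each query."""

    def __init__(self, prove_fn):
        self.prove_fn = prove_fn   # the expensive proof-generation step
        self.cache = {}            # task -> previously generated proof
        self.prover_calls = 0      # how many proofs we actually generated

    def execute(self, tasks):
        """Resolve a batch of proof tasks, proving only what's missing."""
        results = {}
        for task in tasks:
            if task not in self.cache:
                self.prover_calls += 1
                self.cache[task] = self.prove_fn(task)
            results[task] = self.cache[task]
        return results
```

Even this toy version shows why "don't generate the proof at all" beats "generate it faster": redundant tasks within one batch and across batches never touch the prover.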
**Speaker B:**
No, and I meant that a little bit as a joke, just to highlight how crucial and necessary the technology you're building is in order to achieve, like, the true world computer vision. So, point taken: shouldn't be an EIP, can be a company. But man, literally everything you say opens my mind to a new thought, because I never even considered the proving that you're doing to be... and I'll jump ahead to the punchline, which is that you're talking a little bit about a marketplace here, right? You have intents, people that want proofs, and then you have provers, with the ability to actually run these proofs. And Herodotus has the ability to be a little smart, match up the intents with the provers, make sure there's no wasted double work. But part of me is wondering: do you have the thought, or the ability, or the vision to open up Herodotus to be a two-sided marketplace, where you can also have a bunch of people willing to do proofs who just show up and say: what are the intents? Let me run the proofs for you.
**Speaker A:**
That's a good question. We actually spent a lot of time thinking about it, and that's one of the components we have inside Turbo. So the basic question we had to answer really comes down to prover decentralization. How do you decentralize the prover, and also, in some sense, the sequencer? Because these transactions, these API requests, in reality just express some willingness, the same way a random string of bytes you're sending to an Ethereum RPC is just a transaction but in some sense expresses some willingness. And in what order should we execute them, et cetera, et cetera. So it really comes down, in a sense, to how you decentralize the sequencer and then how you decentralize the prover. And I think it's way too early to think about it, because, again, how do we usually do it? We just have a set of validators who compete for the right to do that, because this is how we ensure liveness. But again, this creates redundancy. So if you want to have ten people potentially starting the race to see who's going to be first at generating these proofs, you need to incentivize them somehow. And how do we incentivize them without introducing some weird mechanics? You just have the user paying for it. So if the user wants a redundancy of 10x, he needs to pay 10x. And do we know if users are really willing to pay 10x as of today? No, we don't. So we are not really tackling this problem yet, but we have some designs in mind.
**Speaker B:**
No, no, fair enough. I guess a larger question would be whether or not it's right for Herodotus. We are entering a world in which it's clear there's just so much demand for proofs, and we're not even really sure what all the uses for proofs are yet. We just know that there are going to be a lot of people running them. And I guess, do you see a world in which, like stakers, there is this wide, diverse group of people, everyone from hobbyists like myself to full professional proving outfits, that are part of this ecosystem competing for proving work? Or do you think that it's going to be much more like the AWS or Google Cloud model, where only super professional, super high-fidelity, super centralized entities are actually running the math?
**Speaker A:**
That's one more amazing question. So the way we like to think about it at Herodotus is that it depends. It depends on a lot of factors. Today we see some trends, especially around hardware proving. Maybe in the future we'll see ASICs for specific proof systems, or maybe we'll go in the direction where you can just prove everything with GPUs. And of course, GPUs are way more accessible than a specific ASIC. If GPUs become the thing, then yeah, maybe it makes sense to open it up and literally allow anyone to provide those proofs: just open up the repos on GitHub and say, hey, pull this code, set up your hardware, and participate in the network. But if the industry goes in the direction where we probably need ASICs and specialized hardware, it's a huge assumption to make that everyone will have access to this hardware. So why would we add that much redundancy and let anyone participate?
**Speaker B:**
So is your answer: we have no idea? And the reason we have no idea is because we don't really have a good grasp on the curve of functionality to hardware power that's needed, and wherever we end up on that curve is going to basically tell us how many individual participants we have in this ecosystem.
**Speaker A:**
Yeah, I think it's a very risky thing to assume what the future is going to look like in the next one year or two. Things move way too fast, especially in the ZK world.
**Speaker B:**
Yeah, I mean, come on, two years ago we didn't even know if a zkEVM was possible. Actually, let me hover on that for a moment. One of the most incredible moments for me in crypto was, I think, about a year ago now: Vitalik was on Bankless, and the Bankless guys asked him, what's one of the most surprising things about Ethereum here, right at the end of 2022? And what Vitalik said was: I always thought that the reason we were doing optimistic rollups was because a zkEVM was going to take forever, if it was even possible. And what I'm most surprised about is that we're now at the end of 2022, and I'm pretty sure we're going to see not one but two or three fully done zkEVMs before we've even really figured out how to implement fraud proofs correctly. And I think that is just such a testament to how crazy fast ZK is moving. So yeah, please don't make any big bets. Cool. All right, so with the last few minutes here, I just kind of want to open it up to you and help us understand: what is the long-term endgame vision of Herodotus? We've talked so much about the opportunity to use proofs and use this API to make accessing blockchains from other blockchains easier. I have to imagine that's not the endgame for you, and that you see applications for this technology that are maybe outside of blockchain, or that bridge the gap between regular compute and blockchain compute. Can you talk a little bit about, when you're in front of VCs trying to raise that, like, $50 million, what is the endgame vision that you're selling them on?
**Speaker A:**
So we see Herodotus as a data access middleware. Basically, our end goal is: okay, Ethereum is scaling, and it's clearly scaling horizontally by deploying multiple L2s, L3s, and so on. Applications on Ethereum are still poor, but come on, it looks like we solved compute; now let's solve data. With data and compute, you will have amazing applications. But again, we are sharded, so it's time to unshard. And this is where we step in with our proofs and cryptography. What we want to do is make the system as efficient as possible, so users don't really feel that there is some middleware in between. It just works out of the box. It will make absolutely no difference whether you're interacting with this specific rollup or another; you're just on Ethereum. And this is what we want to see in the future, and hopefully we'll be part of that evolution.
**Speaker B:**
When I talk to my friends, especially my non-crypto friends, about the endgame of crypto, my favorite example is a more exotic asset like Frax. I don't imagine a world in which my grandma is holding a bunch of assets and this token that's supposed to be a dollar, called Frax, that's basically reliant on, like, one kid's AWS account, you know? I imagine the endgame of crypto being, for grandma, for the end users, just the same as being on the Internet. You have no ability to really discern what's crypto and what's not. You just know that you have an identity and you have assets, and everything we're building is six or seven layers deep under that. In the same way that every one of us knows that when you type in a website, you type in https, but none of us knows anything about SSL and TCP/IP and DNS and DHCP and all of this crazy stuff that makes it happen. I'm trying to pin you down on this specific thing, but I so know that once web3 is truly web3, we won't be thinking of it as distinct; we'll be thinking of it in the same way as HTTP. And so I'm so excited, once you get there, to hear how you're going to sell non-crypto people on the powers of Ethereum plus Herodotus, and what that allows them to do with, forget dapps, forget any of that, just with their applications, for sure.
**Speaker A:**
Like, I think Snapshot is a very good example. You don't have to bridge your assets; you're just, like, on Ethereum. Okay, you're interacting with an L2, but you still, like, inherit it. And that's actually great. This is what we're aiming for: people should just be on Ethereum, they should just have some sort of proofs, and they don't even have to think about it. And also developers, and that's one of the reasons why we are building Turbo. It would be just, like, an enriched SLOAD, to say: hey, I'm deploying my application here and I want to access this data from that specific point in time. Or you can even go crazy in the mid future and literally write, like, an SQL query, execute it, and get the summary of all your transactions, man.
**Speaker B:**
So I think that's such a good punchline and way to wrap this up: what Herodotus is doing is not only enhancing SLOAD, but turning SLOAD into a SQL interface for crypto. And that's not going to mean much to anyone who hasn't actually developed smart contracts, but just take it from us: it's like developing on that same computer, worse than your TI-89, that got us to the moon in the 60s. It's slow and terrible and doesn't work. And these quality-of-life things are what it takes in order to turn this niche financial gambling technology into core infrastructure for mankind. So, man, I'm just so excited for what you're building. First, we're so excited for how you're making Ethereum more accessible, but also so excited that you're one of the few people that really saw that cryptography, the science, this new stuff that's coming out, needs to be built into products.
**Speaker A:**
I really like to think that today, Ethereum is basically a microcontroller with an SD card connected to it: you have a little bit of compute, and you can literally write to and read from a .txt file. But layer twos give you a nice processor where you can run a lot of compute, and Herodotus is literally the database under the hood. You don't have to just write things to a .txt file; you can build so much more. So yeah, I think we'll see a lot of nice applications, hopefully.
**Speaker B:**
You know what they love to say in, like, DeFi world? That crypto is speedrunning the history of finance. And what this conversation has made me realize is: no, it's not that DeFi is speedrunning finance; it's that crypto is speedrunning the history of computation.
**Speaker A:**
Yeah, that's a good way to put it.
**Speaker B:**
Cool, man. So I think that's a beautiful place to wrap it up. So before I let you go, can you just let everyone know, where can they find you? Where can they learn more about Herodotus and if they want to get involved and start building, what are the best next steps?
**Speaker A:**
Sure. So please go to our website, herodotus.dev, and check out our blog. Check out our docs at docs.herodotus.dev, and check out our GitHub, which is again Herodotus dev. And yeah, I think those are probably the three best places to look for updates.
**Speaker B:**
Perfect, man. Awesome. Well, again, I'm so excited we're in testnet right now. I'm so excited for mainnet, and just to watch Ethereum transform from basically more and more exotic financial instruments into a more and more holistically accessible and usable platform. And I think that the types of tools you're building and the primitives you're bringing to the table are, like, a prerequisite to getting there. So I just want to say thank you so much for building what you're building and for taking the time to talk to us. Good luck, man.
**Speaker A:**
Thanks. And likewise.