**Speaker A:**
Hello and welcome back to the Strange Water Podcast. Thank you for tuning in for another episode. Here's what you need to run an Ethereum node: a dedicated computer capable of running the software that makes up Ethereum, an always-on and unlimited power supply and Internet connection, and at least 32 ETH, which at today's prices is equivalent to roughly $75,000. This is just barely within the realm of widely accessible, and that's really only if we stretch the bounds of both widely and accessible. You could get a dedicated ETH staking machine for about $1,000 or maybe less, and you probably have access to enough power and bandwidth by default. But I'm a little stumped on $75,000. No matter where you are, that is a lot of money. And tomorrow, well, things are only going to get worse. Protocol upgrades like danksharding and enshrined PBS are only going to raise the minimum hardware requirements for nodes, and they're also going to significantly increase the bandwidth requirements of running a node. But perhaps the most daunting part is the ETH requirement. If ETH reclaims its all-time high, 32 ETH will cost over $150,000, and we all hope it continues well past ATH. So look, while it's true in a relative sense that running a node and staking from home is accessible, in an absolute sense, things are pretty dire and they're only going to get worse. So is Ethereum screwed? Well, my friends, the answer is an emphatic no. And today we have the perfect guest to talk through the technological revolution that will save Ethereum decentralization: distributed validator technology, or DVT. Brett Li is the Chief Growth Officer at Obol Labs, if not the first, then the most prominent company building distributed validator technology. Today's conversation is a fantastic deep dive into all things DVT: why it's needed, how it works, how it affects home stakers, how it affects liquid staking protocols like Lido, and so much more. This episode is a fantastic listen for everyone with eyes on the Ethereum roadmap and concerns with how we will enter a middleware-backed modular blockchain paradigm. By the end of today's talk, you should walk away realizing that in the Ethereum endgame, each point in the Ethereum network universe isn't just a star, it will be an entire constellation. One more thing before we begin. Please do not take financial advice from this or any podcast. Ethereum will change the world one day, but you can easily lose all of your money between now and then. Okay? Ladies and gentlemen, Brett Li. Brett, thank you so much for joining us on the Strange Water Podcast.
**Speaker B:**
Thanks for having me.
**Speaker A:**
Of course, man. So, like, I've said this to you so many times in private: I'm so excited to talk distributed validator technology and especially Obol. But before we get there, I am a huge believer that the most important part of every conversation is the people in it. So with that as a kind of frame, can you tell us who you are, how you found crypto, how you found yourself in this, like, weird corner that is the start of, hopefully, the next bubble. And I guess, what about crypto drew you in as opposed to repelling you?
**Speaker B:**
So I consider myself a Web two-and-a-half person. I've spent most of my career in Web2, and I was always fascinated by the potential of technology to impact people's lives, to make the world a better place. But for most of my career, I was in B2B Web2, which in my mind is the thing that is, you know, maybe controversial to say, but furthest away in my mind from impacting people's lives. Right, right. And I was a sales engineer at the start of my career, did product, you know, halfway through. But I was kind of caught in this flywheel, or kind of this mouse wheel, of constantly just chasing the next deal trying to sell software. And I was pretty far removed from understanding kind of the impact of technology. And so my entry into Web3 really started with the launch of the Ethereum whitepaper and just learning more about the impact that a Turing-complete decentralized platform could have. And I felt like the potentials were endless and the impact it could have was everywhere. So I really got interested in the tech itself, and because of the tech, started doing a little bit of, you know, personal investing, and got the opportunity to jump full time into Web3 with ConsenSys in 2018. So I was there during the ICO hype, you know, when everything was really starting to pick up. ConsenSys was fascinating to be a part of in still the relatively early days of the Ethereum ecosystem developing all across the world: seeing the early days of the wallet, of MetaMask, developing, and of the various projects, Gitcoin et cetera, starting to take shape to think about how Ethereum could be applied to impact different parts of the world. I got exposure to the enterprise side. This was back when all the enterprises were talking about Hyperledger Fabric and using blockchain for supply chain, for tracking. I was really fascinated by the potential impact that I think blockchain could have on businesses, because that was the world I was familiar with.
**Speaker C:**
Right.
**Speaker B:**
And so I did some work there, and then also traveled quite a bit to China, actually. You know, I'm Chinese and had a lot of interest in how blockchain and crypto in general could impact a country like China, with its political structure and everything. The hype there was very real. There were a lot of people jumping in. So anyways, I got a lot of exposure to kind of the early days of crypto and was quite fascinated, again, by the impact. After about a year at ConsenSys, though, I actually decided, as a marketer myself, a growth person myself, and one that wasn't interested in trying to hype tokens and ride a quick kind of pump-and-dump cycle, I thought it was too early for true product-grounded marketing and growth to be done. And so I actually left the space in 2019 and went back to Web2, joined an early-stage startup, grew a marketing team from just myself. I was employee number seven at that startup, and by the time I had left it was over 100 people. I kind of went through that journey, and decided it was time to come back early last year, and kind of found Obol. I wanted to get into an infrastructure product, an infrastructure project that was building the foundation of Ethereum. I still think there's a lot of need for better developer tooling, better infrastructure, because that sets the foundation for all the innovation that's happening and all the innovation that's going to happen in the future. And I thought that distributed validators had a really critical role to play in proof-of-stake Ethereum, in making the foundation of Ethereum stronger in the long run and ensuring the decentralization of the network. So yeah, now I'm leading growth for Obol, and 2024 is going to be an interesting year for us. So really excited to kind of dig into that as well.
**Speaker A:**
Yeah, for sure. So before we get into Obol, just while we're still in your background: what I love about your story is you are, like, the person who entered crypto so early and then realized that this space hadn't reached the maturity at which your skill sets were needed, and so you left crypto and then you came back. And so obviously you made the right choice, but I wonder if you have any reflections on why crypto, let's say pre-2019, was not ready for formalized marketing, really marketing beyond token prices and this kind of pump-and-dump cycle. And then in that answer, I would be curious to hear what's going on now. Like, where do you think the turning point is? Where have you felt that the industry has reached a point where the things that you really believe in are valuable?
**Speaker B:**
I think back in 2018, 2019, the way I would characterize the Ethereum ecosystem, and kind of Web3 in general, was that there was a lot of really smart, really great dev work and research being done. You know, we were talking about proof of work versus proof of stake and the different consensus mechanisms. There were a number of chains launching on different consensus mechanisms and trying to solve the scalability trilemma, and all of the really technical work that is required to build an ecosystem as vibrant as Web3 is today. And then there were a lot of people trying to make a quick buck. And I think at the time the hype was leading the actual maturity of the technology, and there was a large gap between what the public was thinking about with crypto and where the technology was. And because of that, there weren't a lot of true products. All the use cases were still proofs of concept. You know, there were early designs of how things could be applied in different areas. But nothing was mature enough to actually be ready to be taken to market, where there was a user base and there was demand and there were needs to be addressed and a product mature enough to be able to do that. So I think that's why, for somebody like me, whose core is product marketing, you know, I lead growth, but at my core I'm a product person, and I want to see technology being applied to actually impact people, to make it easier for somebody to do something, or to make their lives a little bit better than before. And none of that was quite ready, I think, back in 2018. Since then, a lot of things have happened. You know, DeFi was introduced, we had the NFT hype, which from a utility standpoint draws on people's creativity and on digital ownership and all of those things, which I think are quite exciting. There's a lot more tooling available. I think the hype cycle has kind of died down a little bit, which is why... actually, I left during a bear market and I came back during a bear market, which was a little bit by choice, because I think the true development and building happens during bear cycles. Right. And it paves the way towards future bulls. But yeah, I think a lot has happened in the space since 2019 to mature it, and I think there are a lot of true products now that need to be built, and that need to figure out how to be applied in a go-to-market sense. So pretty excited about where the space is, and, you know, that's the reason I came back. The plan was always to come back after about three years and kind of see where the space developed, and I think now's the right time to start to build towards the future. I think a lot of exciting things are on the horizon.
**Speaker A:**
Yeah, for sure. And I think that one of the exciting parts of this kind of pre-bull-run phase is that the narratives that are going to drive the next bull run are starting to become apparent. And I think it's so clear it's about LSTs and the staking layer, it's so clear that it's about EigenLayer and restaking, it's so clear it's about modular blockchains. So I totally hear you on that. I guess, last question before we talk modern day: I would love to just hear, do you have any thoughts, or anything you can share, about the experience of being outside of crypto during a full bull cycle? And that's both the exuberant highs and the crazy amounts of money, and, like, Luna at $120, and also the cataclysmic falling and the, like, dinosaur extinction event and SBF and all of this. What was that like, experiencing a space that you love so much, but just not in it?
**Speaker B:**
Yeah, it's an interesting question. And I think, for me, when I was out of it, I frankly didn't really pay too much attention to the headlines, because that to me was still indicative of just a layer of hype that sits on top of the true maturity and continued steady growth of the space.
**Speaker C:**
Right.
**Speaker B:**
I think that's where I was actually quite interested in what was happening.
**Speaker C:**
Right.
**Speaker B:**
All the stuff with people trying to capitalize on the hype, to me, is frankly the part of crypto that I hope goes out of the system over time. You know, I think you want to have good energy, you want good excitement, you want to have excitement that fuels progress and fuels growth in the space, but you want to get rid of the stuff where, you know, people are looking for opportunities to kind of capitalize on that hype. So, you know, I don't feel like I necessarily missed out on the bull. I was still holding assets and kind of riding through it. I obviously was excited on the side, but what truly got me back into the space, and made me feel that it was time to get back into the space, was all the growth and development that was happening as part of the tech itself. You know, Ethereum finally figuring out how to move to proof of stake, after years and years of research and debates and development, et cetera, and now getting to a place where you have layer-two scalability being built on top of it. You have, you know, restaking and kind of different use cases starting to be talked about.
**Speaker C:**
Right.
**Speaker B:**
And you have a now pretty mature foundation that can sustain and support an entire ecosystem of innovation happening on top of it. So I think it's really exciting where we're at now.
**Speaker A:**
I'll never forget in 2022, I remember specifically where I was, which for Americans, it was super late when the merge actually happened. And I just remember thinking, this is such a tragedy that this is happening at this moment, because it was so far after Luna that the people who were going to leave had already left, and there weren't even, like, horror porn stories on it anymore. And then two months later, any excitement over Ethereum and the merge would be wiped out by SBF. And so, like, the merge is so interesting and so exciting, and to call anything that we've experienced a bear market is to, I guess, show your cards about where you're paying attention. And so, like, I just totally feel you on that.
**Speaker B:**
Yeah. And I think, you know, mainstream understanding of Web3, we're still quite a ways away from that, and I think we're developing steps towards it. I think the recent Bitcoin ETF approvals put it back a little bit into that limelight in a positive way.
**Speaker C:**
Right.
**Speaker B:**
And I think there is a steady maturation in how the mainstream sees crypto and how the mainstream sees Web3 in general. But I feel like those narratives and impressions will have to change over time. I don't think we'll get there overnight.
**Speaker A:**
Yeah. And part of that is developing capability that is actually applicable to real people's lives and not just to Internet speculators.
**Speaker B:**
Exactly. A hundred percent. A hundred percent.
**Speaker A:**
Cool. All right, so let's pivot into Obol. And I would love if you could kind of give us an idea of what Obol is, but from the standpoint of yourself, as you were looking for new opportunities in the Web3 space. So can you give us the Brett Li view of what is going on at Obol, and why it's so exciting, just from a super high level?
**Speaker B:**
So Obol's roots come from even back in my days at ConsenSys in 2019, as the idea of moving Ethereum to proof of stake was being talked about, right? And this was pre-genesis. There was still a good amount of people concerned about the potential centralization risks of a move to proof of stake. You know, the fact that now people are voting with their assets, and it's very easy, basically, to trust your assets with professional parties that are experienced. And over time that leads to assets basically pooled into a small group of professional entities, which mirrors, you know, where we're at today with Web2.
**Speaker C:**
Right.
**Speaker B:**
And so I think that fear was pretty prevalent back, you know, as the idea of the move to proof of stake was being talked about. And so the two co-founders of Obol, Collin Myers and Oisín Kyne, were both ex-Meshians, both at ConsenSys, and basically started thinking about this problem in the early days: how to make sure that Ethereum stays trust-minimized, stays credibly neutral, stays decentralized, and resists these natural forces of centralization that happen with proof of stake. And the idea eventually matured into distributed validators. The core concept of distributed validators, in short, is that today a validator in Ethereum, and frankly in any proof-of-stake network, is one node, one validator. So you basically have the entire 32 ETH sitting on one machine. If that machine goes down, that node is offline, right? So there's no failover, there's no redundancy. Now that has a ton of ramifications. As a solo validator, that puts a lot of burden on you to make sure this machine stays up. And you need to have DevOps experience, you need to know what you're doing, you need to know how to troubleshoot it, right? And that's a lot of stake to risk on one machine. So it's much easier, much less risky, for an individual who doesn't have that skill set to trust their stake to somebody else, to a professional entity, you know, to a liquid staking protocol, et cetera.
**Speaker C:**
Right.
**Speaker B:**
And that again creates that centralization impact that I was talking about earlier. Second, even professional entities want to ensure that they minimize the amount of time a machine is offline, because if that validator is offline, they're not earning rewards, and that lowers their potential, right? So what do they do? The only option today is for them to have an active-passive setup, meaning: I have a machine that's live and running, and I have a failover so that in case that active machine goes down, that passive machine can be spun up to basically run that validator in lieu. The problem is, if these two machines are ever online at the same time, that same private key is on both machines, and that's a double signing; that's a slashing event. And so that creates a ton of risk, right? Even for professional validators who want to keep that high availability. So there's no easy way to solve these kinds of offline issues without introducing slashing risk, no easy way to improve the resiliency and redundancy of the network. So there are ramifications on the solo operator side, and there are ramifications on the professional operator side. And basically, distributed validator technology allows this to be solved by taking a single validator and running it across multiple different machines. So now you're validating as a cluster of nodes with natural active-active redundancy. If one of the nodes goes down, as long as three of the four in a cluster are up, then that validator is up and running. And you can increase the number of nodes to seven, which increases the redundancy to two machines going down, or to ten, three machines going down, et cetera. And that lowers the barriers for solo validators, and it improves the resiliency of larger operators as well. So it has a ton of ramifications across the board. Now, going back to your question of why I, as Brett Li, was interested in this: as I mentioned, as I was getting back into the space, I still thought the things that have the clearest need are developer tooling and infrastructure projects and infrastructure products, right? And I think the mission of distributed validators, which is to ensure that Ethereum stays decentralized, that Ethereum as a network stays trust-minimized, stays credibly neutral, is an important one. And distributed validators are critical to ensure that that happens.
**Speaker C:**
Right?
**Speaker B:**
That's why it was part of Vitalik's Ethereum roadmap, or merge roadmap; it's kind of one of the final steps on the merge itself. And that's why I thought it was a critical project and I could lend my skill sets to it, to help it grow, to help it find product-market fit, to help it gain adoption. So that's why I decided to join Obol a year and a half ago.
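For reference, here is the arithmetic behind the cluster sizes Brett mentions, as a minimal sketch. It assumes the standard Byzantine-fault-tolerance bound (a cluster of n nodes tolerates f faults where n ≥ 3f + 1, and needs a quorum of n − f to sign); the helper name `cluster_tolerance` is illustrative, not from any real codebase.

```python
# Fault-tolerance math for an n-node distributed validator cluster, assuming
# the classic BFT bound n >= 3f + 1 with a signing quorum of n - f.

def cluster_tolerance(n: int) -> tuple[int, int]:
    """Return (faults_tolerated, signing_threshold) for an n-node cluster."""
    f = (n - 1) // 3   # maximum faulty or offline nodes the cluster survives
    threshold = n - f  # key shares that must participate to produce a signature
    return f, threshold

for n in (4, 7, 10):
    f, t = cluster_tolerance(n)
    print(f"{n}-node cluster: needs {t} online, tolerates {f} down")
# 4-node cluster: needs 3 online, tolerates 1 down
# 7-node cluster: needs 5 online, tolerates 2 down
# 10-node cluster: needs 7 online, tolerates 3 down
```

These figures match the 3-of-4, 5-of-7, and 7-of-10 numbers that come up throughout the conversation.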
**Speaker A:**
I really liked the way you framed it: pre-distributed-validator technology, we live in this world where one node is one validator. And for the pedants at home, yes, I understand you can run multiple validators on one node. We're talking about the other direction, the one validator equals one node, and...
**Speaker B:**
That actually increases the risk: if you have multiple validators running on one node, that's still one machine, and if that machine is offline, now multiple validators are offline.
**Speaker C:**
Right.
**Speaker B:**
So it even just increases the risk. There's no way to run one validator on multiple machines. And that's what distributed validator technology solves for.
**Speaker A:**
Yeah, and this might be kind of a basic point, but I think there's something that gets a little bit lost in just the vocabulary in this space, like validator versus node. And I think it's really helpful to think of the validator as something that is completely virtual. It only exists within Ethereum and only has a purpose within Ethereum, whereas a node is an actual computer, a real physical computer, or part of a server cluster. Yeah, you're right. But what you're talking about with distributed validator technology is: today, without it, your validator within Ethereum corresponds to one physical machine. And what DVT allows you to do is say, okay, this validator is actually backed by, usually, an odd number, or one that plays nicely with a two-thirds threshold, like 5 or 7 or 10 machines. And the true key insight here, which you've already brought up, is that what Obol is doing is developing software that makes it safe to run redundant backups of your machine. That, today, is not safe. It is just not safe.
**Speaker B:**
Yeah, 100%. An analogy I like to use: think about a validator as an airplane, right? The body of the airplane holds the passengers, let's call it 32 passengers, one for each ETH. A node, essentially, is an engine. And today we're flying on single-engine planes. And you can imagine you would probably not want to get on a single-engine plane and fly across the Atlantic Ocean. It's just probably not safe to do, because there's no redundancy. If that engine dies, guess what? That plane is going down, right? Now, with a distributed validator, what you essentially have are multiple engines on a plane. With a Boeing 747 you have four engines. If one of those engines dies, you know you're still okay, because those other three engines will carry you to the other side of the ocean. So that's another way to think about it: the plane is the validator, and the engines are the machines running it; they're the nodes. And what you now enable is redundancy, because you can have multiple engines powering this validator.
**Speaker A:**
And I guess this again might be a little bit basic, but can you just talk through why it is, I will never say risk-free, but relatively safe to run an Ethereum node, until you start thinking about adding redundancy. And I guess, first, why might you care about having redundant setups? But two, you've briefly mentioned it: can you just talk through what are the actual risks you're introducing by having this backup?
**Speaker B:**
Yeah, so at the core of every validator is a private key, right? It's one Ethereum key that you are using to sign attestations and to do block proposals. Essentially, it's your key to having this validator be run. And every private key is unique on purpose, because you want to prevent people trying to essentially Sybil-attack the network or hack the network, for example by having the same private key show up multiple times while a bad actor tries to mimic an actual good validator on the network. And so inherent in Ethereum, as with all proof-of-stake networks, there are slashing penalties: the network detects some bad or incorrect behavior, and it's going to penalize you for that. And that prevents bad actors from trying to hack the network. A very common, or the basic, one for slashing is double signing. If you are trying to attest to the same duties with two machines, two validators, that have the exact same private key, that is a big no-no; you're going to get penalized by the network, right? And because today, again, it's one validator, one node, one validator, one private key, the only way to provide redundancy is to basically have a copy of that one validator with that one private key sitting on a different machine. The reason you would want redundancy is that as you're performing duties, proposing blocks, you're getting rewards from the network, you're getting rewards from consensus, right? And if you're offline, guess what, you're missing out on rewards. And that decreases the amount of yield that you're getting for staking your ETH. So more sophisticated operators, who want to maximize the amount of rewards that they can get, are incentivized to try to keep uptime as high as possible, to keep every machine running 100% of the time, ideally, if they can. It's never exactly 100, but as close to 100 as possible. And the only way to do that, if you think about it in the cloud sense, is: if a machine goes down, another machine gets spun up. Nope, nothing's lost, right? You're fine. But in Ethereum, the only way to do that is, as I mentioned, to have a passive machine that can be activated when the active machine goes offline, and that creates that double-signing risk. Because if both machines are on at the same time, same private key, the network detects it. Big no-no, you're going to get slashed.
**Speaker A:**
And sorry, just to draw out an example here: the idea is, your primary, for whatever reason, falls offline, and so your secondary is like, no worries, I got this. And it turns out that your primary didn't actually fall offline, it just missed 20 minutes. And so on the 21st minute it's like, I'm back in, let me sign something. And the secondary doesn't realize that the primary is back in, so it's signing stuff too. And now your redundant setup, which was done for the safety and good-natured operation of Ethereum, has created a security problem for Ethereum. And in order to stop that security problem, Ethereum's response is to take your money.
**Speaker B:**
That's exactly right. Yeah, that's exactly right. And one nuance here: with offline, you're just missing the rewards that you would have gotten if your machine had been online. But with slashing, the ETH that you have staked, the network takes a piece of it, and you're actually getting penalties. You're losing money; you're losing assets in those cases.
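To make the double-signing trap concrete: validator clients keep a local record of everything they have signed and refuse to sign anything that conflicts with it (EIP-3076 standardizes an interchange format for these records). Below is a minimal sketch of that bookkeeping with hypothetical names; the point of the active-passive failure mode described above is that the failover machine has its own, empty record and knows nothing about what the primary already signed.

```python
# Minimal sketch (hypothetical names, not any client's actual code) of local
# slashing-protection bookkeeping: refuse to sign a second, different block
# at a slot that has already been signed.

class SlashingProtectionDB:
    def __init__(self) -> None:
        self.signed_blocks: dict[int, str] = {}  # slot -> block root signed

    def safe_to_sign_block(self, slot: int, block_root: str) -> bool:
        previous = self.signed_blocks.get(slot)
        if previous is not None and previous != block_root:
            return False  # conflicting block at the same slot would be slashable
        self.signed_blocks[slot] = block_root
        return True

primary, failover = SlashingProtectionDB(), SlashingProtectionDB()
assert primary.safe_to_sign_block(100, "0xaaa")   # primary signs slot 100
# The failover's database is empty, so it has no idea slot 100 was already
# signed, and it happily signs a different block there: a slashable double sign.
assert failover.safe_to_sign_block(100, "0xbbb")
```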
**Speaker A:**
Got it. That makes so much sense, and I hope everyone realizes why this technology is so crucial. Because in the modern world, redundancy is how we create scalable, billion-person systems, but the construction of Ethereum does not allow us to have safe redundancy, and that's what DVT is for. So I think now is a perfect time: can you walk us through how DVT, or Obol with the Charon, sorry for the pronunciation, the Charon client, works? I think it might be helpful to talk first through, as an ETH staker, what is the software that you're running? And then, if you want to upgrade to become a distributed validator, how does that change the dynamic of what you're doing on your machine?
**Speaker B:**
Yeah, good question. So there are basically two ways you can set up a distributed validator. Let's say there's a four-node cluster, so four machines running one distributed validator cluster. One way is where you run all four of those by yourself; the other is you could have four people each running one node and operating as a multi-operator cluster. Right. So those are two distinct paths. For the first path, basically, what you would do is... so we've developed what we call a distributed validator Launchpad. Actually, our head of product was back at ConsenSys and developed the original Eth2 Launchpad. So when you want to set up your Ethereum validator, you have a dapp, you go online, it's a UI, you walk through the steps, and it'll walk you through exactly how to set up the validator. We've developed something similar for setting up a distributed validator.
**Speaker C:**
Right.
**Speaker B:**
So you go online, go to this web portal, and we'll walk you through the steps. The first step, really, is what we call a distributed key generation ceremony. So you can think of it as: now, instead of having one private key, each of those four machines within a cluster is running a key shard, or a key share. Those shares, when they attest, will, through some fancy math, be put back together to create essentially the full signature, to be able to sign as a single logical validator. Right. So a distributed key generation ceremony creates the key shares, and it's a one-time synchronous event. You run it, and then you have the key shares. Then you set up the key shares on those individual boxes as if each were a regular validator. So really it just adds the additional step of that DKG, and then you basically set up the distributed validator as if each machine were its own individual validator. And Charon, which is a middleware, will monitor each of the different machines and wait for them to be online.
**Speaker C:**
Right.
**Speaker B:**
And then, basically, once they're online, Charon is what aggregates the signatures and then signs as if the entire cluster were a single validator. Now, in a multi-operator setting, it's the exact same steps; the only difference is those key shares go to four individual people, and they would each be setting up their individual machines as if it were a full validator, again with Charon connecting the four. So basically the difference is you just have this additional step of a DKG, and then you're setting up multiple different nodes in order to activate this validator, not just a single machine.
**Speaker A:**
So just for the sake of simplicity, let's only talk about clusters operated by separate people, and then you guys at home can figure out how to reduce that down to one person. It's just easier. Okay, so I think first let's return to this model that we have, where validators are virtual entities that only exist within Ethereum, and nodes are physical machines that exist outside of Ethereum. So it sounds to me that what's going on in this distributed key generation phase is that on the Ethereum side, on the validator side, we're generating a private key that looks like every other validator private key. From Ethereum's perspective, that's what your cluster will use to sign its messages. All good, totally normal. What's interesting, what's happening in Obol and all DVT implementations, is that you're taking that private key that you generated and then splitting it into four parts. And then I think the key magic is that you don't need all four parts in order to reconstruct the original key. You just need, sure, it's configurable, but you just need 2/3 of the key shares in order to generate the full private key. So first of all, is that correct?
**Speaker B:**
Almost. The only nuanced difference, which is actually a pretty important one, is that there are two ways to essentially create these key shares. One is you have a full private key and then you split it.
**Speaker C:**
Right?
**Speaker B:**
But that means that at some moment in time that full private key is sitting somewhere on a machine, which could potentially be a security vulnerability.
**Speaker C:**
Right?
**Speaker B:**
With a DKG, you actually create those key shares at the same time; that full private key is never present, ever. Even after you deploy it, you can never actually get the full private key. And that creates a lot of security value, because as a hacker, if somebody tries to compromise that validator, they can't just compromise one machine. They can't have a man-in-the-middle attack where they take that private key when it's created and steal it. You have to basically attack all four machines in order to generate what is equivalent to that private key. So that's an important nuance for people that are a bit more educated on distributed validators, because there's a security improvement that you gain from a true DKG, which is: you never have that single private key, and you're not splitting it. When you create the private key, you're creating the key shards, the key shares, directly.
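Here is a toy sketch of the distinction Brett is drawing, using Shamir-style sharing over a small prime field. Real distributed validators use BLS keys on elliptic curves and a proper DKG protocol; this simplified version only shows the structural point that each party contributes its own random secret, each node's key share is a sum of values dealt to it, and the full private key (the sum of all contributions) exists only implicitly. The reconstruction at the end is included purely to verify the math; it is exactly the step a real DKG-based system never performs.

```python
# Toy DKG over a small prime field (illustrative only; real DVT uses BLS).
import random

P = 2**31 - 1   # toy prime; real systems work in an elliptic-curve order field
N, T = 4, 3     # 4 nodes, threshold 3

def deal_shares(secret: int) -> list[int]:
    """Evaluate a random degree T-1 polynomial with f(0)=secret at x=1..N."""
    coeffs = [secret] + [random.randrange(P) for _ in range(T - 1)]
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, N + 1)]

# Each node contributes its own random secret and deals shares to the others.
node_secrets = [random.randrange(P) for _ in range(N)]
dealt = [deal_shares(s) for s in node_secrets]

# Node j's key share is the sum of the j-th value from every dealer.
key_shares = [sum(dealt[i][j] for i in range(N)) % P for j in range(N)]

# The "full private key" is the sum of all contributions. No step above ever
# placed it on one machine; it exists only implicitly.
implicit_full_key = sum(node_secrets) % P

def lagrange_at_zero(points: list[tuple[int, int]]) -> int:
    """Recover f(0) from T points (x, y); shown here only to verify the math."""
    total = 0
    for x_i, y_i in points:
        num, den = 1, 1
        for x_j, _ in points:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        total = (total + y_i * num * pow(den, -1, P)) % P
    return total

points = [(x, key_shares[x - 1]) for x in (1, 2, 4)]  # any 3 of the 4 shares
assert lagrange_at_zero(points) == implicit_full_key
```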
**Speaker A:**
Yeah, it makes a lot of sense. And I think the big caveat that we'll have here is that we're not really going to get into the cryptography of how any of this works. I think you just need to take it for granted that when we say you can split a key, then sign stuff with the split parts and then recombine it, it's as if the original key signed the original thing. Sorry, guys, just take it for granted.
**Speaker B:**
Take it for granted, exactly. If you want to read all about it, there are plenty of articles on threshold key signing and, you know, DKGs and all of that stuff. There's a rabbit hole you can go down, for sure. But yeah, probably too advanced to talk about today.
**Speaker A:**
Very cool. So I guess I kind of just gave away the punchline a little bit, but can you talk a little bit about... so we're post-DKG, and let's say you, me, Vitalik and, who's another person, Tetranode, have all decided to run an Obol distributed validator cluster. And so we've done that DKG. Ethereum understands us as being a single validator with a single private key. We each have our shards of the private key, which don't give us any information about the original private key, but can be used to sign. So can you talk a little bit about, first, from the perspective of just the Brett Li machine in this cluster, what's going on? And then can you talk about the step where the Charon client comes in and coordinates these four machines to ultimately produce a single validator signature?
**Speaker B:**
I think we can simplify this: each individual node, to the person operating it, is just acting and behaving like a regular validator, doing regular attestation duties. You know, if it's a block proposal, you're still doing that just regularly.
**Speaker C:**
Right.
**Speaker B:**
The only thing is, when you're signing, you're signing with a key share, and that signature is being sent to Charon, not directly to the beacon chain, right? It's being sent to Charon, which is a middleware, and Charon is connected to all four machines within a four-node cluster.
**Speaker C:**
Right.
**Speaker B:**
And when that signature goes to Charon, all Charon is doing is waiting for the signatures to come in. It's saying, okay, three of the four, four of the four have come; let me try to aggregate these and see: are those all valid signatures? And if so, then once I have enough, once I have more than 66%, more than two-thirds.
**Speaker C:**
Right.
**Speaker B:**
So in a four-node cluster, that would be three out of the four. I can now create the full signature for that entire validator and then send it to the beacon chain, send it to the Ethereum network, essentially. So really, to the individual, there's no difference between operating a distributed validator node and operating a regular validator.
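A conceptual sketch of the middleware behavior just described, with hypothetical names (this is not Charon's actual code): collect one partial signature per cluster node for a given duty, and only produce a combined signature once a BFT quorum has contributed; below quorum, produce nothing, which is the fail-safe behavior discussed later in this conversation.

```python
# Conceptual threshold-signature aggregation for one validator duty.
from dataclasses import dataclass

@dataclass(frozen=True)
class PartialSig:
    node_id: int      # which cluster node produced this share
    signature: bytes  # signature made with that node's key share

def try_aggregate(partials: list[PartialSig], n: int) -> bytes | None:
    threshold = n - (n - 1) // 3                         # 3-of-4, 5-of-7, 7-of-10
    shares = {p.node_id: p.signature for p in partials}  # one share per node
    if len(shares) < threshold:
        return None  # fail safe: below quorum, send nothing to the beacon chain
    return combine(list(shares.values()))

def combine(shares: list[bytes]) -> bytes:
    # Stand-in for the real cryptographic recombination of signature shares.
    return b"|".join(sorted(shares))
```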
**Speaker A:**
And so I think like the one piece of information that we maybe dropped out of this was that from the perspective of cryptography, a key shard and an actual private key, correct me if I'm wrong, but they're the same thing. And so therefore you can sign a message with a key shard in the same way that you sign it with a private message.
**Speaker C:**
Right.
**Speaker A:**
Or sorry, private key.
**Speaker B:**
That's correct. It's just that the key share's signature would not be valid if you sent it directly to the beacon chain, which you can't, but let's just say you could. It wouldn't be a valid signature, because it's not from the full private key.
**Speaker C:**
Right.
**Speaker B:**
And so you have to have at least three of the four in order to generate the actual valid signature for the validator as a whole.
**Speaker A:**
Makes sense. So let's jump to a 10-node cluster, just to make the math a little bit easier. And let's maybe walk through, I don't know if this is the right framing, but some of the failure modes. So it totally makes sense when all 10 nodes are humming along, attesting to the exact same thing, sending the same thing to the Charon middleware client, and then that's aggregated and sent to Ethereum. Great. So can you maybe talk through what happens, first, let's say, when four of the 10 are offline, so less than 66% are online, and kind of what happens there? And then can you talk through what happens when we're not in an offline scenario, but in a scenario where, let's say, nine of the nodes say one thing and the tenth says something conflicting? How does, I guess, the Obol system work through some of these tense consensus bits?
**Speaker B:**
That's a good question. So I think, let's just start off with the scenario where four of the nodes are offline.
**Speaker C:**
Right.
**Speaker B:**
And you have the other six. Charon is basically acting as a signature aggregator; that's all it does. It's trying to put these signatures together to generate the full signature. And because of the cryptography, it requires at least 2/3 of the signatures to be valid in order to generate the full signature. So in the case where four of the nodes are offline, it wouldn't have the adequate number of signatures from the key shares on the individual nodes to generate a valid signature for the entire DV cluster. And so it just doesn't send anything to the beacon chain, because it says: it's not valid, we're just not going to send it.
**Speaker C:**
Right.
**Speaker B:**
Similarly, in the case where you have some Byzantine behavior, where you have a malicious node within the cluster, Charon will actually be able to detect whether that signature is valid or not, whether it tries to self-slash, et cetera. If it doesn't have the number of valid signatures necessary to generate the valid signature for the distributed validator as a whole, then it just won't send anything to the beacon chain. So it actually acts as a protection layer here.
**Speaker C:**
Right.
**Speaker B:**
You basically have to have, in this case, at least seven nodes compromised in order for any bad thing to happen with the validator. To self-slash or to do anything malicious, you have to have at least seven of the nodes compromised, which creates a level of resiliency in the actual validator entity itself that protects against these malicious behaviors or offline scenarios.
**Speaker A:**
Part of the, I hate to say, trick here is that with Ethereum, if you just don't participate when it's your turn, it's, like, not exactly, but basically, net neutral. Right?
**Speaker B:**
Yeah. You're just missing out on rewards.
**Speaker A:**
Exactly, yeah. Whereas if you do something wrong, the hammer comes down, the financial hammer, the ban hammer; you're in trouble. And so I think kind of what you're saying is that the Chiron... Charon. I don't know why that's so hard.
**Speaker B:**
We'll get this, we'll get this right by the end of this.
**Speaker A:**
I know, I can't believe we're recording this. This is very embarrassing for me. But anyway, the Charon client is leveraging that dynamic to basically say: if there's any sort of question about what's going on here, just don't show up. Right? We're just not going to send anything. And yeah, we might miss a little attestation money, we might miss a huge MEV-Boost proposal, but if there's any question, we're just not going to do anything, because the worst-case scenario is slashing, and that's not on the table when you use our product.
**Speaker B:**
That's correct. That's correct. It's an extra layer of protection at the validator level that prevents malicious behavior, which today doesn't exist. Because today, if a validator is malicious, it'll try to sign malicious messages on the actual beacon chain itself. There's no additional layer of protection at the validator level, which is what Charon acts as.
**Speaker A:**
This is a little bit of a detail, but a cryptography detail. Let's say I have 10 signatures and I hope that they're all valid. Is it pretty easy for me to check every subset and figure out which ones can be combined and which ones can't? So for example, if I have 10 signatures, eight are valid and two are not, am I able to quickly identify which two are invalid, put them aside, then aggregate the eight correct ones and present that to Ethereum as a signature?
**Speaker B:**
Yeah, essentially. This is getting a little bit into the cryptography, but it's based on elliptic curves. So it's not necessarily that you're just adding things together; you're actually trying to fit things into a math puzzle, to find a point on this curve.
**Speaker C:**
Right.
**Speaker B:**
And so, yeah, essentially if something takes you off the curve, you could throw it out and take the next one.
**Speaker C:**
Right.
**Speaker B:**
And so this is getting a little bit deep; I'd probably want my CTO here to answer more definitively, if you will. But yeah, you'll be able to find the valid signatures and be able to sign.
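One way to read that answer, sketched under the assumption that the scheme behaves like standard BLS threshold signing: each node's key share has a matching public key share, so every partial signature can be checked individually before combining, and bad shares are identified directly rather than by trying subsets. The names and the injected `verify` function are illustrative, not a real library's API.

```python
# Sketch: discard invalid signature shares before combining. `verify` stands
# in for the real pairing-based check of a share against its public key share.
from typing import Callable

def usable_shares(
    shares: dict[int, bytes],         # node_id -> partial signature
    pubkey_shares: dict[int, bytes],  # node_id -> that node's public key share
    message: bytes,
    verify: Callable[[bytes, bytes, bytes], bool],
) -> dict[int, bytes]:
    """Keep only the partial signatures that verify individually."""
    return {
        node_id: sig
        for node_id, sig in shares.items()
        if verify(pubkey_shares[node_id], message, sig)
    }

# If len(usable_shares(...)) >= threshold, combine those; the two bad shares
# in the example above are pinpointed directly, with no subset search needed.
```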
**Speaker A:**
Okay, so we'll jump back to our four-person cluster, where it's you, me, Vitalik and Superphiz. Everything that we're talking about in DVT is so, so awesome when you're thinking about redundancy and your ability to create a more distributed setup. It works so well when you really trust your cluster.
**Speaker C:**
Right.
**Speaker A:**
Problem is, we're in a trustless space. And the thing I'm specifically thinking of is, in our four-person cluster, let's say, I don't know if you can tell, but I'm in Mexico right now, let's say there is a problem in my house and my computer is just down. And then Brett has really pissed off his ISP, so they say: no Internet to your house. And so now Superphiz and Vitalik, who are rock-star ETH stakers, they're only doing the right thing, they've done everything right, have now entered into a situation where their trust in us has resulted in penalties for them. And so I'd love for you to just talk through the trust implications of this trustless solution to centralization.
**Speaker B:**
Yeah, it's a good question. And to answer that question fully, I do need to go a little bit into how Obol has designed our implementation of distributed validators. So as a design philosophy, we wanted to build Obol in a way that basically allowed it to be modular, allowed it to be used in all the different contexts of staking.
**Speaker C:**
Right.
**Speaker B:**
And the purest essence of doing that is to build it as a middleware, as essentially a new DV client that sits inside the validator tech stack. It sits between the validator client and the consensus client.
**Speaker C:**
Right.
**Speaker B:**
Now, with that said, we're not changing any of the trust assumptions, actually, in this case, around the validator itself. In our current version, which we're calling V1, you essentially are treating the logical DV cluster as a single validator. And for all the people within that cluster, we're not making any assumptions; that means the onus of trust is on the operators themselves today. Meaning, I need to trust these three other people within my cluster.
**Speaker C:**
Right.
**Speaker B:**
To an extent.
**Speaker C:**
Right.
**Speaker B:**
One of them can be offline, et cetera; they have time to come back on. But they all need to be good actors. I need to know that I'm in a cluster with them. And if a couple of people are offline and there are some permanent issues, then we have some social consensus to be able to say: let's exit the cluster and let's create a new one. Right. So that onus today lies on the operators of those clusters. Now, with that said, we know the future looks to be more of... we want to develop something that's even more trustless than it is today. Today, we're saying it's trust-minimized.
**Speaker C:**
Right.
**Speaker B:**
But it's not fully trustless. You kind of have to know the other people within your cluster. We are doing a number of different areas of research to think about what we call V2, what it will look like when it's more fully trustless. Because then you're getting into penalties, and you're getting into, you know, tokenomics, and there are a lot of interesting areas that need to be developed in order for that to happen. So that's the answer, I guess, to that question. And again, the last thing is, the reason that we did this is to basically have a product that is able to be used in various formats today.
**Speaker C:**
Right.
**Speaker B:**
So, for example, as a professional operator, you know, I trust myself, I'm the only one operating it, there are no trust assumptions here, but there's still value in DVT to be able to have redundancy within my architecture. And Obol allows me to do that.
**Speaker C:**
Right.
**Speaker B:**
For liquid staking protocols that are running many different operators, you know, they can have DVT implemented in a more trust-minimized sense. And there are community clusters being rolled out today, of people who know each other and have friends that they've staked with.
**Speaker C:**
Right.
**Speaker B:**
And they want to now have redundancy across that. So those are the use cases we tackle first, and we will tackle the more trustless, fully trustless type of structure as part of our V2.
**Speaker A:**
Makes a lot of sense. So I think this is a good moment to pivot into kind of the implications of DVT and of Obol. And I think we've touched on a lot of them throughout the last 45 minutes, so I don't want to be too pedantic. But, yeah, I would love to just hear how you see this transforming what Ethereum is. And let's just take the easy ones off the table: it reduces the cost of participating in staking from 32 ETH down to a more approachable number. Right? I think that's the big one. Like, good. Thank you.
**Speaker B:**
Yeah. For community. For community validators. Yeah.
**Speaker A:**
The two that I am really curious to hear your thoughts on are, number one, the implications for liquid staking providers, and, of course, the big elephant in the room, Lido. Right? I'd love to hear how DVT is going to transform just the space in general, and maybe any specific projects or things that are happening right now. Then the other thing that I'd like to talk about, also super topical: we've seen so much communication about the importance of client diversity over the last, I don't know, three weeks, whenever the Besu bug happened, and then two days later the Nethermind bug happened. I would love to hear the implications of DVT, not necessarily for creating these clusters of different people to create more decentralization and redundancy, but for me myself: what benefits could I get? I guess I've already given away the answer here, right? But I myself could run a cluster that has all the different combinations of execution clients, and then know that if one has a bug, that is not going to take down my whole stake. So those are the two big ones I have in my head: the implications for LSTs and the implications for client diversity. And I'd love to hear your thoughts, and love to hear if there are any other big categories where you see DVT just changing the game.
**Speaker B:**
On the liquid staking side, there is a big push in the community, and I think there has been for a while, to make sure that liquid staking protocols stay decentralized, with Lido obviously being the one kind of in the direct target of that, because they're the market leaders; they have a lot of stake, you know, so they're running a good portion of the network. The challenge with liquid staking protocols is that they're putting a lot of trust into their node operators, right? In Lido's case, they have 27 node operators, and each of them runs like 1% of network stake. It's a lot; it's a significant amount of assets. And because of the trust assumptions that liquid staking protocols need to put onto the node operators, they really can only trust the most professional node operators.
**Speaker C:**
Right.
**Speaker B:**
The ones that are battle-tested, the ones that have gone through it, that know what they're doing, that won't go offline, that won't get slashed, you know, that can be trusted, frankly, with this amount of stake. That creates a very natural centralization effect on liquid staking protocols. And, you know, Lido and others have realized that in order to decentralize, they need to change some of those trust assumptions. That's where distributed validators can come in, because now, if one of the node operators goes down, you have failover, you have redundancy, right? If one of the node operators frankly goes malicious, or tries to do something bad on the network, you now have 3, 6, 9 other node operators who are basically minimizing the amount of risk that that can pose to the protocol.
**Speaker C:**
Right.
**Speaker B:**
And so distributed validators are a critical technology for liquid staking protocols to be able to decentralize the node operator set, and in fact to get more medium-sized community validators, heck, even solo stakers, involved in being node operators for liquid staking protocols.
**Speaker C:**
Right.
**Speaker B:**
And so with Lido, we've been working with them for quite a while, and recently we're on the path to launching the first additional module on Lido V2, which is their next generation of how they're managing their node operations. This new module is called Simple DVT, and it's the first time that distributed validators are going to be implemented in the Lido protocol as part of their node operator set. And the reason for this is, again, to allow them to start onboarding community validators into their node operator set, and to have safety and be able to trust that they can do so safely.
**Speaker C:**
Right.
**Speaker B:**
So with Simple DVT, we just passed a critical milestone, which is our testnet with them. We had minimum bars for effectiveness, uptime, kind of performance metrics, that we needed to pass as part of this testnet in order to get approved, or move towards approval, for mainnet deployment. We are happy to announce that all of the performance bars were passed, and we shared a report recently about that. And so we're on kind of the final step towards mainnet deployment of Simple DVT, which is quite exciting. It's going to onboard 200 new node operators onto the Lido network; today there's 26. So essentially, with a single module, you've almost 10x'd the number of node operators on Lido. And this is only going to grow as we implement onto mainnet. We're going to run it for a while, see how it performs, and then over time that module is going to grow, and I think it's a great test bed for the impact that DVT can have for liquid staking protocols trying to decentralize their node operator sets. I'll also mention real quick that we have also been working with ether.fi, which is another liquid staking protocol, for quite a while. They actually started with a more permissionless model; they wanted to have decentralized, permissionless node operators almost from the start. And so together with them we launched what we're calling Operation Solo Staker, which is a program to basically sign on individual solo stakers, individual solo operators, to be able to be node operators for ether.fi. And that has also used DVT at its core. So we now see multiple liquid staking protocols leveraging distributed validators to basically create safety while still being able to decentralize their node operator sets.
**Speaker A:**
Yeah, awesome, man. And I think we just have to take a moment right here and just congratulate you and thank you, Brett, but also you, Obol, because, look, the Lido problem is a real problem, and it's one of those things that we were, I mean, it's tough to say sleepwalking into because of just the amount of noise about it, but it's just something that was happening despite how loud the community got about it, right? And I think that we were gifted a gift from the Ethereum gods, which is, say what you will about EigenLayer and this whole points thing, but it has created the momentum needed to create other viable LSTs in size. So that's incredible, but they're creating the exact same problem that Lido did. And so that's why this Obol moment is even more important and even more special, because not only are you actually doing something that takes us real steps away from the Lido problem, and the Lido problem, taken to its fullest extent, is the end of Ethereum; we've just created a slower, shittier AWS, right? So I think not only are you doing big things for Lido, but you're doing big things for ether.fi, and soon, I'm sure, Puffer Finance, and after that maybe Frax ETH, and all of these staking providers that are doing interesting things within the EVM, but at a cost that is so high in terms of centralization that it draws into question the whole enterprise. So first, before I move on, just congratulations on passing all your tests, on getting closer, and on doing big things for the Lido problem.
**Speaker B:**
So thank you, I appreciate that. The one thing I will say, though, is that in working with the Lido team, I think, you know, in general they also recognize the importance of maintaining decentralization. And I think that's why they have been so proactive in delivering Simple DVT, and they've also started building out a second module, their community validator module. So I think in general, Lido also recognizes the potential risk of being too centralized. And I think, as a whole, the entire community is just more aware of the risk of centralization, especially as there is more excitement around staking. There's now, with restaking, that added excitement.
**Speaker C:**
Right.
**Speaker B:**
It creates additional need, and additional importance, to make sure that the node operator set of every liquid staking protocol is decentralized, and truly decentralized, because it creates risky forces in the market if there's too much centralization in any one part of it.
**Speaker A:**
All right, unfortunately we're running low on time, so I'm going to have to say this is the last question, but, man, I really could go on for hours and hours. So with the last question here: I fully feel you and am on board that this is the right way to decentralize the technology and the infrastructure behind LSTs. And this is all about the node operator set and the physical machines, and how to create decentralization in the real world. My question for you is, as we enter this new meta where we get more decentralized on the machines contributing, that almost kind of gives us the permission structure to centralize all of the actual value into, like, one or two single tokens within the EVM. And so I find it kind of funny that the more we decentralize outside of the EVM, the more comfortable we are centralizing our assets within the EVM. And so I wonder where you fall on that observation. One, tell me if you think that's just asinine. But two, do you see that more as: that is the next problem we need to solve, and we're just going to leave it to the people building within the virtual machine? Or do you see that as: no, no, that's not a problem, that's a good thing; what we're doing is creating a centralized money LEGO for DeFi that is backed by a decentralized, trustless network?
**Speaker B:**
It's a really interesting question, right? I think there is this balance that needs to be struck between having standard ways of operating or transacting in the Ethereum network versus creating single points of failure. And that constant balance and battle will continue for as long as Web3 is around, which is going to be a long, long time.
**Speaker C:**
Right? Yeah.
**Speaker B:**
So I think the answer to that question is, just because there's DVT doesn't mean there should only be one liquid staking protocol. Let's take that to its extreme, right? There are other risks involved with having only one liquid staking protocol: smart contract risk, and I think just social risk, et cetera. So I think there needs to be a balance struck there. Now, you also don't want to get too far in the other direction, where everything is too scattered, right? With layer twos, there's starting to be just a lot going on there.
**Speaker C:**
Right.
**Speaker B:**
And so there's some standardization, and perhaps interoperability principles, that still need to be applied. So I don't know if I answered your question directly, but in short, there is this kind of natural balance, and I don't have a clear answer about what that balance looks like, because I think the ecosystem will evolve and figure itself out. I think we have a good way, as part of the Ethereum ecosystem, of self-governing: if everything is going too far one way, the community pushes back and pushes it the other direction. So I trust that we'll find the right balance for all of this in the future. But the goal is to continue to remove and reduce single points of failure within the network as a whole. And from a distributed validator standpoint, that's one critical area we're focused on: reducing the single point of failure at the consensus layer of Ethereum. And, sorry, real quick, I know we didn't have the time to cover it, but the client diversity topic is actually pretty critical for this as well.
**Speaker A:**
Let's spend five more minutes, why don't we talk about it?
**Speaker B:**
Okay. Yeah, so the client diversity problem is quite interesting. Going back to why some of the issue exists: you want to be using the best-in-class client when you only have one choice, right? If you're running one machine, you're running one validator, I want to make sure I'm using the one that everybody else is using, because I feel the safest when there's social consensus and social support behind it. But that naturally creates a risk to the ecosystem as a whole, because if there is an issue with that most-used client, Geth, then a massive percentage of the network is impacted. With distributed validators, you no longer need to make that choice, because each node in your cluster can have a different client configuration: a different execution client, a different validator client, a different consensus client, and you have redundancy now. So it actually makes it much easier to get client diversity, because you're not weighing the choice: do I go with a minority client that maybe is a little less proven, or do I use the majority client that everybody else is using and increase ecosystem risk? You don't have to weigh those choices, because you can have a mix of clients across the nodes within your cluster. So I think the actual solution to the client diversity problem is to be able to run a multi-client setup in every single validator that you have.
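To make the redundancy argument concrete, here is a minimal sketch of the idea. It is not Obol's actual tooling; the config format, node names, and the 3-of-4 signing threshold are illustrative assumptions, though the client names are real Ethereum implementations.

```python
# Hypothetical check: can any single client bug halt this DV cluster?
from collections import Counter

# An illustrative 4-node cluster config (not Obol's real format).
cluster = [
    {"node": "node-0", "execution": "geth",       "consensus": "lighthouse"},
    {"node": "node-1", "execution": "nethermind", "consensus": "teku"},
    {"node": "node-2", "execution": "besu",       "consensus": "prysm"},
    {"node": "node-3", "execution": "erigon",     "consensus": "nimbus"},
]

THRESHOLD = 3  # assumed 3-of-4 threshold: 3 healthy nodes needed to sign

for layer in ("execution", "consensus"):
    # Size of the largest group of nodes sharing one client at this layer.
    worst = max(Counter(n[layer] for n in cluster).values())
    survivors = len(cluster) - worst  # nodes left if that client fails
    verdict = "stays live" if survivors >= THRESHOLD else "halts"
    print(f"{layer}: worst single-client outage leaves {survivors} nodes -> {verdict}")
```

With four distinct clients at each layer, a bug in any one of them takes out only a single node and the remaining three still clear the threshold, whereas a homogeneous cluster would halt outright.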
**Speaker A:**
Yeah. And I think this makes so much sense for the professional staking providers, like, why wouldn't you do it? As a, I'm a Rocket Pool home staker, so I'm just trying to think through: am I really going to go buy two more computers, and maybe colocate them, maybe put them somewhere else, maybe get an Amazon-hosted validator? But then what am I doing it for? Most of the stake is with the big providers anyway. So this actually might just straight up solve the client diversity problem. And I think, too, that as solutions like DVT make the big players more comfortable with all the clients, that just trickles down into...
**Speaker B:**
Yeah, but even as a solo staker, you can now have client diversity within your cluster, right? Even if you're still running a single node yourself, and you're working with, say, three other people to run a cluster, each of them could have a different client configuration. So instead of running a single node with 32 ether on your own, maybe you're still running a single machine, but it's one of four nodes in a cluster.
**Speaker C:**
Right.
**Speaker B:**
But across four different operators, different people, each with different clients. So there are a lot more configurations, and frankly more flexibility, that you can choose to operate with that you didn't have before. And I think that's still a critical point, even at the home staker level.
**Speaker A:**
Yeah. And so, I won't make you take a position on behalf of Obol, but do you personally foresee, once DVT, and I guess specifically Obol's DVT, is permissionless and accessible to everyone with just a few button clicks, that home stakers are going to be a major part of your user base? Or, at least for this hypergrowth, limited-resources phase, is what you're doing today really about finding those professional staking opportunities and helping them understand how much they gain by building in the resilience and the trustlessness of DVT?
**Speaker B:**
Yeah, so I would say both are equally important for us. We're spending just as much effort on home stakers as we are on professional operators, and we didn't have the time to go into the different things happening on the home staker side. But I'll just say that the first set of community clusters are being started; Superphiz actually ran the first couple. We are now talking to hacker houses in Africa who want to get more staking onto the continent. We're educating: we recently did a staking workshop with SheFi to try to get more people staking there as well. So, what I hope to see, and I won't make any predictions about the future, is that the barriers to entry for home staking get lowered more and more over time, and that we see a higher amount of participation from at least small and medium validators, and even individual, true solo operators, so they have more opportunities to participate in operating the network. One last thing I'll say, and I know we're probably over time, is that one of the things we're hoping to build is an on-ramp for more solo stakers to participate. A big part of what I think is missing there is credentialing: having individuals be able to prove their experience. Larger entities have reputation, right? I'm Figment, or I'm Blockdaemon; you know us, we have credibility. But if you're an individual sitting at home, even if you had five or ten years of experience running validators, there's no way to prove that other than your word. So we recently launched what we call the Obol Techne credential, which is an SBT that we give out based on objective metrics of validation. You have to maintain a certain uptime, a certain effectiveness, and run a DV for a certain amount of time, and you earn this credential. And we're working with Lido, EtherFi, and other liquid staking protocols to build credibility behind this credential, so they can see it as proof that this particular individual has demonstrated the experience to run a node at this level of performance, which makes them more comfortable allowing that person to operate in their community operator set, for example.
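A minimal sketch of what an eligibility check over those objective metrics might look like. The metric categories (uptime, effectiveness, time running a DV) come from the conversation; the field names and every threshold below are invented for illustration and are not Obol's actual Techne criteria.

```python
# Hypothetical Techne-style eligibility check (thresholds are made up).
from dataclasses import dataclass

@dataclass
class OperatorRecord:
    uptime_pct: float          # observed validator uptime over the window
    effectiveness_pct: float   # attestation effectiveness over the window
    dv_days_run: int           # days spent operating a distributed validator

# Illustrative thresholds, chosen only to make the example concrete.
MIN_UPTIME = 95.0
MIN_EFFECTIVENESS = 90.0
MIN_DV_DAYS = 30

def eligible_for_credential(op: OperatorRecord) -> bool:
    """Return True only if the operator meets every objective metric."""
    return (op.uptime_pct >= MIN_UPTIME
            and op.effectiveness_pct >= MIN_EFFECTIVENESS
            and op.dv_days_run >= MIN_DV_DAYS)

print(eligible_for_credential(OperatorRecord(98.2, 94.5, 45)))  # True
print(eligible_for_credential(OperatorRecord(99.0, 80.0, 45)))  # False
```

The design point is that every input is objectively measurable on-chain or from attestation data, so the resulting credential can stand in for the reputation that only large entities have today.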
**Speaker C:**
Right.
**Speaker B:**
And so I think some of these things are still missing in the ecosystem, and we're hoping to fill them, or at least contribute to their development, to create more opportunities for home stakers to participate in the network.
**Speaker A:**
Yeah, and while you were talking, I just had the realization that the point I was making is really, really specific. Maybe there are going to be people like me who are interested in increasing resiliency by running DVT within my own empire, right? But the much more important user from the home staker standpoint isn't going to be the guy who wants to create resiliency within his own setup. It's going to be the person who wants to participate with other people. Maybe that's with a group of their four friends, or maybe, and I don't know how well the technology works yet, I could imagine a world where it's a 5,000-person community cluster and you want to opt in to be the 5,001st person. And yes, you don't trust those 5,000 people, but we're at the number of people where you're back into trusting statistics. So that is really, I'm sure, what you mean when you say that home stakers and that ilk are just as important to you as the professional operators.
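That "trusting statistics" intuition can be made roughly quantitative. The sketch below is a back-of-the-envelope only: the threshold rule, the independence assumption, and all the numbers are illustrative assumptions, not measurements of any real cluster.

```python
# In a threshold cluster, the validator keeps signing as long as at least
# t of n nodes are online. Assuming independent per-node availability p
# (a big simplification), liveness is a binomial tail. Numbers are invented.
from math import comb

def cluster_liveness(n: int, t: int, p: float) -> float:
    """P(at least t of n independent nodes are up simultaneously)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))

# Small cluster: 3-of-4 threshold, 99% per-node availability.
print(f"{cluster_liveness(4, 3, 0.99):.6f}")    # 0.999408

# Large community cluster: ~2/3 threshold, individually flakier nodes.
print(f"{cluster_liveness(99, 66, 0.90):.6f}")  # 1.000000 (to 6 places)
```

The second line is the speaker's point: at sufficient scale you stop needing to trust any individual operator, because the aggregate failure probability collapses even when each node is unreliable.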
**Speaker B:**
And some of those dots are being connected, right? Because if you think about Lido and EtherFi, they're professional liquid staking protocols, some of the largest on the network, and they see DVT as a way to onboard more community stakers onto their networks.
**Speaker C:**
Right.
**Speaker B:**
And so I think you now see this interesting opening up of opportunities for home stakers, even if they don't have their own ETH, even if they're not solo staking, to earn enough experience and credibility to become a validator for Lido or a validator for EtherFi. These on-ramps are being built to allow them to do that, and DVT is at the center of it, because without DVT, none of these community modules or community node operator sets would be possible.
**Speaker A:**
Yeah, awesome man. It's frustrating, there's only so much time in the world, and honestly we didn't even get to some of the more interesting parts. You're going to have to come back. I really want to talk to you about the implications of DVT on latency: we're already seeing so many games around how, if you have enough stake and you're willing to delay until the last possible second, that affects staking returns, and now we're adding in a new step. What does that mean? But for the sake of your schedule and our listeners' ability to pay attention, I'm going to have to cut us off here. So Brett, I just want to say, first of all, thank you again, and also congratulations on what Obol has done so far, but really on the achievements that you're starting to publicly list off with Lido and with EtherFi and with all these other massive pillars of Ethereum. So thank you and congratulations.
**Speaker B:**
Thank you so much Rex. Thanks for having me. And yeah, looking forward hopefully to having another opportunity to chat again soon.
**Speaker A:**
Of course. Well, before I let you go, can you share with the audience how they can find you and how they can find Obol? And if they're interested in DVT, and maybe becoming an operator eventually, can you talk a little bit about where you are in the process and what people should be looking out for when it opens up to the public?
**Speaker B:**
So we're already in mainnet open beta, meaning if you're somebody who wants to run a DV, you can do so already. You can find us at obol.tech, our website, which will navigate you to the various places you can go: you can join our Discord from there, and if you want to learn more about the technology, you can do that from there as well. Me personally, I'm happy to talk to anybody who wants to connect. You can find me on Twitter at Composius, and also on Telegram at Posius.
**Speaker A:**
Awesome, man. Well, I'm just gonna say it again: thank you, thank you, thank you. And congratulations. And yeah, man, I hope to see you again on the podcast soon. So, Brett, thank you and have a great day.
**Speaker B:**
Thanks, Rex.