**Speaker A:**
Hello and welcome back to the Strange Water podcast. Thank you for joining us for another episode. One of the most exciting parts of working in an industry so close to the bleeding edge is that we get to watch in real time as new research finds its purpose in the field. In zero knowledge cryptography, we're watching as this brand new research expands the purpose of cryptography from just encryption to the key enabling tool of decentralized computing. ZK rollups are perhaps the most salient example these days. Computation inside of Ethereum is what we call trustless. But the same properties that make it trustless make computation very expensive. And if we were to just naively upgrade Ethereum with more computing power, we would also lose everything that makes Ethereum valuable. But ZK cryptography gives us an almost magical solution: we can do all the computations somewhere inconsequentially inexpensive and just record the result. We found the bottleneck, and ZK eliminated it. But execution isn't the only bottleneck on Ethereum. Storing data is prohibitively expensive. So expensive that we basically only store strings and numbers, and even that gets ridiculous. I mean, just look at inscriptions. Storage is so expensive that the idea of a smart contract having a large scale database is so outlandish that it just doesn't quite track, right? But that's the exciting part of working in an industry so close to the bleeding edge. You get to watch in real time as research finds its purpose in the field. You get to watch ZK transform Ethereum into the world computer.

Today we have Scott Dykstra, the co-founder and CTO of Space and Time. At its core, Space and Time is a cryptography company founded to answer a very specific question: how can we use zero knowledge to prove SQL? The result: just like ZK lets us offload execution, it will also let us offload storage. A smart contract can craft a SQL request, send it off chain, receive back the result and a proof, and, once verified, be assured that the result is as good as if it were in the trustless EVM. This is an incredibly fun and wide ranging conversation on everything from the interaction between AI and crypto to the importance of capturing higher and higher levels of the value stack. I learned so much from Scott, and I know you will too.

One more thing before we begin: please do not take financial advice from this or any podcast. Ethereum will change the world one day, but you can easily lose all of your money between now and then. Okay, let's get started.

Scott, thank you so much for joining us on the Strange Water Podcast.
**Speaker C:**
Yeah. Great to be here, Rex, for sure.
**Speaker A:**
So before we get into Space and Time and all the exciting things that you're bringing to, I guess, both the blockchain ecosystem and the regular developer ecosystem: I'm a huge believer that the most important part of every conversation is the people in it. So can we talk a little bit about who you are, how you found programming and crypto, and, I guess, why you didn't run away?
**Speaker C:**
Yeah. Background in computer science, did undergrad in comp sci, master's in data science at Berkeley, and then got into crypto kind of in 2020. I was starting to goof off with contracts, but it was really more 2021 when I got real obsessed with it. Like everyone, I was buying crypto for a while, but honestly I didn't know much about DeFi until some close friends put me onto it in 2021. Before that I spent a decade in data warehousing at the enterprise level, building enterprise data warehouses for the Fortune 500, and it was a very eye opening learning experience. Kind of boring, you know: big database systems, distributed scale, 500 terabytes across these big clusters of compute in the cloud or on premises in data centers, and a lot of SQL, a lot of data analytics. And it was all about taking the Internet's data and basically siloing it off to these black box analytics systems owned by mega corporations that essentially steal users' data and use it as a product. And I contributed heavily to that. It's exciting to not be doing that anymore, and to be working on something that's sort of the opposite, the antithesis of that: decentralizing the world's data, and working on a solution that puts ownership back in the hands of the Internet's constituents with Space and Time.
**Speaker A:**
Yeah. And I want to circle back to your entry into crypto and DeFi and realizing that there's an opportunity here. But while we're talking about your history in data warehousing, brief background: I used to work for Anheuser-Busch InBev. My first job was in technology innovation, and one of the major projects that I contributed to was...
**Speaker C:**
You were unironically using MicroStrategy or something.
**Speaker A:**
No. Yeah. Maybe at the time that name wouldn't have meant anything to me, so I don't know. But here's the big thing we were tasked with: Anheuser-Busch, at least in 2013, was still using magnetic tape, still using machines from the 70s that were just not working. And so we were tasked with finding a good data warehousing solution and a BI tool. So you know, we worked with SAP BusinessObjects, we worked with QlikView, we worked with others; I can just start listing all the names, and I'm sure you're familiar with them. But I think there's a lot of really kind of crazy stuff that you don't realize without getting into it. And I think from your side you can talk about the technicals, but from my side, it's the realization that for all the data that's created in the world, it is completely inaccessible to the decision makers who could actually do something with it. And I would love to hear some of your reflections and thoughts: as the Internet and technology and computers are so new, relatively speaking, how have you been watching businesses transform as we've entered the information age? And let me just be really frank with you: my job was to get as much data into dashboards as possible so that business leaders could make decisions. And the reality is, no matter what we put in front of them, they were making decisions based on gut instinct, or the conversations that they knew they had to have with customers, or...
**Speaker C:**
A great data engineering and business analyst team that built great reports, but the executives don't even trust the reports, because they're like: you might have worked hard on this, but I'm not sure I even trust your data.
**Speaker A:**
Exactly. So, very open ended question, but how do you think about the transformation of corporate business during this information age? Do you think there's a lot more opportunity to unlock decision making using, you know, big data? Or do you think that, in terms of just getting things up and running, we've already kind of achieved that goal?
**Speaker C:**
We... I think we've overachieved, and now we're actually trying to scale back. Maybe a mildly controversial take, or hot take if you will. Listeners, we'll get into ZK in a second; I'm going to ramble for a minute on data warehousing and business analytics, and we'll get into zero knowledge and why that's relevant in a moment. I think we've over indexed. I think we spent the last decade just adding tool after tool after tool. You experienced this when you POC'd different BI tools, including QlikView and SAP HANA and name your other, and I'm sure MicroStrategy at the time; I'm tongue in cheek saying that. But what we did is we started to build all these data silos. We piled on database tool after database tool for storage and for processing the data, tools that actually store data and execute queries. Then we piled on BI tools, analytic tools like QlikView, on top of that. Then we piled on Python data science devs on top of that, building who knows what. Then we piled on Jupyter notebooks, and then MLOps. And when you added this fifth layer called MLOps, it just got out of control, right? I mean, the corporate spend, the IT budgets to do all these greenfield projects where you're just stacking on tool after tool after tool. By the time you built the fifth layer, the first layer is already sunsetted and deprecated; by the time you get to MLOps, the original data warehouse you built isn't adequate, and you have to sunset it and build a new one. It got out of control. And my career has been building and selling those products, enabling that proliferation of too many tools for too many jobs rather than the right tool for the right job. I'm hopeful that AI actually solves this. Over the last few years we've been scaling back. There's this philosophical movement, this push, to a small degree, with tools like MotherDuck and other database solutions that are purposely built to be micro data warehouses: hey, remove all this bloat, you can run this thing on your laptop. You don't have big data, you don't have 40 terabytes of data. You can just run our database on your laptop with your half terabyte and call it a day. Now, I still think you need a data warehouse. I still think you need big distributed systems, because the data volumes on chain are large, we're talking terabytes per chain, and the data volumes at the enterprise grade, and in video games, and in social media apps, are all large. So I still think you're going to scale past the laptop. The little micro data warehouse solution, the antithesis to this ten years of stacking on data warehousing tools, is not the whole solution. But that's the trend: we're trending back towards removing these tools now. I'll end here; I apologize for the ramble. I believe AI actually helps. I actually think there's a potential for us to use AI to solve this problem, because when you have AI, it can replace 10 of the 12 tools in your stack. You need somewhere to store structured data, you need somewhere to store unstructured data, and you need to retrieve via SQL and retrieve via vector search. And that's about it. As long as you have both of those, you can just put an LLM on top. And the LLM is your BI tool, it is your MLOps tool, it is your Python data engineer. It'll be all of your business analysts and your data engineers.
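As a minimal sketch of the stack Scott is describing here, the snippet below puts a stand-in LLM call on top of a bare SQL store. Everything in it, the `generate_sql` stub and the `sales` schema, is a hypothetical illustration rather than any real product's API; a real deployment would also sit a vector store next to the SQL store for unstructured data.

```python
import sqlite3

# Toy structured store; in a real stack this is the data warehouse.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (region TEXT, month TEXT, revenue REAL)")
db.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("west", "2023-11", 120.0), ("west", "2023-12", 80.0), ("east", "2023-12", 140.0)],
)

def generate_sql(question: str) -> str:
    # Stand-in for the LLM call: a real system would prompt the model
    # with the schema plus the question and get SQL back.
    return "SELECT region, SUM(revenue) FROM sales GROUP BY region"

def ask(question: str):
    sql = generate_sql(question)       # the LLM acts as the BI tool
    return db.execute(sql).fetchall()  # the database only stores and retrieves

print(ask("Why are West Coast sales down this month?"))
```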
**Speaker A:**
Yeah. And I think when we start talking about what Space and Time is, and the UI that I can already see today, that will become much more apparent. But just to wrap up this part of the conversation: I think a huge part of this is that you're exactly right. We over torqued for putting basically more impressive looking graphs in front of executives and then telling them, you have business tools now. And I totally think everything you said is right; the next phase is going to be about refining that and making it smaller. But there's another opportunity that is so clear to me. So, quick anecdote. After I left technology innovation, which is not a real job, I worked in cash flow forecasting, which is, you know: payments have to go out, payments come in, and then every single day you just need to make sure that there's enough money that the business can continue flowing. And for Anheuser-Busch, where I was in charge of Canada and the U.S., that's $20 billion a year of cash flow. Every single day we had to forecast. Then around the end of the year things become particularly important, because not only are you worried about the actual dollar balances, you're worried about hitting your full-year numbers, right? So one year we were just off, every single day. And finally the CFO is like, go figure this out; why aren't any of our numbers coming in? So I flew to our call center in St. Louis, where all the guys who actually did the work were doing the inputs, and we finally figured out that the way the second largest CPG company in the world was creating its intricate daily cash flow forecasts was: some guy who'd been there for 20 years was guessing every day. And he was good at it. He was very, very good at it, because he'd been doing it so long. But it was literally just the Excel spreadsheet from last year copy-pasted over, and then him altering the individual numbers based on, like, vibes.
**Speaker C:**
You know, after a decade of us trying to build tool after tool after tool and automate ML process after analytic process, we still do business on Excel. That's how it is; most executives are still, like, just guessing.
**Speaker A:**
No, I mean, I can go all day on Excel. But I guess to wrap this up, the insight that I see here is that not only is there a huge opportunity for AI to help with finding insights, there's a huge opportunity for AI to help us actually take all of these data inputs that are just too dense, not structured, and not consumable by humans, and get them into the databases in the first place. That's a huge opportunity that I feel is the flip side of the analytics.
**Speaker C:**
Three things that AI will solve, all related to this mountain of data tools that we lost ourselves climbing. First, data ingestion: like you said, wrangling structured and unstructured data into a format that can be stored and retrieved, because that's a miserable, painful, and boring process today. No engineers want to do that. It'll also solve the analytics problem, right? Instead of a mountain of BI tools, you just need a vector search database and a SQL database, and you just give the LLM access to them. You just ask the LLM questions. Great. Or have it generate Python scripts for you. And then finally, it'll solve the complex question problem, which goes beyond SQL or Python or ML. It's a situation like: hey, a business user or a business leader has a complex question. How do we solve it? It's not deterministic; it's not as simple as writing a SQL query that gives you the answer. It's a question that requires a deep understanding of the business, looking at structured and unstructured information, and making subjective and objective decisions. That's where AI can replace an entire analytics team. Classically it was like: okay, you have analysts that are more business focused, looking at data and charts and trying to map data to business decisions; you have engineers that are building all the charts and building the environment; and you have business users that are feeding their requirements in. Well, AI can be that whole team. If my requirement is, answer this very complex question about my business that requires SQL, vector searching, and machine learning processes all together to answer, an LLM can answer that. That's inference for the LLM to do.
**Speaker A:**
For sure. Man, super exciting. So let's move on and try to get us to Space and Time. So this is 2020, 2021; you're messing around with DeFi. I entered during that cycle as well, so probably at the same time that you were getting smart, I was buying Polkadot and Cardano and stuff I've never touched.
**Speaker C:**
Rip. Yeah, RIP to both of those. Both of those ghost chains.
**Speaker A:**
Yeah. So can you give us the brief story of how you went from messing around, maybe with some smart contracts, maybe with your money, to realizing there's a huge opportunity to build core infrastructure to turn, you know, this really crappy blockchain computer into the world computer?
**Speaker C:**
The shocking realization that smart contracts don't have access to data. My co-founder Nate and I were fortunate enough to be introduced to Web3 data by Chainlink. Nate was an advisor working with the executive team at Chainlink, and Chainlink became our first partner and kind of our first supporter. We sort of birthed Space and Time within the Chainlink ecosystem, and we're very close with those folks over there and go to SmartCon every year. When we looked at Chainlink, we saw: okay, here's an opportunity where you have a very strong business, very good tech, very strong team. And really all they're doing is finding a reasonably secure way to get some data to a smart contract. Because smart contracts have no access to data. And we're not talking about data availability; don't get confused, data availability is a whole other term.
**Speaker A:**
Which got sort of a terrible name. Terrible name.
**Speaker C:**
I don't know if it's terrible. It's just a name that will confuse people, because they'll think about data availability within their own context. Anyways: getting access to on-chain data for a smart contract. Now, DeFi summer was really facilitated by oracles bringing price data to smart contracts. There's not a lot of discussion about this, but if you really think about it, what really facilitated DeFi for the first time was smart contracts being able to know the price of a token. Once you have liquidity pools that can quote prices, then you need smart contracts to know those prices. Hence Chainlink. Hence this idea of: yeah, sure, a smart contract in the EVM may not be able to ask questions about activity on the EVM. A smart contract can't say, hey, show me how many transactions Scott has made, the total volume, and the total lifetime of Scott's wallet. That's not a question a smart contract can ask on chain. But maybe it could ask, what's the price of Avalanche aggregated across all Uniswap pools that quote Avalanche? So that genesis of DeFi, providing data to smart contracts to make them smarter, allows you to step back and say: okay, what's Chainlink's real business? Their real business is actually just making smart contracts smarter, and building a whole suite of tools to facilitate that. The first tool that took off was price feeds. So then we thought: all right, let's take that a step further. What smart contracts really need is a database where they can store and retrieve data. They can offload computation and storage from the chain to a database, but it's cryptographically secure. It can't break the trust model of Web3; it can't break the security of the blockchain. It's got to feel like an L2; it's got to have that L2 security connection. And finally, consensus probably isn't the right approach. And it needs to be preloaded with indexed data from that chain. If you're going to allow smart contracts on Polygon to ask questions about activity on Polygon, whether it's price, whether it's DeFi questions, or just general activity questions like, show me all wallets that have at least a million dollars of MATIC and have made 10 transactions on chain: if you want to enable a smart contract with accurate, cryptographically guaranteed, tamper proof data that answers that question, you're going to need to index the entire Polygon chain in a database that can run that query and answer that question. And so we thought: okay, well, zero knowledge. This was the early days of Plonky and Halo, and people starting to think, okay, maybe ZK can present an alternative to consensus. The Chainlink approach, which I love, is: let's set up a network of off-chain nodes that are not really a blockchain, but a decentralized network of nodes that come to consensus and all validate data, and gossip with each other that they've all validated the same data, before relaying it to the smart contract that asked for it. The ZK approach would be this: index all the data from that chain into a database in a ZK compatible format, and then allow a smart contract to ask any question in SQL against that data. Build a zero knowledge proof once, on one node in the network, but verify it in a contract on the L1 that requested it. Verification can be redundant, via consensus on the L1 that requested it, but proving should be done once, via ZK. Why? Because oftentimes we're proving against large data sets.

I mean, just the Ethereum logs table we have in Space and Time is nearing a terabyte. We're talking about reasonably complex queries against reasonably complex blockchain data sets, and you kind of need a whole GPU, you kind of need a server, to handle that. You're not going to handle it via consensus. You can't have 20 big servers all redundantly running the same massive query against a terabyte of data each and then come to consensus on that output. Ain't gonna work. The reason it works for Chainlink with consensus is because they're coming to consensus on small data sets. It's nodes each grabbing the current price of Avalanche. That's doable, and they rock at that. The problem is, if you ask a complex question, you can't answer that complex question 20 times redundantly, come to consensus, and then relay it. I apologize for the belabored answer. But when we realized that, that was the realization of: okay, let's go build a database that's ZK powered. Let's go solve this problem and build the Web3 native database, where a smart contract can talk directly to a database and offload all of its compute and storage, but also retrieve blockchain data. That's Space and Time.
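To illustrate the shape of that flow, here is a toy prove-once, verify-cheaply sketch. The "commitment" and "proof" are plain SHA-256 digests standing in for real cryptographic commitments; they carry none of the soundness of an actual ZK proof, and nothing here is Space and Time's real scheme.

```python
import hashlib, json, sqlite3

# One off-chain node holds the indexed chain data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE transfers (wallet TEXT, amount REAL)")
db.executemany("INSERT INTO transfers VALUES (?, ?)",
               [("0xabc", 5.0), ("0xdef", 12.0), ("0xabc", 3.0)])

def table_commitment() -> str:
    # Stand-in for the real commitment published when the chain was indexed.
    rows = db.execute("SELECT * FROM transfers ORDER BY wallet, amount").fetchall()
    return hashlib.sha256(json.dumps(rows).encode()).hexdigest()

def prove_query(sql: str):
    # The heavy work happens ONCE, on one big node (the GPU/server above).
    result = db.execute(sql).fetchall()
    tag = hashlib.sha256(
        (sql + json.dumps(result) + table_commitment()).encode()
    ).hexdigest()  # placeholder "proof"; a real one attests execution in ZK
    return result, tag

def verify(sql: str, result, tag: str, commitment: str) -> bool:
    # The cheap step, which can be done redundantly on the requesting L1.
    expected = hashlib.sha256(
        (sql + json.dumps(result) + commitment).encode()
    ).hexdigest()
    return expected == tag

commitment = table_commitment()
sql = "SELECT wallet, SUM(amount) FROM transfers GROUP BY wallet ORDER BY wallet"
result, tag = prove_query(sql)               # prove once
print(verify(sql, result, tag, commitment))  # verify cheaply, as often as needed
```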
**Speaker A:**
Man, that is super exciting. If I'm framing Space and Time in the narrative of where we are in the development of crypto, I think what's really exciting about the insight that you guys have, let alone the fact that you actually built it, is that we've all been talking about the modular blockchain thesis since, literally, since I've entered crypto. So in 2021 it was already there. I remember there was a Bankless episode about it; I was super excited. It's been there.
**Speaker C:**
At this point, I'm, like, too afraid to ask: is modular literally just... did these dudes just remove smart contracts and then call it modular? I'm saying that kind of tongue in cheek, but I'm also serious.
**Speaker A:**
Yeah, no, I mean, I definitely don't want to throw shade at Bankless, or just that whole milieu of conversation. But I do think that a lot of people who use that terminology have no idea what they're talking about. And therefore we all have no idea what we're talking about.
**Speaker C:**
But I mean, I think of modular as a blockchain minus smart contract execution.
**Speaker A:**
Yeah, well, I think that's a good way to look at it, especially in regard to the L2 frame, right? We take the execution out of mainnet, and then there's a whole execution engine outside that kind of plugs back into mainnet. I think that's supposed to be the modularity. And now we're entering this data availability paradigm, so you don't necessarily have to use the straight-up Ethereum data availability. That's another module you plug in. And so whether or not you really buy...
**Speaker C:**
That's a good point. Data availability is a good point, because what you're really saying is there's another module that isn't as simple as just removing smart contract execution. It could actually be more like keeping smart contract execution but removing chain state blobs.
**Speaker A:**
Exactly. And I think the whole concept behind the modular framework is that each individual application can pick and choose the pieces that work for it. So let's say that you want to be an L2 that really, really is Ethereum secure, and so you're not going to choose to use a Celestia or EigenDA for your purposes. Modularity gives you the flexibility to make that choice, while this other DeFi degenerate chain is like, we just want everything to be as cheap as possible, so we're going to kind of hand wave at decentralization by throwing data out wherever it needs to go and then move on. And I think that Space and Time is literally the first company, or group, or whatever we are in this industry, that looks at a new part of modularity, and that's actual databases. Before Space and Time, the databases we had were, I guess, equivalent to our TI-81 calculators, just within the EVM framework. But what you guys are doing is allowing this modular system and plugin to say: okay, for my dapp, I need really true, deep databases that can support a game with 100 million people, or can support, we can get into some of your examples later, tracking space debris, or any of these things that are wildly complex.
**Speaker C:**
Every bullet fired in the game, because that's going to lead to an on-chain reward.
**Speaker A:**
Yeah. And what you guys are doing is saying: hey, we're creating the interface and the first example of a database module. And if Ethereum is to become the world computer, we're going to need databases.
**Speaker C:**
Yeah, and we actually just published a white paper today, our first big announcement: hey, here's the Space and Time white paper. And that's pretty much the whole thesis of the paper. You just summarized probably the first seven pages.
**Speaker A:**
Yeah, well, full disclosure, I did not read the paper. But I just think that what you're building is so clear once you've really clicked into the fact that Ethereum is not just a casino, or this weird Internet thing that is about DeFi and tokens and locking...
**Speaker C:**
I mean it is, it is both of those things. It is only both of those things.
**Speaker A:**
Yeah, no, for sure. But I think that once you really buy into the concept that Ethereum is about decentralized compute, the question becomes: well, how do you build really deep, modern, complex databases into decentralized compute? With Space and Time, that question is answered in what you've built.
**Speaker C:**
Sure. And there was a moment in 2020, 2021, 2022, arguably even into 2023, where I truly believed in decentralization and in Ethereum as that future world computer that will decentralize the world's compute. Now I'm just like: bull run's back, baby, full degen, pump my bags. I'm sure we'll go back to building next bear.
**Speaker A:**
Yeah, no, yeah, I hear you, I hear you.
**Speaker C:**
And that's sort of the thesis of our paper: we believe, especially now, in the generative AI world we're walking into, that verifiable and community operated compute is paramount, because it's the antithesis to a generated world, to a very non-deterministic world where everything's fake because everything's generated, and therefore nothing's real. In that kind of world, you need a cryptographically verifiable system to move money, to move data, to do anything where data is valuable. If you have valuable data, there's a financial incentive to manipulate it, tamper with it, change it, attack it, breach it. In that case, you need to go way, way above and beyond to build new, modern, Web3 facilitated cryptographic capabilities that counter this generative AI world we're walking into. And so, like, you brought up a video game. Hey, Scott, you can't just store terabytes of video game data on the world computer that is Ethereum. So Space and Time becomes this conduit where you can store your terabytes of every bullet fired in your first person shooter, and connect that to a smart contract just in time when needed, where the smart contract can say: hey, how many hits? What's Rex's kill/death ratio? And should I give him a new reward for having a positive kill/death ratio? Take that a step further and say: all right, where are all the use cases where we feel like data might be manipulated, or where the data is going to secure financial value, or where financial transactions are going to occur as a result of this data being changed, manipulated, tampered with, anything? And you enter this AI world where all data is going to be synthetic, it's all going to be created, generated, and so the only data that will matter is the data that's not synthetic, that's actually facilitating real transactions and real value. And that data has to be cryptographically secure. So if you just take a Postgres database, or a MySQL database, or Snowflake, or my alma mater Teradata, and you run your terabytes of video game data there, and then you try to ask questions about that data from your smart contract on Ethereum, you've broken the cryptographic security of Web3. You've broken the trust model, or trustless model, of the blockchain, and your data will inevitably be manipulated by your own employees, or third parties, breachers, or even partners that you give access to your database and forget to revoke their access. And then six months later you realize they've changed the price of Tesla stock on chain and minted themselves, like, a million dollars of rewards. What I'm trying to hint at is: we need a database for Ethereum, but that database has to keep the trustless model of Ethereum.
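To make the video game example concrete, here is roughly the kind of query a reward contract could send off chain. The `shots_fired` table and its columns are invented for illustration; this is not a real Space and Time schema.

```python
# Hypothetical per-event telemetry table: one row per shot, with flags
# for whether the shot was a kill and whether the shooter died.
KDR_QUERY = """
SELECT
  CAST(SUM(CASE WHEN was_kill THEN 1 ELSE 0 END) AS REAL)
    / NULLIF(SUM(CASE WHEN was_death THEN 1 ELSE 0 END), 0) AS kill_death_ratio
FROM shots_fired
WHERE player = 'rex';
"""
# The contract sends this query off chain; once the result comes back with
# a verified proof, it mints the reward only if kill_death_ratio > 1.
```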
**Speaker A:**
Yeah, I'm so glad you said that, because I've said this on this podcast many times: I don't know where the intersection of AI and cryptography truly is. Go to Twitter and a thousand tokens will tell you exactly where it is. But what's so clear to me is that these are two sides of the same coin, two pieces of the same technology. Because, exactly as you just said, AI is about generation and abundance, about pumping out as much stuff onto the Internet as possible, and crypto is about scarcity, and blockchain is about slowing things down. To me, it's no coincidence that these two technologies are blossoming literally within the same months and years of each other. I just think there are a lot of questions to be answered.
**Speaker C:**
Two sides of the same coin is a tough one for me, because crypto is the cash that you hand to the cashier, the blockchain is your cash register, and crypto is... or, sorry, AI is the cashier, in my very humble opinion. And if it's analogous to that, the whole two sides of the same coin thing may not be the best explanation. I'm only laughing because I saw a tweet the other day making that whole argument about whether AI and crypto are two sides of the same coin.
**Speaker A:**
Yeah, I mean, look, I don't know if we need to spend that much time on this, but I will push back and say: no, man, they definitely are. And I think that, within the context of the world we're in, you're totally right: crypto and blockchain and AI have specific roles to play in finance and all that stuff. But if we're just talking about the greater pieces of the technology, the best example is generative images. How are we ever going to be able to tell what actually happened in the real world once ChatGPT 10 and Midjourney 6 and all this stuff is out there? To me, as we create these generative technologies, we also need to create scarcity technologies. And yeah, as an ETH maxi and a bag holder, I think there's going to be some connection back to the Ethereum blockchain. For example, we could create a startup that just goes and finds all the content on the Internet as it exists today, creates a commitment, and then gets that commitment on the blockchain, so that in the future you can have a browser add-on that says: hey, here is every single time this image was seen, and therefore you know that it wasn't created, you know this historical image was actually historical. But blockchain and ETH and all of these tokens that we're creating, that's almost superfluous to the technology conversation. And I really believe that all the worst things that could happen with AI are totally possible, but the reason I'm not worried about it is because, in conjunction, the smartest people, like yourselves, or like Jason Morton from EZKL, or any of the people we've talked to on this show, are all growing these technologies kind of in tandem. So I guess I'd love to hear your response, but that's why I really do believe: two sides of the same coin.
**Speaker C:**
In a generative world where most data is synthetic, the only thing that matters is ZK.
**Speaker A:**
Yeah. Wow. Very well put.
**Speaker C:**
Yeah. And crypto is the cash, blockchain's the cash register, dapps are the point of sale system, AI agents are the cashier. What does that make ZK? ZK is the armored truck out front of the grocery store. I don't know, I'm really reaching for an analogy here, but...
**Speaker A:**
All right, well, let's move on from the tortured analogies and get actually into Space and Time. So I was watching your product demo at Consensus, I believe, and you put it very beautifully. You said that it was five things in one, so I wrote it down; I'll just list it for you so you don't have to come up with it off the top of your head. You said it's a beautiful UI, a very powerful database, a validator consensus layer, a novel ZK proof, and a blockchain indexer. So with that as the framework, can you just help us understand: what are you guys building?
**Speaker C:**
Yeah, this is our biggest challenge, honestly. Space and Time's biggest issue as a business is not a technical issue, it's a messaging issue. We know exactly what the market needs, and we're building that. But it's a platform, not a specific little product, right? And that platform has five or six sub-products in it, or, you know, layers to it. And so we've really struggled, right? Because people go on our website and they're like: seems like you guys are kind of an AI company doing AI SQL. Or: you're an indexer, you're kind of indexing blockchain data, you're like The Graph. Others are saying: are you kind of just building a database? Or: you're more like Dune, doing analytics. What is Space and Time? And the reality is, most of our R and D is just zero knowledge, right? We're mostly a cryptography company building ZK for SQL. But just having a cryptographic circuit to prove SQL does not provide value to developers, and certainly does not provide value to business users. So we had to build a whole platform around this cryptography. We could have just said: all right, Space and Time, your whole business, all of your engineers, every dollar you spend, is just working on proofs. But to make it valuable, you need indexed blockchain data, right? You need a two way street between your prover and these different chains. So we index data from all these different chains in a ZK compatible way; we had to build and invent a ZK compatible indexing solution that's cross chain. And then, might as well build an awesome user experience if you can. Nobody wants to write SQL, so if AI can write the SQL for you in a beautiful UI, that would help. We're not trying to compete with Dune, necessarily; I actually think pretty highly of Dune. But hey, you've seen our user interface; you know what's possible with good dashboards and with AI driven analytics. I did spend the last decade building analytic tools. It is my passion. Long answer to simply say: Space and Time is a ZK proof. Our core engineering, our smartest folks, are cryptographers working on a novel ZK proof to prove queries. But to make that useful to Web3, connect it to every major chain, and make it a cemented institution, a primitive in the Ethereum ecosystem, we're going to need a platform around that proof. So we have our own native database, built for the kind of workloads we expect dapp developers to put on it: a backend database for dapps. We preload it with indexed blockchain data. We give you a nice AI powered user experience on top. But the real core tech is ZK.
**Speaker A:**
Yeah, that makes a lot of sense. And the insight that I came to, really, last year, hopefully we're entering a new paradigm now, but at least at the end of 2022 and 2023, was that ZK is so new and moving so fast that it didn't really make sense to be building ZK products yet. All of the companies that were getting funded and had momentum were trying to turn advanced, crazy number theory into packages and tools that were familiar to app developers, right? APIs or SDKs, co-processors, that kind of stuff. And I think what you're saying is that you guys had that same insight: let's create the ZK that does something functional, let's wrap it in a package that is actually usable. But then you actually took the next step, which is: let's build actual useful applications that leverage this technology. And I think you hit that, like, eight different times on the stack, whether you're talking about everything from the actual movement of data in and out and how that happens, to complex queries and that kind of stuff. Really, you are, let me say, leading the charge in what I think is the next phase of ZK development, which is: how do we take these incredibly impressive tools and primitives and make actual applications that people need?
**Speaker C:**
I kind of worry that some of these ZK projects, like EZKL, you mentioned them: really cool, really exciting projects. Never met the team, would love to. Awesome project, very exciting. I do wonder, though, if a lot of these very, very exciting technology projects with very technical leaders are maybe going to struggle a little bit to go to market, because they haven't moved up the application stack and they don't have killer apps building on them yet. Now, I could be totally wrong. Maybe they've solved the killer app problem. If so, hell yeah.
**Speaker A:**
No, and the point of this conversation is not to call anyone out. Succinct Labs is doing the same thing. Axiom is doing the same thing.
**Speaker C:**
Let's talk about Succinct for a second, because they are moving up the stack. That's an example of another ZK project that's an actual platform, like Space and Time. Succinct went straight to platform building rather than straight to R and D around new circuits. We did both: R and D'd a new circuit, patented it, invented it, and are bringing it to market. We have a team on the circuit, but we also have the platform play, like Succinct's platform play, that I think is really going to be valuable. If I was a VC, I'd be calling up Succinct right now. And by the way, me personally, I will be calling up Succinct, because I want to partner with them and make Space and Time's circuit available on their platform. This is a great example of moving up the application stack, where they're bringing experience to the developer on chain, creating a nice interface and a nice user experience, and abstracting away a lot of the complexity. That's the key here: when you move up the stack, it's about abstracting away complexity. Axiom's trying to do that via JavaScript. Hell yeah. Succinct's doing that via this nice platform for ZK ops, if you will. Space and Time is doing that via SQL. We're all database folks, and we figured: hey, SQL is easy, it runs the world, it's a great interface for accessing data. Let's make our interface be SQL.
**Speaker A:**
And again, I love all these projects; every project that we've talked about, I've had on this podcast. But I do want to distinguish you guys, because yes, Axiom, Succinct, and even EZKL are all basically trying to make complex things less complex and allow developers to actually focus on developing applications instead of number theory. But you guys are taking that further up the stack, and that is what I think is so awesome. So can you talk a little bit about the UI that I've seen? And for those that haven't seen it, just start from basics: what's going on in that UI? You can ask a plain English question, like, hey, help me understand this specific thing about the blockchain. How does that get translated into something machine usable? How does it hit your database? And talk a little bit about the true human level applications that you guys are already able to build from that.
**Speaker C:**
Yeah, absolutely. We're fortunate enough to be backed by Microsoft; they led our last round, they're a great partner with us, and they've been looking out for us. So they gave us early access to GPT-4. Earlier in the year we did a collaborative project with the AI lab at Azure; our engineers worked with the Azure AI lab, and we built a pretty sophisticated, I mean arguably the most sophisticated, prompt-to-SQL system that exists today. Now, that's a Y Combinator application, right? That's a whole startup in itself. If we had the energy, we could spin off a whole other startup just doing AI powered BI. AIBI, right? But I just don't care; crypto is so fun. Crypto's where we need to be. So my focus on AI is really only to enable better access to blockchain data. All of my energy that's going towards prompt-to-SQL, which I'll describe in a second, is really just to facilitate a better developer experience: finding your blockchain data, querying it, and understanding what's happening on chain. But if we had the energy, we could take that global, to just be a better BI tool for anybody. So here's how it works. If you're familiar with database tools, they're awful and they're boring: you have tables, and you've got to go explore them. A new employee on the business analyst team at Walmart has to first go log into six different databases, find the table they need, and understand where the data is. Then they have to write these awful queries that start to join data sets together, and then they can start visualizing things and putting it in charts and...
**Speaker A:**
Sorry, just as one of those analysts: half the time, and this is why it ends up in Excel, you have your modern, social media style tool, and then you have your terrible tool from your 70s mainframe, and then you have the new Teradata cluster that we just set up. None of these things are in the same formats, or even talk to each other, or whatever. So you just dump it all in Excel and then hope that the 21 year old, brand new analyst is able to craft it together in a way where you can kind of create a story that the CEO will kind of buy.
**Speaker C:**
Yeah, exactly. That's well said. We saw an obvious opportunity to fix that with AI, and like I said, if we had more energy and more engineers and weren't as into crypto, maybe we'd go build a business around this. Because it's not that this is going to be the future; the future is here. And it looks like this: you just log in to this BI tool that is called Space and Time, and you just start asking questions. We use Walmart as a dumb example: if it's Walmart data in Space and Time, a business analyst at Walmart logs in to Space and Time. They have a text field, and they just say: hey, what's going on with West Coast sales this month? Why are things down so bad on the West Coast? That's a complicated question that's probably going to require a lot of queries, you know, six different queries to three different databases underneath. Once those results come back, then probably some data engineer needs to run some Python pandas process on them, and it just becomes this complex question. Whereas the LLM can just figure out what those SQL questions are that we need to ask, and just kick them off. Now, we're not quite there yet; we will be soon. Today what we've deployed is one query at a time. You got a question? Ask it. SQL gets generated for you, and it's accurate Space and Time SQL, and then you can choose to route that SQL request to a ZK prover, or just to a regular database engine that's cheaper. We have a normal, you know, centralized database, as well as our ZK prover, which is more expensive but cryptographically guaranteed. So you as the end user just choose where your query goes. AI writes your query for you; you just choose: do I want to ZK prove this query, or is it just to power a dashboard where I don't need ZK proofs? And you incur the costs associated with that. That's what exists today, and we've done a great job. It's in the market, got a ton of dapps using it, already about 8,000 active users actively running queries and writing prompts. And it's exciting. But this is the tip of the iceberg, right? The next step is prompt to dashboard, not just prompt to query. Today we're kind of in the middle, kind of prompt to query to chart: we get you all the way from a natural language question to a chart showing the blockchain data, visualizing it. The final step we're going to make in 2024 is to get you all the way to a Dune style dashboard from a single question. The question would be: hey, what's going on with Chainlink CCIP yield? Give me some insights into Chainlink CCIP gas usage. And then an entire dashboard gets generated for you about Chainlink CCIP gas usage, and it's roughly what a business analyst would have asked.
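A sketch of the routing choice just described: the AI writes the SQL, and the user picks whether it goes to the cheap engine or to the prover. Both engine functions below are hypothetical stubs, not real endpoints.

```python
from typing import List, Optional, Tuple

def standard_engine_execute(sql: str) -> List[tuple]:
    return [("0xabc", 8.0)]  # hypothetical stub: cheap engine, no proof

def zk_engine_execute(sql: str) -> Tuple[List[tuple], str]:
    # Hypothetical stub: pricier, but returns a proof a contract can verify.
    return [("0xabc", 8.0)], "0xproof..."

def run_query(sql: str, zk_prove: bool) -> Tuple[List[tuple], Optional[str]]:
    if zk_prove:
        return zk_engine_execute(sql)
    return standard_engine_execute(sql), None  # dashboards rarely need a proof

# The AI generated the SQL; the user only chooses the route and the cost.
rows, proof = run_query(
    "SELECT wallet, SUM(amount) FROM transfers GROUP BY wallet", zk_prove=True
)
```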
**Speaker A:**
Lots of things to talk about here, but just on the distinction between running your query in a ZK prover or not: I think what you're saying is you're querying against the same data, you're using all of the same technology to get the answer, but in the ZK prover situation, you're also generating a cryptographic proof that says: if this verifies, then you know that that data came from this database. Is that correct?
**Speaker C:**
Yeah. Do you want to talk for a second about that cryptographic proof?
**Speaker A:**
Yeah. So I guess there are a couple of things I wanted to ask you about this. First, I would love to hear a little bit about what is the ZK proof that you guys created, and talk through all the work that you had to do; I'm very curious about that. But the other thing I'm curious about is the data itself. There's the famous saying: garbage in, garbage out. So there's one side about proving your queries were executed against the database, and then there is another side about proving that the database keeps its fidelity over time. How Space and Time is proving the query is very clear to me. But making sure that, yes, the query was run, but the data that it was run against is still maintaining the integrity that we expect it to: how does Space and Time solve for that? So, two questions, very open ended. I'll let you go from there.
**Speaker C:**
Sure. I'll answer the latter first. What we're proving is that the underlying data, whether it's indexed blockchain data or off-chain data that you loaded from your video game, it doesn't matter, hasn't changed once it entered Space and Time. That's the first part: we're proving that the underlying tables have not been manipulated. Also included in that proof, as part of what we're proving mathematically, is that the actual computation of the SQL query execution was done accurately. So we're proving that the node storing the data didn't manipulate it, and that the nodes computing the proofs did so correctly: they actually ran the query you wanted them to run and didn't leave anything out. Because a simple attack vector would be: don't tamper with the data, just tamper with the query result against it. Like, sneak your wallet address into the query result. When a smart contract is saying, give me a list of people to reward, just sneak your address in there so you get a free reward. So we're proving those two things: that the data hasn't been tampered with, and that the actual answer against that data is correct. And then built into that proof, of course, is proving that our indexed data accurately matches the chain. We double check that before ingesting it into Space and Time: when we grab indexed data from all the major chains, we're double checking on ingest that it was correct, and we're proving that during our ZK proof. So it's really three things: proving blockchain data validity, proving the underlying tables haven't been tampered with, and proving that the actual query execution was done accurately. And all of that can be validated on chain. And then I forget the first question.
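Those three claims, written out as a checklist. Each helper below is a named placeholder for one leg of the real verification, not an actual API.

```python
def chain_data_indexed_correctly(payload) -> bool:
    return True  # placeholder: ingested rows matched the source chain

def tables_untampered(payload) -> bool:
    return True  # placeholder: stored tables still match their commitments

def query_executed_correctly(payload) -> bool:
    return True  # placeholder: SQL ran as requested, nothing added or
                 # dropped (e.g. no sneaked-in reward wallet)

def accept_result(payload) -> bool:
    # A contract should trust the answer only if all three legs hold.
    return (chain_data_indexed_correctly(payload)
            and tables_untampered(payload)
            and query_executed_correctly(payload))
```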
**Speaker A:**
Yeah, just about building the proof. So you've built a proof from scratch, right? Or maybe not from scratch, but you've built something novel here. What's particularly scary about ZK, when you're at the primitive level, is that the math is so complex and so exact that any tiny, tiny error could end up being catastrophic down the line. You've now explained a little bit about the purpose of the proof and all the different pieces that are being drawn into the commitment so that it can be later verified. But what's it like to develop ZK from scratch? And how do you ensure that the properties that we expect from ZK are actually there?
**Speaker C:**
Wow, that's a very good question. A very tough question to answer, because now you're really asking: hey, how did you design a novel zero knowledge proof? I mean, you always start with prior art. Step one is look at prior art. We started with some academic research out of the University of Michigan around virtualized SQL. It had nothing to do with ZK. Honestly, our initial year, a whole year's worth of R and D, we didn't even touch anything ZK related, nothing to do with ZK, before we even opened up Plonky or Halo 2 to take a look. We didn't use those frameworks, but we learned a lot from them, of course. But yeah, the first year was just looking at academic research on this topic and figuring out what kind of commitment schemes would make the most sense. We didn't know what we were doing. Nobody does. I mean, no one has any idea; this is all greenfield. There are no best practices, right? This is truly novel technology. Nobody's built ZK to process terabytes. The entire industry is pouring billions into ZK now, mainly for transaction processing, right? Mainly for rollups, mainly for proving reasonably complicated computations on reasonably small data sets: a small set of transactions, like the most recent batch since the most recent rollup. Most ZK research is simply: how do we prove we did this set of 5,000 transactions correctly? That's not what we're doing. What we're doing is more the Axiom style, which is a new generation of ZK that has come up over the last three years, pioneered mainly by Space and Time. Or maybe not pioneered, but frontiered by us. And now we've got some awesome players in the space building around us, and it's exciting. What this generation of ZK is doing is not relatively complex computations on a small amount of data; it's relatively simple mathematical computations on massive amounts of data. Like running a query on the entire Ethereum history and saying: show me all Ethereum wallets that meet these five criteria. That's a massive full table scan against two terabytes that we have to prove in zero knowledge. Nobody's done that before. So the first thing we tried was just different commitment schemes to see what would work. I have to admit, even to this day we're still trying out different commitment schemes, because there's still plenty of R and D to do. This can be endless. I see a decade of optimizations that we can make to make this faster, more practical, well integrated into different database services and blockchain services. I mean, this is a proof you could embed in Snowflake or Teradata or BigQuery, and we're already starting. But there's going to be a whole decade's worth of endless R and D and optimizations and trying new commitment schemes. And as new cryptographic primitives get invented and published, we can attempt to use those to make things run faster. Very complicated. The last thing I'll say is this: you might have the perfect architecture for proving SQL queries against a large volume of data, but that perfect architecture for making proving efficient might make verifying extremely inefficient and expensive. You need your prover to be fast as hell and cheap as hell and scale to terabytes, but you need verification to be gas cheap, affordable, like, you know, low gas. Gas optimized was the word I was looking for.

So we made a mistake early, I'll admit. We actually didn't take the right approach from the jump. Our initial approach was trying to balance the two: balance how much work the prover does versus how much work the verifier does. Because we weren't actually confident we were even going to do on-chain verification. We thought maybe it'd be more like verify on Chainlink, which we're also doing; it's another option. And as we learned more, we realized: nah, the future is on chain, we should verify there. So then we did a lot of work to switch cryptographic curves, take out a few pieces, and add a few pieces back in, so that more work is put on the prover and verification can be cheap.
**Speaker A:**
Yeah, I mean, I think that is the answer, because we have Moore's Law in the real world, right? You can always kind of count on the fact that proving will get better. But kind of the point of Ethereum is that within the EVM it's not going to get better, right? Maybe marginally, maybe by an order of 10, but we kind of know what we get, and it makes so much sense to optimize for verification.
**Speaker C:**
That's also the elephant in the room, right? The elephant in the room that nobody wants to discuss is verification costs on Ethereum. Just so you know, little alpha here: 2024 is going to be a very eye opening year in terms of on-chain costs for ZK. I mean, if you think inscriptions are bad, wait till you see ZK proofs being verified on chain. The Ethereum gas costs are going to 10x.
**Speaker A:**
Well, again, as a bag holder that doesn't really do that much on chain, I'm happy about that. But I do think we are entering an entirely new paradigm. Everything before was just about, oh, it's happening, what little tchotchke things can we put on chain? And everything that's coming is about: how can we deploy the full force of 60 years of modern computing research into these tiny proofs? And I was having a conversation with Sriram from EigenLayer, like, a year and a half ago, before EigenLayer became...
**Speaker C:**
The most valued project in all of... We own the entire narrative for 2024.
**Speaker A:**
Yeah, yeah, I have thoughts on the narrative, but...
**Speaker C:**
On the narrative around order three derivatives.
**Speaker A:**
Yeah. And I think EigenLayer is, or was, interesting because it was about how do we continue down the trustless computing path. And honestly, I thought the coolest thing about EigenLayer was: oh, you can do proof of stake, you just don't have to issue your own shitcoin, you can just use ETH. And I was like: oh, that's a great insight. And somehow it's turned into literally: put your ETH into a smart contract to farm a potential shitcoin. I don't really know how we got there. But the comment that he made to me was that, if you actually look at the numbers, from a gas limit standpoint we can only verify, like, 15-ish SNARKs per block. And what happens when we build all of these ZK oracles and ZK rollups and ZK data availability layers? Yes, there's much more computation off chain, but they still need to run a SNARK every single block. Just, things are...
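Rough arithmetic behind that 15-ish figure: the per-proof gas cost below is an assumed round number chosen to match the quote, not a measured benchmark; real verification costs vary widely by proof system.

```python
block_gas_limit = 30_000_000      # approximate Ethereum block gas limit
gas_per_verification = 2_000_000  # assumed cost of one on-chain verification

# Even if a block did nothing but verify proofs:
print(block_gas_limit // gas_per_verification)  # -> 15
```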
**Speaker C:**
I know what's going to happen, I'll tell you that. If you have time, I'll give you that answer.
**Speaker A:**
I mean I assume it's going to be like a fraud proof paradigm, but yeah, I would love to hear an answer.
**Speaker C:**
Oh no, no, don't say that word. Fraud proofs. That's a bad word.
**Speaker A:**
Well, sorry, okay. Just to expand what I meant: people will just send their commitment on chain and we'll accept it as true, and then if someone sees that it's fake. No, okay, so that's my point.
**Speaker C:**
It's a bad word. That, to me, is problematic, because it defeats the entire purpose of ZK. What you're doing is introducing optimism to something that was purposely designed to fix the need for optimism. Optimism is not secure, it's optimistic. It's really optimistic security by default. And when you start to introduce fraud proofs, you're introducing optimism to a technology that offers the hope of not needing optimism.
**Speaker A:**
Yeah. All right, man. So what's the answer?
**Speaker C:**
I'm optimistic that we won't need optimism anymore. Not Optimism the ecosystem. I love Optimism. I mean, I'm optimistic that someday we won't need fraud proofs for ZK. That's all I'm saying. There's no reason for them; it's completely antithetical. The solution is something like this, and maybe this is alpha I shouldn't be giving away this early, because it's something Space and Time is going to be delivering to the market: essentially the equivalent of an oracle network for verifying ZK. Low cost commodity hardware that's extremely decentralized, where you as a user of the network pay a certain amount of gas for verification, depending on how many nodes verify and come to consensus and then relay your already verified data payload to the chain that needs it. Like the Celestia of ZK proof verification. Prove in your own network, prove wherever you want, hell, prove on your laptop. But you won't be able to afford to verify on Ethereum, so you'll verify on essentially a side chain for verification that's purpose designed. Essentially an oracle network that's purposely designed for low cost verification, driven by consensus.
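A toy model of the flow Scott is sketching: a set of commodity nodes each verifies the same proof independently, and only a supermajority agreement produces a payload worth relaying on chain. The verifier stub, the node count, and the two-thirds threshold are all assumptions for illustration, not Space and Time's actual protocol.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    node_id: int
    valid: bool

def verify_proof(proof: bytes) -> bool:
    # Stand-in for a real SNARK verifier; the premise is that this check
    # is cheap enough to run redundantly on commodity hardware.
    return proof.startswith(b"valid")

def oracle_verify(proof: bytes, node_ids, threshold: float = 2 / 3) -> dict:
    """Every node verifies independently; relay only on supermajority agreement."""
    verdicts = [Verdict(n, verify_proof(proof)) for n in node_ids]
    attestations = sum(v.valid for v in verdicts)
    status = "verified" if attestations / len(verdicts) >= threshold else "rejected"
    # A "verified" payload is what would get relayed to the destination chain.
    return {"status": status, "attestations": attestations}

print(oracle_verify(b"valid:query-123", node_ids=range(30)))
```

The trade-off is explicit: you swap Ethereum's full security for consensus among the verifying shard, the "not as sexy as verifying on Ethereum" caveat Scott concedes a moment later.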
**Speaker A:**
Yeah, I guess so. I think that is structurally sound. But the realization I've come to is that I don't think decentralization is that viable outside of Ethereum mainnet. Just as someone who is a node operator, oh my God, is it a pain in the ass. Like, I was on vacation for three weeks, my node went down, and then I just had to sit there and panic for five days, right? And I think we can hope and expect that we have one network that's like ten or a hundred thousand people. But I do get a little bit concerned when we start to build more and more systems surrounding Ethereum.
**Speaker C:**
I agree. I know, I know what you mean. It gives me a little heartburn.
**Speaker A:**
Yeah. And so I guess, you know, maybe that just is how it works. Or maybe we figure out how to really use ZK to make it all economic or something. Or maybe things are changing so fast that there's going to be a new solution. But I guess with our final few minutes here, I would love to talk to you a little bit about how you think about the decentralized network that you want to build with Space and Time. And specifically, I kind of just.
**Speaker C:**
Hinted at it, maybe breaking a little alpha early on this podcast, but that's kind of it. The decentralized network of Space and Time is this set of nodes that each run a single, it's just a Rust binary that can run on a single server or a Kubernetes cluster. And what you have as a node operator is a network of database servers that also come to consensus and generate ZK proofs. It's servers that run queries and generate proofs, then talk to each other over consensus. And that talking to each other over consensus facilitates using those same servers as, essentially, an oracle network for verification as well. So I know this is kind of wild, but one single Rust binary is a SQL database, a ZK prover, and a consensus network; it has consensus and is essentially a verifier oracle. Right? It's a network of servers that can verify proofs and come to consensus that they each redundantly verified the proof. So you could have one node proving and the other 50 or 100 or 500 verifying. Maybe there's 500 validators in the Space and Time network, and one of those nodes proves per query, not one total for the whole network, one for each query, because each query gets routed to one node to prove. And then a shard of 30 of them verify, come to consensus, and relay the verification to Ethereum. Now, that's not as sexy as verifying on Ethereum, but this is a response to you saying, hey, Sriram mentioned that when we run out of space, what do we do? Well, you can offload to Space and Time. Space and Time is all about offloading, baby. Offload your storage, offload your compute, offload your data access, and now, finally, offload the verification of your proofs to a low cost service that's cryptographically secure.
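A rough sketch of that routing model: each query is deterministically assigned to a single prover out of the validator set, and a shard of verifiers is drawn to redundantly check its proof before the attested result is relayed. The 500-validator set and 30-node shard come from Scott's example; the hash-based routing and query-seeded sampling are assumptions for illustration.

```python
import hashlib
import random

VALIDATORS = list(range(500))   # network size from Scott's example
SHARD_SIZE = 30                 # verifiers per query, also from his example

def route_prover(query: str) -> int:
    """Deterministically route each query to exactly one prover node."""
    digest = hashlib.sha256(query.encode()).digest()
    return VALIDATORS[int.from_bytes(digest[:8], "big") % len(VALIDATORS)]

def pick_verifier_shard(query: str, prover: int) -> list[int]:
    """Draw a query-seeded shard of verifiers, excluding the prover itself."""
    rng = random.Random(query)
    return rng.sample([v for v in VALIDATORS if v != prover], SHARD_SIZE)

query = "SELECT wallet FROM eth.history WHERE ..."  # hypothetical query text
prover = route_prover(query)
shard = pick_verifier_shard(query, prover)
print(f"prover={prover}, shard={shard[:5]}...")     # shard verifies, then relays
```

One prover per query keeps proving horizontally scalable, since every query can land on a different node, while the shard's redundant verification is what stands in for verifying directly on Ethereum.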
**Speaker A:**
I'm really glad you kept talking about that, because that specific insight makes so much sense: the proposed Space and Time network isn't just another blockchain-style network where every node replicates the same computation and then comes to consensus; it's really leveraging the ZK paradigm of succinct proof verification. And I guess I don't really think that's alpha. I think I learned that you guys are doing a decentralized network just by reading through your docs. But I would love to fish for a little alpha and really understand how you're going to get a bunch of people like me, node operators, to get involved. So I guess I'll just ask you the direct question. Do you anticipate some sort of token based system, or how are you thinking about creating a decentralized network?
**Speaker C:**
I'll ask you the direct question. Wen token?
**Speaker A:**
Yeah.
**Speaker C:**
Okay.
**Speaker A:**
That's the direct question.
**Speaker C:**
Yeah. Hoping, hoping to bring it into 2024. We've got more work to do; we're not quite ready. We have arguably nine months of engineering before we're ready to drop a token. And I'll admit, if we do drop a token in nine months, it's going to be a crazy, very aggressive nine months. So my guess is it'll take longer. But all I can say is this: aspirationally, I'm hoping to bring a Space and Time token into 2024, or else I'm probably going to miss this bull run.
**Speaker A:**
Yeah, no, yeah, fair enough. Yeah.
**Speaker C:**
At the end of the day, we have a message to tell, right? We have a story to tell, we have a mainnet to launch, and that's a lot of work to do in less than a year.
**Speaker A:**
Yeah. Well, I guess as a closing thought, I mean, I hear you, and I think we opened this conversation by saying that the biggest problem Space and Time has isn't really even an engineering or a company building problem, it's a communication problem. And I think that, you have any.
**Speaker C:**
Recommendations to solve it?
**Speaker A:**
After hearing all this, I think, look, I can give you recommendations, but you're smarter than me and you've got a whole team behind you; I'm sure you've thought of it. But just a comment of hope: two years ago, literally the only way anyone could conceptualize ZK was, oh, we used to have Plasma, and that wasn't really working, so ZK is going to be the answer to make this other esoteric thing work. That's all ZK was. And we've come so far from that; we're already talking about how ZK is a paradigm changing technology that is about offloading, about creating modularity within blockchains. And I'm just so confident from all these conversations I'm having on this podcast that the story is being written right in front of us. And yes, we all have work to do, and we're all contributing in our own way. But I'm pretty confident that over the next year, the problem of "it's very hard to communicate what we're doing" is going to start being solved by the community actually building the things we've been theorizing in our heads. Whether it's Space and Time or Succinct or Modulus or all of these companies we keep talking about, the ecosystem is coming together and the narrative is coming together, and none of us are having to work as hard as we did even a year ago to explain what is even.
**Speaker B:**
The purpose of this.
**Speaker C:**
Maybe the community, maybe the ecosystem, will sort of define half our messaging for us.
**Speaker A:**
Well, I think that is the law of crypto. Let me put it this way: back when Compound and Aave first launched, how much work was needed to explain decentralized finance and these primitives around it? The one that comes to mind is ve tokens. How many hundreds of thousands of threads were explaining what the locking and the four year emissions and all this voting stuff was? And now it's part of the expected baseline; if you don't know what that is, that's a signal that you're not really in crypto yet. So to think that that's not going to happen here, that, you know, Sriram and EigenLayer was our Uniswap moment in ZK, is, I think, just a misunderstanding of how this industry works.
**Speaker C:**
Well said. Well, hey, it's been a blast.
**Speaker A:**
Yeah, no, this has been awesome. And before I let you go, can you let the audience know how to find you and how to find Space and Time? If they want to learn more about what you're doing and start to build applications that plug into this, what should they do?
**Speaker C:**
Yeah, okay. Find Space and Time on Twitter at SpaceandTimeDB. We have a pretty popping Discord as well if you want to get more involved there, and I have no doubt that a points system is on its way. TBD. I'm chief buidl, not build, buidl, bidl, I don't even know how to pronounce my own Twitter. Come follow us. We're always talking AI and crypto, always talking ZK, and we're pretty active on Twitter and Discord. Would love to grow the community and love to bring you in. And if you want to learn more about what we're building, there's tons of documentation and tons of content on our website as well. So we'll see you there.
**Speaker A:**
Awesome, man. Well, Scott, as I was telling you before, I'm so excited about what you're building, and I'm so excited by, as we've been saying, all the projects that are taking this literal magic of hiding numbers behind curves, using it to create business applications, and turning that into real things that are changing the world. I'm just so proud of you guys and impressed by you guys, and I can't wait to watch, especially over the next year.
**Speaker C:**
So if you want to use the app, it's app.spaceandtime.ai. If you want to see this next gen user experience we've been talking about for the last hour or so, go to app.spaceandtime.ai, connect your wallet, and write your first AI query.
**Speaker A:**
Awesome. Well, perfect. All the links will be in the show notes. And again, Scott, thank you so much, and happy New Year.
**Speaker C:**
Happy New Year. Talk soon, my friend.