June 29, 2018

IOHK visit Google’s London offices (Part 1)

Googlers keen to talk Cardano and the future of cryptocurrency

Jane Wild

Communications Director

IOHK

Cryptocurrency is one of the most discussed topics of the moment, and whatever people think about it, they all want to know what's next for the technology. The audience at Google was no different: Charles Hoskinson was invited to talk about Cardano and the future of cryptocurrencies. At the meeting, held at Google's London headquarters last month, Googlers around the world dialled in to hear the presentation and put questions to IOHK's chief executive and to Duncan Coutts, IOHK's Director of Engineering. As you might expect from a company that has laid much of the groundwork for today's technological landscape, Googlers asked some of the most incisive and informed questions Charles has heard on his speaking tour this year.

After a brief introduction from Charles on IOHK and Cardano, the floor was opened to questions. Cardano development raised much interest, and Charles explained how its consensus protocol, Ouroboros, uses staking as a means to encourage people to join and help run the network. Development milestones were in focus too, such as one expected in July, when a test network will be opened to developers who want to play around with smart contracts on the IELE virtual machine. Later this year, full decentralization of the network is expected, as part of the Shelley phase of development, and Charles explained the background to all these topics.

There followed questions about how developers could get involved with Cardano; about the K framework, which underpins IOHK's smart contract testnets; about how cryptocurrencies will cater for privacy; and, of course, about where cryptocurrencies are headed. After the session, Googlers were kind enough to take the IOHK team up in the glass lifts to the top of the building and on to the roof, to enjoy the spectacular view across London.

Read the conversation below.

Q: I have a question about Ouroboros and staking: is the number of tokens on offer sufficient to convince people to join the protocol and help run the network?

Charles: We published a preliminary monetary policy. The ceiling is 45 billion tokens and the current number in circulation is 26 billion, so we have a little bit of room to work with inflation there, plus there are transaction fees as well to subsidise transaction validation.

First, Proof of Stake (PoS) is an extremely cheap protocol to run, especially Ouroboros, if you compare it to mining. The odds are that the operational costs will be so much lower that you really don’t need to pay as much. But it’s a broader and more abstract question: how do you handle fees and incentives and stake pools and delegation and then get all those incentives engineered in a way that you have reasonable game theoretic reasons to believe that the system is going to behave as intended? That’s a really hard question. So we have two individual work streams. One is led by Elias Koutsoupias, an algorithmic game theorist at Oxford and a Gödel prize winner. He’s working on the incentives question, trying to create models and understand [inaudible] first example – if you want to delegate and run a collection of stake pools, how many ought there to be and what are the economics of that?

Outside Google HQ, London

Then, the other side is, if I’m going to try to convince people to delegate, they ought to get a reward, so how much should that be? And then you have to do some empirical calculations – what is the operational cost of the network? You don’t want to pay too much more (inaudible) but you also want to pay enough to incentivise people to run 24/7 nodes to maintain the system. It’s an interesting question, but with the inflation that we’ve proposed we have more than enough wiggle room to work with. Not only will people participate, they’ll probably make windfall profits relative to operational costs, given the way these markets work.

We opened up registration for stake pools last month and were looking for 100 applicants for a beta test, but got 1,500 applications – 15 times more people expressed interest than we expected.

As with all monetary parameters in a beta system, these things can be adjusted depending on facts and circumstances, but the reality is that the driver here is the price of the underlying asset – the token – and markets tend to converge on that. The short answer is, it’s probably going to work out; the long answer is that we’re probably not going to have the right model to begin with. We’re either going to underpay or overpay and it’s qualitatively going to be pretty obvious, based on participation of the network. The odds are that we’re probably going to overpay in terms of rewards.
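As a back-of-the-envelope illustration of the calculation Charles describes, here is a toy model in Haskell. All the parameter names and numbers below are invented for illustration; this is not Cardano's actual monetary policy formula.

```haskell
-- Toy back-of-the-envelope model of the stake-pool reward question.
-- All numbers and names are illustrative, not Cardano's parameters.

totalSupplyCeiling, circulating :: Double
totalSupplyCeiling = 45e9   -- 45 billion token ceiling
circulating        = 26e9   -- roughly the tokens in circulation

-- Annual rewards if a fraction of the remaining reserve is released
-- each year (transaction fees ignored here for simplicity).
annualRewards :: Double -> Double
annualRewards releaseRate = releaseRate * (totalSupplyCeiling - circulating)

-- A pool is worth running if its share of the rewards beats its cost;
-- tuning releaseRate so this is comfortably positive, without wildly
-- overpaying, is the empirical question described above.
poolProfit :: Double  -- pool's share of total stake (0..1)
           -> Double  -- annual operational cost, in tokens
           -> Double  -- reserve release rate per year
           -> Double
poolProfit stakeShare opCost releaseRate =
  stakeShare * annualRewards releaseRate - opCost

main :: IO ()
main = print (poolProfit 0.01 50000 0.05)
```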

Q: Across all the projects that you are driving, are there specific milestones that will for sure be completed this year?

Charles: Look at cardanoroadmap.com for Cardano-specific projects. Month by month, it gives an update on where we're at. We also do weekly reports and we try to be as transparent as possible about where we're at. Our goal is to release the next major version of Cardano some time this year, called Shelley. We are working really hard towards that. It might slip, but the odds are that it won't. It's a difficult project. Shelley is true decentralization of the network. At the moment we're running our proof of stake protocol in a forced delegation model. So all the PoS mechanics are there and the stake rights have been delegated to nodes that IOHK and two other entities control, so it's a federated network. We did this because we're not insane. You don't go and invent a protocol in the academy, turn it on and say 'Good luck everybody'. Instead, you have training wheels on your bicycle. You say, 'Let's launch this system in a federated format and gradually decentralise the system once we have momentum and assurance that what we've done is correct'. And also when we've trained up hundreds of people to run stake pools and participate in the staking process, so there's a bit of redundancy and a much more natural unlocking of the system. So, over six to nine months, that process will continue and hopefully all the Shelley mechanics will roll out.

In parallel, we are releasing testnets for smart contracts. The first one will be released at the end of the month, and it is done with something called the KEVM. We worked with Runtime Verification, out of the University of Illinois Urbana-Champaign, who took the operational semantics of the Ethereum Virtual Machine and wrote them in a meta language called K. What the K framework allows you to do is implement a correct-by-construction version of the virtual machine, just from its semantics. So it's really cool. What we were able to do is take the K semantics, build a VM, connect that to a fork of Ethereum, and we're now turning that on at the end of the month to test that the framework works and that you can run smart contracts on it. We also have another virtual machine that we built specially for Cardano, called IELE. Those semantics are publicly available on GitHub, and we have a paper that we are submitting for peer review. That testnet will launch some time in June or July – it gives people who live in the Ethereum world and write smart contracts the chance to play around, deploy code on our system, look at our gas model and get a better understanding of how our system works. And then, over time, testnet iterations will occur and eventually we'll pull these two systems together.
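To give a flavour of the "correct by construction" idea, here is a toy sketch in Haskell: write the rewrite rules of an operational semantics once, and execution is nothing more than applying those rules repeatedly. This is a tiny invented expression language, nothing like the real K framework or the EVM semantics.

```haskell
import Control.Applicative ((<|>))

-- Toy illustration of semantics-based execution: the "VM" is just
-- the repeated application of small-step rewrite rules.

data Expr = Lit Int | Add Expr Expr | Mul Expr Expr
  deriving Show

-- One small-step rule application, in the spirit of a K rewrite rule.
step :: Expr -> Maybe Expr
step (Add (Lit x) (Lit y)) = Just (Lit (x + y))
step (Mul (Lit x) (Lit y)) = Just (Lit (x * y))
step (Add l r) = (`Add` r) <$> step l <|> (Add l <$> step r)
step (Mul l r) = (`Mul` r) <$> step l <|> (Mul l <$> step r)
step (Lit _)   = Nothing

-- Running a program is the transitive closure of the rules.
run :: Expr -> Expr
run e = maybe e run (step e)

main :: IO ()
main = print (run (Add (Lit 1) (Mul (Lit 2) (Lit 3))))  -- Lit 7
```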

IOHK arriving at Google HQ, London

One of the architectural features of Cardano is the separation of accounting and computation. With Ethereum they are bundled together; your peanut butter is in your jelly. And that's fun from an implementation standpoint – it's simpler to maintain – but it creates a lot of problems. If you screw up parts of your computational model you'll also inadvertently block your ability to spend money. Also, computation carries a much higher liability than accounting does. Take Bitcoin versus Ethereum in terms of transactions. In Bitcoin, if I send Jane a transaction and Philipp a transaction, buying a laptop from Jane and weapons-grade plutonium from Philipp, the miner in the system has no way of differentiating between those two transactions; they're fungible. We don't know the actors, they are just transactions. But if we're running code, you might be able to differentiate between Crypto Kitties and Silk Road. There is some precedent, if you look at Tor exit node operators having legal liability and being arrested for trafficking, child pornography or copyright violations. Computation, if you can discover the difference between what Jane and Philipp are doing, has higher liability. In our view, architecturally, it's a good idea to separate them, and it also gives you a lot of flexibility because you can have multiple computational models: we're backwards compatible with Ethereum, we have a different model with IELE, and we have a functional model, and you can do a lot of cool stuff with that. The downside is that you have to maintain the state of many ledgers at the same time, and you also have to figure out how to move value between the ledgers, which we're going to do because of our interoperability mandate. We decided to take this on, but it adds complexity to the system – a lot more work to do. That will be gradually rolled out in stages through testnets and it's quite a bit of work.

Duncan: There’s the compartmentalization aspect of it. Ethereum is monolithic, it bundles together all of the features, so if one thing breaks the whole thing breaks. If there’s some fundamental flaw you haven’t found, not only have all your ERC20 tokens gone but so has ether itself. There’s no compartmentalization between those things. But if you have, in essence, a Bitcoin-style simple settlement layer and then you do your EVM stuff and equivalent to EVM on different blockchains that are linked you can move money between them but they’re otherwise compartmentalized. If for some fundamental reason there’s a flaw found in the EVM that destroys that, well, that’s very sad but it doesn’t destroy the settlement layer. That’s a big advantage. And it means, as Charles says, you can add new ones of these things that can be somewhat experimental because that lets you evolve the system a bit.

Charles: We wrote, I think, the first formalization of sidechains; there was a sidechains paper written in 2017. For those who don't know, I like to call a sidechain transaction an interledger transaction. You have a source ledger, a destination ledger and an asset, and what you're trying to do is initiate a transaction where the destination ledger can answer two questions about it. One, does the asset exist on the source ledger? And two, has the asset from the source ledger been double spent? The foundational question you're asking is: how much information does the destination ledger need to possess to be able to validate that transaction and verify those two questions? We wrote a model, first for proof of work, called 'Non-interactive Proofs of Proof of Work', that explains how to do this, and now we've extended that model to Ouroboros and the proof of stake world, and we have a paper that we've just submitted that contains details on how to construct these types of proofs and what these transactions look like. There are still questions about how large the proofs are going to be relative to the size of the ledger, and there are questions about validation time, and also generality. The proofs we scoped work with our particular consensus algorithm, but we'd like to make these things work for all ledgers, not just a particular type of ledger, so there's a lot of work to be done there. But it's the first time it's been formalized. The concept was set out in a paper written in 2014 by a competitor of ours called Blockstream, but they didn't write a proper academic paper and submit it for peer review. That's considerably harder; there's a lot more to think about when you're rigorous about these things. In the long term, it's a great thing to think about for scalability and interoperability, and also testing, because you can deploy versions of your chain with different parameters, it's easy to move value between them, and you can let people vote with their feet.
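As a rough sketch of those two checks, the destination ledger's job looks like the following. The types and function names are invented for illustration; the real construction is the proofs-of-proof-of-work line of work Charles mentions.

```haskell
-- Sketch of the two questions a destination ledger must answer about
-- an interledger transaction. 'Proof' stands in for a compact,
-- NIPoPoW-style proof; everything here is illustrative.

data Proof = Proof deriving Show   -- placeholder for a compact proof

data CrossChainTx = CrossChainTx
  { asset          :: String
  , sourceLedger   :: String
  , inclusionProof :: Proof  -- evidence the asset exists on the source
  , unspentProof   :: Proof  -- evidence it has not been double spent
  }

-- Placeholder verifiers: how big these proofs are, and how long they
-- take to check, is exactly the open research question above.
verifyInclusion :: Proof -> Bool
verifyInclusion _ = True  -- stub

verifyUnspent :: Proof -> Bool
verifyUnspent _ = True    -- stub

acceptOnDestination :: CrossChainTx -> Bool
acceptOnDestination tx =
  verifyInclusion (inclusionProof tx) && verifyUnspent (unspentProof tx)
```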

Q: How will Cardano overcome the first-mover advantage of Ethereum? Do you see multiple smart contract platforms co-existing in the space or will there be one prominent winner?

Charles: So how many Java, C++ or Go developers are writing code on Ethereum? None – Ethereum doesn't support any of those languages. It can't even run a single viral app on the platform. If you look at the top 10 languages, none of them works on the system, so, by definition, all those developers aren't developing for the system; they have to go and learn new tools and new stuff. With Cardano, first off, we're backwards compatible, 100% – we're running an EVM. So you can take your Solidity code and your Web3 stuff and all the things you've come to know and love about Ethereum, and you can run it on my system, and it's faster, cheaper and safer to run it on my system because we have a better consensus model. Second, through our work with the University of Illinois, through Runtime Verification – Grigore Rosu and his team – we're working on something called semantics-based compilation. Should this be successful, we can take any K-defined system and translate it to run on our machine. All you have to do for a new language is write the semantics in K, one time, and then the K framework takes care of the rest. It's a hard, high-risk, high-return project, but at the end of the day we will end up, one way or another, supporting mainstream languages. Part of it is backwards compatibility, part of it is supporting mainstream languages, and part of it is recognising that the vast majority of real applications aren't running on Ethereum at the moment. The other thing is that smart contracts are not monolithic: you don't write and run your entire application on a blockchain; in reality you have to add a server-client component to it. Think of a poker game: maybe you trust random number generation to the server, and things like player matching and account management are almost certainly not going to run on your blockchain – you're dumb if you do those types of things there. They're probably going to run on some sort of server back-end. I treat a smart contract as a computational service. So it's silly to say, 'Oh well, only one platform and one token's won'; it's akin to saying Internet Explorer's won and we all have to be ActiveX developers, god help us. I'm not loyal to IE, or Amazon Web Services. Rather, I have to ask: what's the cheapest, best, most secure environment for me to run my computation in for my users? Our strategy is to be backwards compatible, support more languages – especially mainstream languages – in a better way, have a better user and developer experience, and be smarter about the ecosystem in which these contracts live. So we make it easier for the server to come into play, to use multiple ledgers, and to have a good app platform to deploy these types of things on, and we'll definitely get a lot of growth there.

The other thing is that very few people today write smart contracts. They play with these things, but very few people are smart contract developers. If 99% of developers aren’t in the ecosystem, how can you say a person has first-mover advantage? It’s nuts.

Q: In 2014 I played around with the K framework for a university project, but I found it to be extremely slow.

Charles: Yes, because there is a K to OCaml back-end, but we’re building a K to LLVM back-end, which should be about 100 times faster.

Q: But is that enough? Because it was outright impossible to run a reasonably large project with thousands of lines of code.

Duncan: This is one of the problems that Grigore is trying to solve. As you say, executing the operational semantics directly is very slow. Runtime Verification are basically trying to do a compiler approach, which they believe will be fast.

Charles: It’s still a big unknown, exactly how much performance is necessary – and can we get within an order of magnitude of handwritten code? One proof of concept is the testnet that we’re launching at the end of this month running a version of the Ethereum Virtual Machine built in K. You can run smart contracts on the KEVM and compare them to an Ethereum testnet and see the performance delta between the two. But it’s also important to understand that the open source K framework components that you use and the version that Grigore uses are different. Grigore built a second set of software for his private company, Runtime Verification, that he uses for contracts he’s had with Boeing and Nasa, and I think that’s about 100 times faster than the one that you used. But even so, there’s still a big performance delta that needs to be ameliorated. We have quite a large team, there’s 19 people involved in that contract. Some of those people are allocated specifically for performance. Now let’s say that we can’t quite bridge that performance, there’s probably a lot of things we could do to cheat and improve things a bit, including handwriting certain things, or abandoning semantics-based compilation for more traditional techniques. But it’s still a big unknown. This is also why we have a multi-computation model, so in addition to the IELE framework and the K framework, we also have an alternative approach called Plutus.

Duncan: We think that most of the computation time in most smart contracts goes into crypto primitives, so you don't have to have the world's fastest interpreter to interpret those contracts.

As for Plutus – people like me with a programming language background look at the EVM and Solidity and say, 'It looks like the people who wrote this didn't have much experience with programming language design'. There's an academic discipline to the design of programming languages, and it didn't seem to inform the design of Solidity at all. That shows up in things like: if you miss one error code, your smart contract loses everybody's money. So we have the two smart contract platforms that Charles mentioned – the backwards compatibility story, byte-for-byte compatibility with the K version of the EVM, and then IELE, which is EVM-style but fixes a lot of the obvious problems. That gives us the story of how you compile Solidity programs to IELE and the KEVM.

In addition, we have a smart contract platform that is based on programming language research, in particular functional programming. We have an approach based on a functional core language, which is what is actually executed, based on System [inaudible], and then two languages which are compiled into that core language. The core language is what's executed on the consensus nodes; it's the [inaudible] equivalent of the EVM. We don't call it a VM, it's just an intermediate code, but that's the correspondence. Then there are, initially, two languages that compile into it. One is called Plutus, a functional, Turing-complete language very similar in many ways to Haskell but simplified and cut down. The other is a non-Turing-complete DSL (domain-specific language) aimed specifically at financial applications. It's based on a paper from around 2001, 'Financial Smart Contracts', that lets you express all the normal and even exotic financial contracts that people tend to write, but in a much simpler way, so they can be easily analysed and understood. The point is, if your application fits into the domain of that DSL then you get much shorter, simpler, easier-to-analyse programs; alternatively, you can go to the general-purpose, Turing-complete functional language. In both cases, it's a two-layer language approach.

If you look at existing Ethereum applications, they don't just run on the blockchain – as Charles said, it's a blob of JavaScript that runs on the client and some Solidity code that runs on the back-end. Your programming model is this two-level thing anyway, but with two different languages, in many ways like the web stack. The web stack has grown over time – one language runs on the back-end and another on the front-end – and these multi-language systems are not that easy to work with, especially when they have grown accidentally. Because we can see that, we are taking a more deliberate approach and designing an explicitly two-level language: this bit will run on the blockchain, this bit will run on the client, but they are very similar languages. So what we're actually doing is Haskell on the client and Plutus on the blockchain. And Plutus is very similar to Haskell, so what you will see is one program, one file, with embedded snippets that run on the chain and an out-of-context [inaudible] layer that happens off-chain. That should give a better, more integrated development experience, and we aim to be able to do things like analysing the on-chain code, so you can demonstrate its safety properties.

Charles: And performance should be equivalent to Haskell, because they share a common core.

Duncan: Right, and the Haskell code will be run through the Haskell compiler.
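To illustrate what such a financial DSL buys you, here is a minimal sketch in the spirit of the combinator paper Duncan refers to. The constructor names follow that paper's style and are illustrative only; this is not IOHK's actual DSL.

```haskell
-- A tiny, non-Turing-complete contract language: contracts are plain
-- data, so they can be analysed as well as executed. Illustrative only.

data Currency = USD | GBP deriving Show

type Day = Int

data Contract
  = Zero                        -- no rights, no obligations
  | One Currency                -- receive one unit immediately
  | Give Contract               -- swap rights and obligations
  | And Contract Contract       -- both contracts together
  | Scale Double Contract       -- multiply all payments
  | At Day Contract             -- acquire only on a given day
  deriving Show

-- A zero-coupon bond falls out as a one-liner:
zcb :: Day -> Double -> Currency -> Contract
zcb maturity notional ccy = At maturity (Scale notional (One ccy))

-- Because contracts are data, analyses like "largest possible payout"
-- are total functions; this is the analysability the DSL buys you.
maxPayout :: Contract -> Double
maxPayout Zero        = 0
maxPayout (One _)     = 1
maxPayout (Give c)    = maxPayout c   -- magnitude only, toy analysis
maxPayout (And a b)   = maxPayout a + maxPayout b
maxPayout (Scale k c) = abs k * maxPayout c
maxPayout (At _ c)    = maxPayout c

main :: IO ()
main = print (maxPayout (zcb 365 100 USD))  -- 100.0
```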

Q: How can a software developer who is excited about your project best get involved with it? Do you have any plans for educating developers or creating developer-relations roles at IOHK?

Charles: I really admire what you guys did with Dart. I love the developer experience effort that Google put into it and there are things to take from that.

It’s difficult with a cryptocurrency that is very rigorous in its approach. You’re starting with white papers written by cryptographers and the notion of formal specifications and you’re trying to implement these and prove correctness, to figure out: when and how do you open that project up to successfully collaborate with one-source developers? We are hiring people specifically to work with the exterior community and try to communicate how we are running a project and how we are writing things and how we welcome third-party contributions. There is a lot of technology in our stack, we’re making material enhancements to K, so anyone who wants to contribute there definitely should. We have Electron in our stack, so we are using Chromium and Node.js for our wallet front-end so there’s a lot of things going on there with the Daedalus wallet, and we’d love contributions there. And, of course, there’s the Haskell back-end. We are reimplementing some of that back-end in Rust, and experimenting with Rust and web assembly in the browser, so a Chrome-based wallet. So there’s a lot of tech there and it depends on the core competency of the person and what exactly they’d like to contribute. We have yet to build an easy-to-use, formal external process, to make it friction-free for external developers to come and assist us and it’s going to be a high priority in the second half of this year to figure out how we do that.

Another thing is that if an open-source project is to be successful, especially with these types of protocols, we do need competition. When I was running Ethereum, multi-client models were very important for us, so we ended up implementing Geth and the C++ client. And then, later on, Gavin Wood split off and created the Parity client for Ethereum. This was great because it really forced us to specify the protocol properly – we could no longer say, 'Well, the code is the specification,' because you're (inaudible) of ambiguity there. So we worked hard at proper documentation, and we'd like a similar environment to materialise; it would be great to see some alternative projects grow. But at the moment, the best you can do is go to our forums, go to our GitHub repository, open issues and email our developers. If you're really interested in making open source contributions, we'll try to find a way to integrate you, and long term we'll have a formal process that's really easy for people to connect with. Again, it depends on the particular level you want to contribute at. For example, we do formal specification and verification work, so if you're an Isabelle or Coq developer and you want to work with us, that would be great. If there's like five people there, we'd probably be able to do that, right? But, levity aside, it would be fun to find people there. And there are other things, like building applications on our system. We are launching testnets soon, so it would be much appreciated for people to write software to deploy on our system and try to break it, because that helps our system get better. So that's the best non-answer to your question that I can give!

Duncan: At the moment, our code is all open, it's on GitHub. This is one of those projects that's open source but not yet openly collaborative. It's not easy at the moment for us to collaborate with people because we're not yet using GitHub's issue tracker; we have a private issue tracker, for example. This is one of the things where we aim to get there: for there to be documentation that's easy for other people to look at and understand, and for us to accept contributions. The goal is to get there, but we are not there yet. You can go and read all the code, but you can't see what the open tickets are, and the documentation's a bit patchy. So that is where we'd like to get to, to be able to direct people and accept contributions from anyone, really.

IOHK videoconferencing with Google HQ

Charles: We are going to try to annotate a lot of the design decisions in the system. For example, we recently released a formal specification for our wallet back-end. I think it's the first time it's ever been done for a UTxO wallet, and we're going to create YouTube lectures going through it section by section, specifically with the aim of educating developers about our design decisions and what the system's all about. As we specify each component of our system, we're going to try to do that. We have an internal department called IOHK Education, led by a mathematician named Lars Brünjes, that specialises in this, so over time you should see more accessible documentation materialising. Hopefully, that will encourage people who have the capacity to contribute to come in. We are also discussing how we open up our issue tracker. We made the decision to have a private issue tracker in the beginning because there are often security concerns, or open discussions about what direction to go in, because the protocol is still very young. So we just figured we'd leave that all private and not worry too much about it. But we do have a moral obligation, as an open source project, to try to get project management and issue tracking into a more open domain. So there is a lot of open discussion about that. And once those things get into the open domain, it will be considerably easier for open source contributions to occur.
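For readers unfamiliar with the model that the wallet specification formalizes, here is a hedged sketch of UTxO bookkeeping in Haskell. The types are simplified for illustration and are not the specification's actual definitions.

```haskell
import           Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

-- Simplified UTxO model; the real wallet specification is far more
-- detailed (addresses, fees, rollbacks), but the core bookkeeping
-- looks like this.

type TxId = String

data TxIn  = TxIn  { inTx :: TxId, inIx :: Int } deriving (Eq, Ord, Show)
data TxOut = TxOut { owner :: String, value :: Integer } deriving Show

type UTxO = Map TxIn TxOut

data Tx = Tx
  { txId    :: TxId
  , inputs  :: [TxIn]
  , outputs :: [TxOut]   -- new outputs, keyed below by txId and position
  }

-- Applying a transaction: spent inputs leave the UTxO set,
-- freshly created outputs join it.
applyTx :: Tx -> UTxO -> UTxO
applyTx tx utxo =
  Map.union (Map.fromList fresh) (foldr Map.delete utxo (inputs tx))
  where
    fresh = [ (TxIn (txId tx) i, out) | (i, out) <- zip [0 ..] (outputs tx) ]

-- A wallet's balance is the total value of the outputs it can spend.
balance :: String -> UTxO -> Integer
balance who = sum . map value . filter ((== who) . owner) . Map.elems
```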

Q: We have another question on privacy. Are there any plans to implement private transactions in the style of Monero or Zcash?

Charles: Yes. Monero uses a primitive called ring signatures, and Zcash uses a SNARK primitive. Privacy is a complicated topic because you're actually talking about three things. First, there's privacy in terms of linkability: if I look at a transaction or an address, what's the probability that I can relate it to a known identity? So basically, going from anonymous or pseudonymous to known – the linkability dimension. Then there's the obfuscation of amounts. You might not be able to easily connect the addresses or transactions to people, but if it's a public ledger, you can certainly see the most expensive transactions, and that creates a kind of priority queue for deanonymization: 'Ah, well, there's a $10 million transaction that happened today, let's go and find who has that, with a wrench, to rob him'. And then there's the network side of things: can you obfuscate the use of the protocol, or try to prevent people from being able to understand what you're doing with the protocol? There are existing solutions in all three of these categories. Ring signatures and Zcash's SNARKs cover the first. Confidential transactions, for example, cover the second. And the third is covered by technologies like Dandelion. So first, you have to understand that privacy is a spectrum, and also one that carries considerable regulatory discussion. For example, Japan just announced that they're probably going to de-list all the privacy coins. So if we wish to be in the Japanese markets and we were to embrace Monero-style privacy, there's a very low probability that Japanese exchanges would list Ada, which is a high priority for us.
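As a taste of how the second category works, here is a toy additively homomorphic commitment in the style of the Pedersen commitments used by confidential transactions. The parameters are tiny demonstration numbers, nothing like a secure instantiation.

```haskell
-- Toy additively homomorphic commitment, in the style of the Pedersen
-- commitments used by confidential transactions. Tiny demo parameters;
-- a real instantiation uses a large prime-order group.

p, g, h :: Integer
p = 1000003  -- a small prime, demonstration only
g = 2
h = 3

-- Modular exponentiation by squaring.
powMod :: Integer -> Integer -> Integer -> Integer
powMod _ 0 m = 1 `mod` m
powMod b e m
  | even e    = (powMod b (e `div` 2) m ^ 2) `mod` m
  | otherwise = (b * powMod b (e - 1) m) `mod` m

-- Commit to a value v with blinding factor r: C = g^v * h^r mod p.
commit :: Integer -> Integer -> Integer
commit v r = (powMod g v p * powMod h r p) `mod` p

-- The product of commitments commits to the sum of the values, so a
-- validator can check inputs == outputs without seeing any amounts.
main :: IO ()
main = print ((commit 30 5 * commit 12 7) `mod` p == commit 42 12)  -- True
```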

On the other hand, privacy is a moral right. If you don't have privacy in your system, you're basically creating a system where your entire financial history is publicly known back to the system's inception, which is dystopian to the max. So the best way of resolving this is to develop some really good privacy options, implement them as improvement proposals, and then take advantage of the governance part of the system. When voting is available, we can have alternative proposals and say, 'Well, if you wanted ring signature style privacy, the whole banana, here's how we would do that' – and how much public support do we have for that among Ada holders? Basically, have a referendum, see which one wins out, and then you can see where on the spectrum you fall. But the important thing is, people need to be informed. If you maximise privacy, you will inadvertently make the protocol illegal within certain jurisdictions and limit market access there. If you make the protocol more open, you are inviting dystopian people to track what you're doing and use it against you. So we'll let the community decide, but we do have active research. For example, on Dandelion, we've been funding that team at UIUC, and they're creating a version of Dandelion that will come out this year. We have also had discussions with people who have been formalising the Monero cryptographic primitives, trying to make them more efficient, with better security and privacy bounds. There's one project out of UCL by Sarah Meiklejohn, and one out of China by some professors in Beijing and a few other places – I think it's called RingCT 2.0. So there's certainly a lot of good tech, and we know how to implement it, but it's now mostly in the hands of a social phenomenon rather than someone like me making that decision on behalf of the ecosystem.

There’s another thing that’s seldom mentioned, which is the idea of backdoors. We used to live in the (inaudible) debate, or, either you give an unlimited-use private backdoor to a trusted actor like the FBI, or you don’t. But there can be a spectrum with these things. For example, we can put a backdoor in the system, but it has to be constructed with mono-cryptographic primitives, and the use of it requires majority consent of the system, and it’s publicly known if it’s used. So would you be willing to invest in a currency that says only a certain group of people, if they come together and they have near-universal consensus (inaudible) and you know that they’ve used it, and it can only be used on a particular person instead of the entire system, is that a reasonable compromise? So I think that more-nuanced discussion has to be had, and there is certainly a lot of tech that’s being developed to accommodate these types of things. In fact, the inventor of the Snark, Eli Ben-Sasson, has recently started a for-profit company, and he’s developed technology like this to augment Zcash to be able to provide these types of backdoors which are auditable and publicly verifiably when used and, in some cases, one-time used, depending on how they’re deployed. So we’ll certainly be a member of that conversation, and eventually, we’ll settle on something, and it’s (inaudible) of how much privacy’s required. Closely related to it is also the place of metadata and attribution. So under what contexts should you attach mandated transactions, and how do you do that, and how do you share it? This is really important for compliance. If you look at exchanges, they have KYC (know your customer) and AML (anti-money-laundering) reporting requirements, and because of that, they’ve inadvertently become data custodians where they hold tons of personally identifiable information about their customers. They don’t want to hold it, but they have to because of the law. It would be much better having a minimum viable escalation model where you are allowed to have a challenge-response type of a query where you ask questions about your customers, like, are you a US citizen or not? And you can get some verification of that. But you don’t have to have that data necessarily.

The example I like to use is that, in the US, you have to be 21 or older to drink. The way we usually verify that is to look at your driver's licence. And because I see that document, I know your address, your driver's licence number, exactly how old you are, how fat you are, how tall you are, the colour of your eyes and the colour of your hair. That's a bunch of information I don't need to know, but I inadvertently learn it because of the way the verification is done. It would be much simpler to be able to just ask a query – are you over the age of 21? – receive a response of 'yes', and know that the response is correct. Then I've learnt nothing about you other than the answer to that particular question, and we leave it at that. And the proof itself is sufficient for the merchant. So we're certainly involved in that conversation. We have a lab at the University of Edinburgh that studies these types of things, a privacy and verified computation lab. It's led by a former Microsoft person, Markulf Kohlweiss, who worked on the Everest project and other things. Privacy is in the remit of the lab, so we'll come up with some options, and then we'll democratically validate those options. Whichever one the Ada holders decide on, we'll implement in the system. And by the way, this takes time – several years.
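Here is a minimal sketch of that challenge-response idea, with invented types. In a real deployment the plain Bool answer would be replaced by a zero-knowledge proof that the merchant can verify.

```haskell
-- Minimal sketch of the challenge-response idea: the verifier learns
-- one predicate's answer, nothing else. All types are illustrative.

data Credential = Credential
  { holderName :: String
  , holderAge  :: Int
  , holderAddr :: String
  }  -- held privately by the customer

data Query
  = IsOverAge Int
  | IsResidentOf String

-- The credential holder answers a single query; the merchant never
-- sees the underlying document.
answer :: Credential -> Query -> Bool
answer c (IsOverAge n)    = holderAge c >= n
answer c (IsResidentOf s) = holderAddr c == s

main :: IO ()
main = print (answer (Credential "Jane" 34 "London") (IsOverAge 21))  -- True
```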

Continue on to Part 2 here.
