ReportWire

Category: Cryptocurrency

Cryptocurrency | ReportWire publishes the latest breaking U.S. and world news, trending topics and developing stories from around the globe.

  • DAOs Are Not Scary, Part 1: Self-Enforcing Contracts And Factum Law | Ethereum Foundation Blog


    Many of the concepts that we promote over in Ethereum land may seem incredibly futuristic, and perhaps even frightening, at times. We talk about so-called “smart contracts” that execute themselves without any need, or any opportunity, for human intervention or involvement, people forming Skynet-like “decentralized autonomous organizations” that live entirely on the cloud and yet control powerful financial resources and can incentivize people to do very real things in the physical world, decentralized “math-based law”, and a seemingly utopian quest to create some kind of fully trust-free society. To the uninformed user, and especially to those who have not even heard of plain old Bitcoin, it can be hard to see how these kinds of things are possible, and, if they are, why they could possibly be desirable. The purpose of this series will be to dissect these ideas in detail, and show exactly what we mean by each one, discussing its properties, advantages and limitations.

    The first installment of the series will talk about so-called “smart contracts”. Smart contracts are an idea that has been around for several decades, but was given its current name and first substantially brought to the (cryptography-inclined) public’s attention by Nick Szabo in 2005. In essence, the definition of a smart contract is simple: a smart contract is a contract that enforces itself. That is to say, whereas a regular contract is a piece of paper (or more recently PDF document) containing text which implicitly asks for a judge to order a party to send money (or other property) to another party under certain conditions, a smart contract is a computer program that can be run on hardware which automatically executes those conditions. Nick Szabo uses the example of a vending machine:

    A canonical real-life example, which we might consider to be the primitive ancestor of smart contracts, is the humble vending machine. Within a limited amount of potential loss (the amount in the till should be less than the cost of breaching the mechanism), the machine takes in coins, and via a simple mechanism, which makes a freshman computer science problem in design with finite automata, dispense change and product according to the displayed price. The vending machine is a contract with bearer: anybody with coins can participate in an exchange with the vendor. The lockbox and other security mechanisms protect the stored coins and contents from attackers, sufficiently to allow profitable deployment of vending machines in a wide variety of areas.

    Smart contracts are the application of this concept to, well, lots of things. We can have smart financial contracts that automatically shuffle money around based on certain formulas and conditions, smart domain name sale orders that give the domain to whoever first sends in $200, perhaps even smart insurance contracts that control bank accounts and automatically pay out based on some trusted source (or combination of sources) supplying data about real-world events.

    Smart Property

    At this point, however, one obvious question arises: how are these contracts going to be enforced? Just like traditional contracts, which are not worth the paper they’re written on unless there’s an actual judge backed by legal power enforcing them, smart contracts need to be “plugged in” to some system in order to actually have power to do anything. The most obvious, and oldest, solution is hardware, an idea that also goes by the name “smart property”. Nick Szabo’s vending machine is the canonical example here. Inside the vending machine, there is a sort of proto-smart-contract, containing a set of computer code that looks something like this:

    if button_pressed == "Coca Cola" and money_inserted >= 1.75:
        release("Coca Cola")
        return_change(money_inserted - 1.75)
    elif button_pressed == "Aquafina Water" and money_inserted >= 1.25:
        release("Aquafina Water")
        return_change(money_inserted - 1.25)
    elif ...

    The contract has four “hooks” into the outside world: the button_pressed and money_inserted variables as input, and the release and return_change commands as output. All four of these depend on hardware, although we focus on the last three because human input is generally considered to be a trivial problem. If the contract were running on an Android phone from 2007, it would be useless; the Android phone has no way of knowing how much money was inserted into a slot, and certainly cannot release Coca Cola bottles or return change. On a vending machine, on the other hand, the contract carries some “force”, backed by the vending machine’s internal Coca Cola holdings and its physical security preventing people from just taking the Coca Cola without following the rules of the contract.
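The branching logic above is small enough to make concrete. This is a toy sketch only: the prices, product names, and the refund-everything-on-failure behaviour are illustrative, and the returned tuple stands in for the hardware `release` and `return_change` hooks.

```python
# Toy model of the vending machine proto-smart-contract above.
PRICES = {"Coca Cola": 1.75, "Aquafina Water": 1.25}

def vend(button_pressed, money_inserted):
    """Return (product, change); (None, refund) if the conditions fail."""
    price = PRICES.get(button_pressed)
    if price is None or money_inserted < price:
        # Contract conditions not met: dispense nothing, refund everything.
        return (None, money_inserted)
    return (button_pressed, round(money_inserted - price, 2))
```

The point of the sketch is the shape of the contract: a pure function of its inputs, with no opportunity for either party to intervene between condition and consequence.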

    Another, more futuristic, application of smart property is rental cars: imagine a world where everyone has their own private key on a smartphone, and there is a car such that when you pay $100 to a certain address the car automatically starts responding to commands signed by your private key for a day. The same principle can also be applied to houses. If that sounds far-fetched, keep in mind that office buildings are largely smart property already: access is controlled by access cards, and the question of which (if any) doors each card is valid for is determined by a piece of code linked to a database. And if the company has an HR system that automatically processes employment contracts and activates new employees’ access cards, then that employment contract is, to a slight extent, a smart contract.

    Smart Money and Factum Society

    However, physical property is very limited in what it can do. Physical property has a limited amount of security, so you cannot practically do anything interesting with more than a few tens of thousands of dollars with a smart-property setup. And ultimately, the most interesting contracts involve transferring money. But how can we actually make that work? Right now, we basically can’t. We can, theoretically, give contracts the login details to our bank accounts, and then have the contract send money under some conditions, but the problem is that this kind of contract is not really “self-enforcing”. The party making the contract can always simply turn the contract off just before payment is due, or drain their bank account, or even simply change the password to the account. Ultimately, no matter how the contract is integrated into the system, someone has the ability to shut it off.

    How can we solve the problem? Ultimately, the answer is one that is radical in the context of our wider society, but already very much old news in the world of Bitcoin: we need a new kind of money. So far, the evolution of money has followed three stages: commodity money, commodity-backed money and fiat money. Commodity money is simple: it’s money that is valuable because it is also simultaneously a commodity that has some “intrinsic” use value. Silver and gold are perfect examples, and in more traditional societies we also have tea, salt (etymology note: this is where the word “salary” comes from), seashells and the like. Next came commodity-backed money – banks issuing certificates that are valuable because they are redeemable for gold. Finally, we have fiat money. The “fiat” in “fiat money” is just like in “fiat lux”, except instead of God saying “let there be light” it’s the federal government saying “let there be money”. The money has value largely because the government issuing it accepts that money, and only that money, as payment for taxes and fees, alongside several other legal privileges.

    With Bitcoin, however, we have a new kind of money: factum money. The difference between fiat money and factum money is this: whereas fiat money is put into existence, and maintained, by a government (or, theoretically, some other kind of agency) producing it, factum money just is. Factum money is simply a balance sheet, with a few rules on how that balance sheet can be updated, and that money is valid among that set of users which decides to accept it. Bitcoin is the first example, but there are more. For example, one can have an alternative rule which states that only bitcoins coming out of a certain “genesis transaction” count as part of the balance sheet; this is called “colored coins”, and is also a kind of factum money (unless those colored coins are fiat or commodity-backed).
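"A balance sheet with a few rules on how it can be updated" can be sketched in a handful of lines. The account names and the two-rule validity check here are invented for illustration; real factum money replaces the trusted function call with signatures and consensus.

```python
# Minimal sketch of factum money: a balance sheet plus an update rule.
balances = {"alice": 100, "bob": 0}

def transfer(sender, recipient, amount):
    """The only rule: you may move coins you actually hold."""
    if amount <= 0 or balances.get(sender, 0) < amount:
        return False  # invalid update; the balance sheet is unchanged
    balances[sender] -= amount
    balances[recipient] = balances.get(recipient, 0) + amount
    return True
```

Everything else about the currency, including whether anyone values it, lives outside the code: the money "is" only among the users who agree to run these rules.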

    The main promise of factum money, in fact, is precisely the fact that it meshes so well with smart contracts. The main problem with smart contracts is enforcement: if a contract says to send $200 to Bob if X happens, and X does happen, how do we ensure that the $200 actually gets sent to Bob?

    This is actually a much more revolutionary development than you might think at first; with factum money, we have created a way for contracts, and perhaps even law in general, to work, and be effective, without relying on any kind of mechanism whatsoever to enforce it. Want a $100 fine for littering? Then define a currency so that you have 100 units less if you litter, and convince people to accept it. Now, that particular example is very far-fetched, and likely impractical without a few major caveats which we will discuss below, but it shows the general principle, and there are many more moderate examples of this kind of principle that definitely can be put to work.

    Just How Smart Are Smart Contracts?

    Smart contracts are obviously very effective for any kind of financial applications, or more generally any kind of swaps between two different factum assets. One example is a domain name sale; a domain, like google.com, is a factum asset, since it’s backed by a database on a server that only carries any weight because we accept it, and money can obviously be factum as well. Right now, selling a domain is a complicated process that often requires specialized services; in the future, you may be able to package up a sale offer into a smart contract and put it on the blockchain, and if anyone takes it both sides of the trade will happen automatically – no possibility of fraud involved. Going back to the world of currencies, decentralized exchange is another example, and we can also do financial contracts such as hedging and leverage trading.

    However, there are places where smart contracts are not so good. Consider, for example, the case of an employment contract: A agrees to do a certain task for B in exchange for payment of X units of currency C. The payment part is easy to smart-contract-ify. However, there is a part that is not so easy: verifying that the work actually took place. If the work is in the physical world, this is pretty much impossible, since blockchains don’t have any way of accessing the physical world. Even if it’s a website, there is still the question of assessing quality, and although computer programs can use machine learning algorithms to judge such characteristics quite effectively in certain cases, it is incredibly hard to do so in a public contract without opening the door for employees “gaming the system”. Sometimes, a society ruled by algorithms is just not quite good enough.

    Fortunately, there is a moderate solution that can capture the best of both worlds: judges. A judge in a regular court has essentially unlimited power to do what they want, and the process of judging does not have a particularly good interface; people need to file a suit, wait a significant length of time for a trial, and the judge eventually makes a decision which is enforced by the legal system – itself not a paragon of lightning-quick efficiency. Private arbitration often manages to be cheaper and faster than courts, but even there the problems are still the same. Judges in a factum world, on the other hand, are very much different. A smart contract for employment might look like this:

    if says(B, "A did the job") or says(J, "A did the job"):
        send(200, A)
    elif says(A, "A did not do the job") or says(J, "A did not do the job"):
        send(200, B)

    says is a signature verification algorithm; says(P,T) basically checks if someone had submitted a message with text T and a digital signature that verifies using P’s public key. So how does this contract work? First, the employer would send 200 currency units into the contract, where they would sit in escrow. In most cases, the employer and employee are honest, so either A quits and releases the funds back to B by signing a message saying “A did not do the job” or A does the job, B verifies that A did the job, and the contract releases the funds to A. However, if A does the job, and B disagrees, then it’s up to judge J to say that either A did the job or A did not do the job.

    Note that J’s power is very carefully delineated; all that J has the right to do is say that either A did the job or A did not do the job. A more sophisticated contract might also give J the right to grant judgements within the range between the two extremes. J does not have the right to say that A actually deserves 600 currency units, or that by the way the entire relationship is illegal and J should get the 200 units, or anything else outside of the clearly defined boundaries. And J’s power is enforced by factum – the contract contains J’s public key, and thus the funds automatically go to A or B based on the boundaries. The contract can even require messages from 2 out of 3 judges, or it can have separate judges judge separate aspects of the work and have the contract automatically assign B’s work a quality score based on those ratings. Any contract can simply plug in any judge in exactly the way that they want, whether to judge the truth or falsehood of a specific fact, provide a measurement of some variable, or be one of the parties facilitating the arrangement.
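The escrow flow above can be simulated end to end. In this sketch, `says()` checks a set of (party, statement) pairs the contract has received, which stands in for real digital-signature verification; the party names and the 200-unit amount come from the example, while the function shape is invented for illustration.

```python
# Toy settlement logic for the escrowed employment contract above.
def settle(statements, worker="A", employer="B", judge="J", amount=200):
    """Return (recipient, amount) once a deciding statement arrives,
    or None while the funds stay in escrow."""
    def says(party, text):
        # Stand-in for signature verification against party's public key.
        return (party, text) in statements

    if says(employer, "A did the job") or says(judge, "A did the job"):
        return (worker, amount)    # release escrow to the worker
    if says(worker, "A did not do the job") or says(judge, "A did not do the job"):
        return (employer, amount)  # refund the employer
    return None                    # disputed or pending: funds stay put
```

Note how the judge's power is exactly the two strings the contract checks for, and nothing more.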

    How will this be better than the current system? In short, what this introduces is “judges as a service”. Now, in order to become a “judge” you need to get hired at a private arbitration firm or a government court or start your own. In a cryptographically enabled factum law system, being a judge simply requires having a public key and a computer with internet access. As counterintuitive as it sounds, not all judges need to be well-versed in law. Some judges can specialize in, for example, determining whether or not a product was shipped correctly (ideally, the postal system would do this). Other judges can verify the completion of employment contracts. Others would appraise damages for insurance contracts. It would be up to the contract writer to plug in judges of each type in the appropriate places in the contract, and the part of the contract that can be defined purely in computer code will be.

    And that’s all there is to it.

    The next part of this series will talk about the concept of trust, and what cryptographers and Bitcoin advocates really mean when they talk about building a “trust-free” society.


  • Ethereum Scalability and Decentralization Updates | Ethereum Foundation Blog


    Scalability is now at the forefront of the technical discussion in the cryptocurrency scene. The Bitcoin blockchain is currently over 12 GB in size, requiring a period of several days for a new bitcoind node to fully synchronize, the UTXO set that must be stored in RAM is approaching 500 MB, and continued software improvements in the source code are simply not enough to alleviate the trend. With every passing year, it becomes more and more difficult for an ordinary user to locally run a fully functional Bitcoin node on their own desktop, and even as the price, merchant acceptance and popularity of Bitcoin have skyrocketed, the number of full nodes in the network has essentially stayed the same since 2011. The 1 MB block size limit currently puts a theoretical cap on this growth, but at a high cost: the Bitcoin network cannot process more than 7 transactions per second. If the popularity of Bitcoin jumps up tenfold yet again, then the limit will force the transaction fee up to nearly a dollar, making Bitcoin less useful than Paypal. If there is one problem that an effective implementation of cryptocurrency 2.0 needs to solve, it is this.

    The reason why we in the cryptocurrency space are having these problems, and are making so little headway toward coming up with a solution, is that there is one fundamental issue with all cryptocurrency designs that needs to be addressed. Out of all of the various proof of work, proof of stake and reputational consensus-based blockchain designs that have been proposed, not a single one has managed to overcome the same core problem: that every single full node must process every single transaction. Having nodes that can process every transaction, even up to a level of thousands of transactions per second, is possible; centralized systems like Paypal, Mastercard and banking servers do it just fine. However, the problem is that it takes a large quantity of resources to set up such a server, and so there is no incentive for anyone except a few large businesses to do it. Once that happens, those few nodes are potentially vulnerable to profit motive and regulatory pressure, and may start making theoretically unauthorized changes to the state, like giving themselves free money, and all other users, who are dependent on those centralized nodes for security, would have no way of proving that the block is invalid since they do not have the resources to process the entire block.

    In Ethereum, as of this point, we have no fundamental improvements over the principle that every full node must process every transaction. There have been ingenious ideas proposed by various Bitcoin developers involving multiple merge-mined chains with a protocol for moving funds from one chain to another, and these will be a large part of our cryptocurrency research effort, but at this point research into how to implement this optimally is not yet mature. However, with the introduction of Block Protocol 2.0 (BP2), we have a protocol that, while not getting past the fundamental blockchain scalability flaw, does get us partway there: as long as at least one honest full node exists (and, for anti-spam reasons, has at least 0.01% mining power or ether ownership), “light clients” that only download a small amount of data from the blockchain can retain the same level of security as full nodes.

    What Is A Light Client?


    The basic idea behind a light client is that, thanks to a data structure present in Bitcoin (and, in a modified form, Ethereum) called a Merkle tree, it is possible to construct a proof that a certain transaction is in a block, such that the proof is much smaller than the block itself. Right now, a Bitcoin block is about 150 KB in size; a Merkle proof of a transaction is about half a kilobyte. If Bitcoin blocks become 2 GB in size, the proofs might expand to a whole kilobyte. To construct a proof, one simply needs to follow the “branch” of the tree all the way up from the transaction to the root, and provide the nodes on the side every step of the way. Using this mechanism, light clients can be assured that transactions sent to them (or from them) actually made it into a block.
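The "follow the branch up and provide the nodes on the side" construction can be shown concretely. This sketch uses SHA-256 as a stand-in hash (Bitcoin actually double-hashes, and Ethereum uses a modified Patricia tree), and duplicates the last node of odd layers, so it illustrates the proof-size argument rather than either system's exact tree.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()  # stand-in hash function

def _next_layer(layer):
    if len(layer) % 2:
        layer = layer + [layer[-1]]       # pad odd layers by duplication
    return layer, [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]

def merkle_root(leaves):
    layer = [h(x) for x in leaves]
    while len(layer) > 1:
        _, layer = _next_layer(layer)
    return layer[0]

def merkle_proof(leaves, index):
    """Sibling hashes from the leaf up to the root: (hash, sibling_is_left)."""
    layer, proof = [h(x) for x in leaves], []
    while len(layer) > 1:
        padded, nxt = _next_layer(layer)
        sibling = index ^ 1
        proof.append((padded[sibling], sibling < index))
        layer, index = nxt, index // 2
    return proof

def verify(leaf, proof, root):
    acc = h(leaf)
    for sibling, is_left in proof:
        acc = h(sibling + acc) if is_left else h(acc + sibling)
    return acc == root
```

Note that the proof length grows with the tree's depth, i.e. logarithmically in the number of transactions, which is why a proof stays around a kilobyte even for enormous blocks.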

    This makes it substantially harder for malicious miners to trick light clients. If, in a hypothetical world where running a full node was completely impractical for ordinary users, a user wanted to claim that they sent 10 BTC to a merchant who does not have the resources to download the entire block, the merchant would not be helpless; they would ask for a proof that a transaction sending 10 BTC to them is actually in the block. If the attacker is a miner, they can potentially be more sophisticated and actually put such a transaction into a block, but have it spend funds (ie. UTXO) that do not actually exist. However, even here there is a defense: the light client can ask for a second Merkle tree proof showing that the funds that the 10 BTC transaction is spending also exist, and so on down to some safe block depth. From the point of view of a miner using a light client, this morphs into a challenge-response protocol: full nodes verifying transactions, upon detecting that a transaction spent an output that does not exist, can publish a “challenge” to the network, and other nodes (likely the miner of that block) would need to publish a “response” consisting of a Merkle tree proof showing that the outputs in question do actually exist in some previous block. However, there is one weakness in this protocol in Bitcoin: transaction fees. A malicious miner can publish a block giving themselves a 1000 BTC reward, and other miners running light clients would have no way of knowing that this block is invalid without adding up all of the fees from all of the transactions themselves; for all they know, someone else could have been crazy enough to actually add 975 BTC worth of fees.

    BP2


    With the previous Block Protocol 1.0, Ethereum was even worse; there was no way for a light client to even verify that the state tree of a block was a valid consequence of the parent state and the transaction list. In fact, the only way to get any assurances at all was for a node to run through every transaction and sequentially apply them to the parent state themselves. BP2, however, adds some stronger assurances. With BP2, every block now has three trees: a state tree, a transaction tree, and a stack trace tree providing the intermediate root of the state tree and the transaction tree after each step. This allows for a challenge-response protocol that, in simplified form, works as follows:

    1. Miner M publishes block B. Perhaps the miner is malicious, in which case the block updates the state incorrectly at some point.

    2. Light node L receives block B, and does basic proof of work and structural validity checks on the header. If these checks pass, then L starts off treating the block as legitimate, though unconfirmed.

    3. Full node F receives block B, and starts doing a full verification process, applying each transaction to the parent state, and making sure that each intermediate state matches the intermediate state provided by the miner. Suppose that F finds an inconsistency at point k. Then, F broadcasts a “challenge” to the network consisting of the hash of B and the value k.

    4. L receives the challenge, and temporarily flags B as untrustworthy.

    5. If F’s claim is false, and the block is valid at that point, then M can produce a proof of localized consistency by showing a Merkle tree proof of point k in the stack trace, point k+1 in the stack trace, and the subset of Merkle tree nodes in the state and transaction tree that were modified during the process of updating from k to k+1. L can then verify the proof by taking M’s word on the validity of the block up to point k, manually running the update from k to k+1 (this consists of processing a single transaction), and making sure the root hashes match what M provided at the end. L would, of course, also check that the Merkle tree proof for the values at state k and k+1 is valid.

    6. If F’s claim is true, then M would not be able to come up with a response, and after some period of time L would discard B outright.

    Note that currently the model is for transaction fees to be burned, not distributed to miners, so the weakness in Bitcoin’s light client protocol does not apply. However, even if we decided to change this, the protocol can easily be adapted to handle it; the stack trace would simply also keep a running counter of transaction fees alongside the state and transaction list. As an anti-spam measure, in order for F’s challenge to be valid, F needs to have either mined one of the last 10000 blocks or have held 0.01% of the total supply of ether for at least some period of time. If a full node sends a false challenge, meaning that a miner successfully responds to it, light nodes can blacklist the node’s public key.
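The full node's scan in step 3 is simple to express. In this sketch the state is a dict, each "transaction" writes a single key, and `apply_tx`/`state_root` are toy stand-ins for Ethereum's real state transition function and Merkle tree root.

```python
def apply_tx(state, tx):
    # Toy state transition: each "transaction" is a (key, value) write.
    new = dict(state)
    key, value = tx
    new[key] = value
    return new

def state_root(state):
    # Toy stand-in for the Merkle tree root of the state.
    return tuple(sorted(state.items()))

def find_challenge(parent_state, txs, claimed_roots):
    """Step 3: re-run every transaction against the parent state and
    return the first point k where the miner's claimed intermediate
    root is wrong, or None if the block is consistent."""
    state = parent_state
    for k, tx in enumerate(txs):
        state = apply_tx(state, tx)
        if state_root(state) != claimed_roots[k]:
            return k  # broadcast (hash(B), k) as the challenge
    return None
```

The same `apply_tx` step is what the light client re-runs in step 5, but only from k to k+1, which is the entire point of the stack trace tree.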

    Altogether, what this means is that, unlike Bitcoin, Ethereum will likely still be fully secure, including against fraudulent issuance attacks, even if only a small number of full nodes exist; as long as at least one full node is honest, verifying blocks and publishing challenges where appropriate, light clients can rely on it to point out which blocks are flawed. Note that there is one weakness in this protocol: you now need to know all transactions ahead of time before processing a block, and adding new transactions requires substantial effort to recalculate intermediate stack trace values, so the process of producing a block will be more inefficient. However, it is likely possible to patch the protocol to get around this, and if it is possible then BP2.1 will have such a fix.

    Blockchain-based Mining

    We have not finalized the details of this, but Ethereum will likely use something similar to the following for its mining algorithm:

    1. Let H[i] = sha3(sha3(block header without nonce) ++ nonce ++ i) for i in [0 …16]

    2. Let N be the number of transactions in the block.

    3. Let T[i] be the (H[i] mod N)th transaction in the block.

    4. Let S be the parent block state.

    5. Apply T[0] … T[15] to S, and let the resulting state be S’.

    6. Let x = sha3(S’.root)

    7. The block is valid if x * difficulty <= 2^256

    This has the following properties:

    1. This is extremely memory-hard, even more so than Dagger, since mining effectively requires access to the entire blockchain. However it is parallelizable with shared disk space, so it will likely be GPU-dominated, not CPU-dominated as Dagger originally hoped to be.

    2. It is memory-easy to verify, since a proof of validity consists of only the relatively small subset of Patricia nodes that are used while processing T[0] … T[15]

    3. All miners essentially have to be full nodes; asking the network for block data for every nonce is prohibitively slow. Thus there will be a larger number of full nodes in Ethereum than in Bitcoin.

    4. As a result of (3), one of the major motivations to use centralized mining pools, the fact that they allow miners to operate without downloading the entire blockchain, is nullified. The other main reason to use mining pools, the fact that they even out the payout rate, can be accomplished just as easily with the decentralized p2pool (which we will likely end up supporting with development resources).

    5. ASICs for this mining algorithm are simultaneously ASICs for transaction processing, so Ethereum ASICs will help solve the scalability problem.
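The seven steps above can be sketched as follows. This is a toy: hashlib's `sha3_256` stands in for whatever sha3 variant is eventually used, the "transactions" are single key-value writes against a dict state, and the state root is a simple serialization rather than a Patricia tree root. (The post's step 1 lists i in [0 …16] while step 5 applies T[0] … T[15]; the sketch draws 16 indices.)

```python
import hashlib

def sha3(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()  # stand-in for the post's sha3

def check_pow(header_without_nonce, nonce, txs, parent_state, difficulty):
    """Validity check, steps 1-7, over a toy dict state."""
    n = len(txs)                      # step 2: number of transactions
    state = dict(parent_state)        # step 4: S = parent block state
    for i in range(16):               # steps 1, 3, 5: pick and apply T[i]
        h_i = sha3(sha3(header_without_nonce) + nonce + bytes([i]))
        key, value = txs[int.from_bytes(h_i, "big") % n]
        state[key] = value            # toy "apply transaction to S"
    s_root = sha3(repr(sorted(state.items())).encode())  # toy S'.root
    x = int.from_bytes(sha3(s_root), "big")              # step 6
    return x * difficulty <= 2**256                      # step 7
```

Note that the miner cannot evaluate step 5 without the transactions' state context, which is exactly the property that forces miners to be full nodes.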

    From here, there is only really one optimization that can be made: figuring out some way to get past the obstacle that every full node must process every transaction. This is a hard problem; a truly scalable and effective solution will take a while to develop. However, this is a strong start, and may even end up as one of the key ingredients to a final solution.


  • Important Statement regarding the Ether pre-sale | Ethereum Foundation Blog


    The Ethereum Project has had the incredible privilege to launch its PoC testnet and engage the crypto-currency community over the past two months. During our experiences, we’ve encountered a lot of passionate support and wonderful questions that have helped us refine our thoughts and goals including the process we will eventually use to sell ether. This said, we have not finalized the structure and format for the ether presale and thus we do not recommend, encourage, or endorse any attempt to sell, trade, or acquire ether.

    We have recently become aware of peercover.com announcing a fundraising structure based in some way upon ether. They are in no way associated with the Ethereum project, do not speak for it, and are, in our opinion, doing a disservice to the Ethereum community by possibly leading their own clients into a situation that they don’t understand. Offering to sell ether that doesn’t yet exist to misled purchasers can only be considered irresponsible at this point. Buyer beware.

    We request that peercover.com cease to offer ether forwards until there is more information released on the Ethereum project and the potential value of the ether cryptofuel, and until lawyers in various countries clarify what the securities and regulatory issues might be in selling ether to the public.

    [ad_2]

    Source link

  • Why Not Just Use X? An Instructive Example from Bitcoin | Ethereum Foundation Blog


    Bitcoin developer Gregory Maxwell writes the following on Reddit:

    There is a design flaw in the Bitcoin protocol where it’s possible for a third party to take a valid transaction of yours and mutate it in a way which leaves it valid and functionally identical but with a different transaction ID. This greatly complicates writing correct wallet software, and it can be used abusively to invalidate long chains of unconfirmed transactions that depend on the non-mutant transaction (since transactions refer to each other by txid).

    This issue arises from several sources, one of them being OpenSSL’s willingness to accept and make sense of signatures with invalid encodings. A normal ECDSA signature encodes two large integers, and the encoding isn’t constant-length — if there are leading zeros you are supposed to drop them.

    It’s easy to write software that assumes the signature will be a constant length and then leaves extra leading zeros in them.
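The quoted failure mode is easy to demonstrate in miniature: two byte-level encodings of the same signature integer hash to different transaction IDs. The "transaction" layout below is invented for illustration; only the double-SHA-256 txid construction mirrors Bitcoin.

```python
import hashlib

def txid(raw_tx: bytes) -> str:
    # Bitcoin-style txid: double SHA-256 of the serialized transaction.
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).hexdigest()

# The same signature integer, encoded strictly (leading zeros dropped)
# and sloppily (with a redundant leading zero a lax parser accepts).
r_strict = (0x3FA1).to_bytes(2, "big")
r_padded = b"\x00" + r_strict

# Two serializations of "the same" signed transaction:
tx_strict = b"toy-tx:" + r_strict
tx_mutated = b"toy-tx:" + r_padded
```

Both encodings decode to the same integer, so both signatures verify, yet the txids differ, which is exactly what breaks wallets that chain unconfirmed transactions by txid.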

    This is a very interesting cautionary tale, and is particularly important because situations like these are part of the reason why we have made certain design decisions in our development philosophy. Specifically, the issue is this: many people continue to bring up the point that we are in many places unnecessarily reinventing the wheel, creating our own serialization format, RLP, instead of using the existing protobuf, and building an application-specific scripting language instead of “just using Lua”. This is a very valid concern; not-invented-here syndrome is a commonly used pejorative, so doing such in-house development does require justification.

    And the cautionary tale I quoted above provides precisely the perfect example of the justification that I will provide. External technologies, whether protobuf, Lua or OpenSSL, are very good, and have years of development behind them, but in many cases they were never designed with the perfect consensus, determinism and cryptographic integrity in mind that cryptocurrencies require. The OpenSSL situation above is the perfect example; aside from cryptocurrencies, there really are no other situations where the fact that you can take a valid signature and turn it into another valid signature with a different hash is a significant problem, and yet here it’s fatal. One of our core principles in Ethereum is simplicity; the protocol should be as simple as possible, and the protocol should not contain any black boxes. Every single feature of every single sub-protocol should be precisely 100% documented in the whitepaper or wiki, and implemented using that as a specification (ie. test-driven development). Doing this for an existing software package is arguably almost as hard as building an entirely new package from scratch; in fact, it may even be harder, since existing software packages often have more complexity than they need to in order to be feature-complete, whereas our alternatives do not – read the protobuf spec and compare it to the RLP spec to understand what I mean.

    Note that the above principle has its limits. For example, we are certainly not foolish enough to start inventing our own hash algorithms; instead we use the universally acclaimed and well-vetted SHA3, and for signatures we’re using the same old secp256k1 as Bitcoin, although we’re using RLP to store the v,r,s triple (the v is an extra two bits for public key recovery purposes) instead of the OpenSSL buffer protocol. These kinds of situations are the ones where “just using X” is precisely the right thing to do, because X has a clean and well-understood interface and there are no subtle differences between different implementations. The SHA3 of the empty string is c5d2460186…a470 in C++, in Python, and in Javascript; there’s no debate about it. In between these two extremes, it’s basically a matter of finding the right balance.
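    To see how small the RLP spec is compared to protobuf, here is essentially the whole encoder, sketched in Python from the RLP spec (a sketch only: decoding and canonical-length checks are omitted):

    ```python
    def rlp_encode(item):
        # Recursively encode a byte string or a (possibly nested) list.
        if isinstance(item, bytes):
            if len(item) == 1 and item[0] < 0x80:
                return item                          # a single low byte is itself
            return encode_length(len(item), 0x80) + item
        elif isinstance(item, list):
            payload = b''.join(rlp_encode(x) for x in item)
            return encode_length(len(payload), 0xc0) + payload
        raise TypeError("RLP input must be bytes or a list")

    def encode_length(length, offset):
        # Short payloads (<= 55 bytes) get a one-byte prefix; longer ones
        # get a prefix encoding the length of the length.
        if length <= 55:
            return bytes([offset + length])
        length_bytes = length.to_bytes((length.bit_length() + 7) // 8, 'big')
        return bytes([offset + 55 + len(length_bytes)]) + length_bytes
    ```

    For example, `rlp_encode(b'dog')` yields `b'\x83dog'` and `rlp_encode([b'cat', b'dog'])` yields `b'\xc8\x83cat\x83dog'`, matching the spec’s test vectors.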


  • Cryptographic Code Obfuscation: Decentralized Autonomous Organizations Are About to Take a Huge Leap Forward | Ethereum Foundation Blog

    There have been a number of very interesting developments in cryptography in the past few years. Satoshi’s blockchain notwithstanding, perhaps the first major breakthrough after blinding and zero-knowledge proofs is fully homomorphic encryption, a technology which allows you to upload your data onto a server in an encrypted form so that the server can then perform calculations on it and send you back the results, all without having any idea what the data is. In 2013, we saw the beginnings of succinct computational integrity and privacy (SCIP), a toolkit pioneered by Eli Ben-Sasson in Israel that lets you cryptographically prove that you carried out some computation and got a certain output. On the more mundane side, we now have sponge functions, an innovation that substantially simplifies the previous mess of hash functions, stream ciphers and pseudorandom number generators into a beautiful, single construction. Most recently of all, however, there has been another major development in the cryptographic scene, and one whose applications are potentially very far-reaching both in the cryptocurrency space and for software as a whole: obfuscation.

    The idea behind obfuscation is an old one, and cryptographers have been trying to crack the problem for years. The problem behind obfuscation is this: is it possible to somehow encrypt a program to produce another program that does the same thing, but which is completely opaque, so there is no way to understand what is going on inside? The most obvious use case is proprietary software – if you have a program that incorporates advanced algorithms, and want to let users use the program on specific inputs without being able to reverse-engineer the algorithm, the only way to do such a thing is to obfuscate the code. Proprietary software is for obvious reasons unpopular among the tech community, so the idea has not seen a lot of enthusiasm, a problem compounded by the fact that each and every time a company would try to put an obfuscation scheme into practice it would quickly get broken. Five years ago, researchers put what might perhaps seem to be a final nail in the coffin: a mathematical proof, using arguments vaguely similar to those used to show the unsolvability of the halting problem, that a general-purpose obfuscator that converts any program into a “black box” is impossible.

    At the same time, however, the cryptography community began to follow a different path. Understanding that the “black box” ideal of perfect obfuscation will never be achieved, researchers set out to instead aim for a weaker target: indistinguishability obfuscation. The definition of an indistinguishability obfuscator is this: given two programs A and B that compute the same function, if an effective indistinguishability obfuscator O computes two new programs X = O(A) and Y = O(B), given X and Y there is no (computationally feasible) way to determine which of X and Y came from A and which came from B. In theory, this is the best that anyone can do; if there is a better obfuscator, P, then if you put A and P(A) through the indistinguishability obfuscator O, there would be no way to tell O(A) and O(P(A)) apart, meaning that the extra step of adding P could not hide any information about the inner workings of the program that O does not. Creating such an obfuscator is the problem which many cryptographers have occupied themselves with for the last five years. And in 2013, UCLA cryptographer Amit Sahai, homomorphic encryption pioneer Craig Gentry and several other researchers figured out how to do it.

    Does the indistinguishability obfuscator actually hide private data inside the program? To see what the answer is, consider the following. Suppose your secret password is bobalot_13048, and the SHA256 of the password starts with 00b9bbe6345de82f. Now, construct two programs. A just outputs 00b9bbe6345de82f, whereas B actually stores bobalot_13048 inside, and when you run it, it computes SHA256(bobalot_13048) and returns the first 16 hex digits of the output. According to the indistinguishability property, O(A) and O(B) are indistinguishable. If there were some way to extract bobalot_13048 from B, it would therefore be possible to extract bobalot_13048 from A, which essentially implies that you can break SHA256 (or, by extension, any hash function). By standard assumptions, this is impossible, so therefore the obfuscator must also make it impossible to uncover bobalot_13048 from B. Thus, we can be pretty sure that Sahai’s obfuscator does actually obfuscate.
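    The two programs can be sketched in Python. This is a toy sketch: here the target prefix is computed in the same file for convenience, whereas in the thought experiment A literally hard-codes the 16-hex-digit string and never sees the password:

    ```python
    import hashlib

    PASSWORD = "bobalot_13048"
    # In the thought experiment, program A hard-codes this string and nothing else.
    TARGET = hashlib.sha256(PASSWORD.encode()).hexdigest()[:16]

    def program_a():
        # Knows only the output, not the password.
        return TARGET

    def program_b():
        # Stores the password and recomputes the output on every call.
        return hashlib.sha256(PASSWORD.encode()).hexdigest()[:16]

    # A and B compute the same function, so an indistinguishability
    # obfuscator O must make O(A) and O(B) indistinguishable; extracting
    # the password from O(B) would mean extracting it from O(A) too.
    assert program_a() == program_b()
    ```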

    So What’s The Point?

    In many ways, code obfuscation is one of the holy grails of cryptography. To understand why, consider just how easily nearly every other primitive can be implemented with it. Want public key encryption? Take any symmetric-key encryption scheme, construct an encryptor with your secret key built in, obfuscate it, and publish it on the web. Anyone can now encrypt messages to you, but only you, holding the raw key, can decrypt them – the obfuscated program serves as your public key. Want a signature scheme? Public key encryption provides that for you as an easy corollary. Want fully homomorphic encryption? Construct a program which takes two numbers as an input, decrypts them, adds the results, encrypts the sum, and obfuscate the program. Do the same for multiplication, send both programs to the server, and the server will swap your adder and multiplier into its code and perform your computation.
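    The public-key construction can be made concrete with a toy sketch, using a Python closure to stand in for the obfuscator O. A closure hides nothing, so this is purely illustrative; the toy SHA256-keystream cipher is also an invention for this sketch, not a recommendation:

    ```python
    import hashlib, os

    def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
        # Toy stream cipher: derive n keystream bytes from SHA256(key|nonce|counter).
        out = b''
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(key + nonce + counter.to_bytes(8, 'big')).digest()
            counter += 1
        return out[:n]

    def encrypt(key: bytes, msg: bytes) -> bytes:
        nonce = os.urandom(16)
        return nonce + bytes(a ^ b for a, b in zip(msg, keystream(key, nonce, len(msg))))

    def decrypt(key: bytes, ct: bytes) -> bytes:
        nonce, body = ct[:16], ct[16:]
        return bytes(a ^ b for a, b in zip(body, keystream(key, nonce, len(body))))

    secret_key = os.urandom(32)

    def public_encryptor(msg: bytes) -> bytes:
        # Closure over secret_key stands in for the obfuscated, published program;
        # real iO would make the key unrecoverable from the program text.
        return encrypt(secret_key, msg)

    ct = public_encryptor(b"hello")
    assert decrypt(secret_key, ct) == b"hello"
    ```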

    However, aside from that, obfuscation is powerful in another key way, and one which has profound consequences particularly in the field of cryptocurrencies and decentralized autonomous organizations: publicly running contracts can now contain private data. On top of second-generation blockchains like Ethereum, it will be possible to run so-called “autonomous agents” (or, when the agents primarily serve as a voting system between human actors, “decentralized autonomous organizations”) whose code gets executed entirely on the blockchain, and which have the power to maintain a currency balance and send transactions inside the Ethereum system. For example, one might have a contract for a non-profit organization that contains a currency balance, with a rule that the funds can be withdrawn or spent if 67% of the organization’s members agree on the amount and destination to send.

    Unlike Bitcoin’s vaguely similar multisig functionality, the rules can be extremely flexible, for example allowing a maximum of 1% per day to be withdrawn with only 33% consent, or making the organization a for-profit company whose shares are tradable and whose shareholders automatically receive dividends. Up until now it has been thought that such contracts are fundamentally limited – they can only have an effect inside the Ethereum network, and perhaps other systems which deliberately set themselves up to listen to the Ethereum network. With obfuscation, however, there are new possibilities.
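    The simplest rule described above – funds move only when 67% of members agree on the same amount and destination – can be sketched as follows. This is a toy Python model, not contract code; the threshold and the class design are purely illustrative:

    ```python
    class ToyDAO:
        # Minimal sketch of a 67%-approval treasury rule.
        def __init__(self, members, balance):
            self.members = set(members)
            self.balance = balance
            self.approvals = {}   # (amount, destination) -> set of approving members

        def approve(self, member, amount, destination):
            if member not in self.members:
                raise ValueError("not a member")
            key = (amount, destination)
            self.approvals.setdefault(key, set()).add(member)
            # Execute once at least 67% of members back this exact proposal.
            # (Note: with 3 members, 2/3 is 66.7%, just under the bar.)
            if len(self.approvals[key]) * 100 >= 67 * len(self.members) and amount <= self.balance:
                self.balance -= amount
                del self.approvals[key]
                return f"sent {amount} to {destination}"
            return None

    dao = ToyDAO(["alice", "bob", "carol"], balance=100)
    assert dao.approve("alice", 40, "charity") is None
    assert dao.approve("bob", 40, "charity") is None
    assert dao.approve("carol", 40, "charity") == "sent 40 to charity"
    ```

    The flexible rules mentioned below (daily withdrawal caps, tradable shares, dividends) would just be further methods and bookkeeping on the same state.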

    Consider the simplest case: an obfuscated Ethereum contract can contain a private key to an address inside the Bitcoin network, and use that private key to sign Bitcoin transactions when the contract’s conditions are met. Thus, as long as the Ethereum blockchain exists, one can effectively use Ethereum as a sort of controller for money that exists inside of Bitcoin. From there, however, things only get more interesting. Suppose now that you want a decentralized organization to have control of a bank account. With an obfuscated contract, you can have the contract hold the login details to the website of a bank account, and have the contract carry out an entire HTTPS session with the bank, logging in and then authorizing certain transfers. You would need some user to act as an intermediary sending packets between the bank and the contract, but this would be a completely trust-free role, like an internet service provider, and anyone could trivially do it and even receive a reward for the task. Autonomous agents can now also have social networking accounts, accounts on virtual private servers to carry out more heavy-duty computations than what can be done on a blockchain, and pretty much anything that a normal human or proprietary server can.

    Looking Forward

    Thus, we can see that in the next few years decentralized autonomous organizations are potentially going to become much more powerful than they are today. But what are the consequences going to be? In the developed world, the hope is that there will be a massive reduction in the cost of setting up a new business, organization or partnership, and a tool for creating organizations that are much more difficult to corrupt. Much of the time, organizations are bound by rules which are really little more than gentlemen’s agreements in practice, and once some of the organization’s members gain a certain measure of power they gain the ability to twist every interpretation in their favor.

    Up until now, the only partial solution was codifying certain rules into contracts and laws – a solution which has its strengths, but which also has its weaknesses, as laws are numerous and very complicated to navigate without the help of an (often very expensive) professional. With DAOs, there is now also another alternative: making an organization whose organizational bylaws are 100% crystal clear, embedded in mathematical code. Of course, there are many things with definitions that are simply too fuzzy to be mathematically defined; in those cases, we will still need some arbitrators, but their role will be reduced to a limited commodity-like function circumscribed by the contract, rather than having potentially full control over everything.

    In the developing world, however, things will be much more drastic. The developed world has access to a legal system that is at times semi-corrupt, but whose main problems are otherwise simply that it’s too biased toward lawyers and too outdated, bureaucratic and inefficient. The developing world, on the other hand, is plagued by legal systems that are fully corrupt at best, and actively conspiring to pillage their subjects at worst. There, nearly all businesses are gentlemen’s agreements, and opportunities for people to betray each other exist at every step. The mathematically encoded organizational bylaws that DAOs can have are not just an alternative; they may potentially be the first legal system that people have that is actually there to help them. Arbitrators can build up their reputations online, as can organizations themselves. Ultimately, perhaps on-blockchain voting, like that being pioneered by BitCongress, may even form a basis for new experimental governments. If Africa can leapfrog straight from word-of-mouth communications to mobile phones, why not go from tribal legal systems hampered by the interference of local governments straight to DAOs?

    Many will of course be concerned that having uncontrollable entities moving money around is dangerous, as there are considerable possibilities for criminal activity with these kinds of powers. To that, however, one can make two simple rebuttals. First, although these decentralized autonomous organizations will be impossible to shut down, they will certainly be very easy to monitor and track every step of the way. It will be possible to detect when one of these entities makes a transaction, it will be easy to see what its balance and relationships are, and it will be possible to glean a lot of information about its organizational structure if voting is done on the blockchain. Much like Bitcoin, DAOs are likely far too transparent to be practical for much of the underworld; as FinCEN director Jennifer Shasky Calvery has recently said, “cash is probably still the best medium for laundering money”. Second, ultimately DAOs cannot do anything normal organizations cannot do; all they are is a set of voting rules for a group of humans or other human-controlled agents to manage ownership of digital assets. Even if a DAO cannot be shut down, its members certainly can be, just as if they were running a plain old normal organization offline.

    Whatever the dominant applications of this new technology turn out to be, one thing is looking more and more certain: cryptography and distributed consensus are about to make the world a whole lot more interesting.


  • More Thoughts on Scripting and Future-Compatibility | Ethereum Foundation Blog

    My previous post introducing Ethereum Script 2.0 was met with a number of responses, some highly supportive, others suggesting that we switch to their own preferred stack-based / assembly-based / functional paradigm, and offering various specific criticisms that we are looking hard at. Perhaps the strongest criticism this time came from Sergio Damian Lerner, a Bitcoin security researcher, the developer of QixCoin, and someone to whom we are grateful for his analysis of Dagger. Sergio particularly criticizes two aspects of the change: the fee system, which moves away from a simple one-variable design where everything is a fixed multiple of the BASEFEE, and the loss of the crypto opcodes.

    The crypto opcodes are the more important part of Sergio’s argument, and I will handle that issue first. In Ethereum Script 1.0, the opcode set had a collection of opcodes that were specialized around certain cryptographic functions – for example, there was an opcode SHA3, which would take a length and a starting memory index off the stack and then push the SHA3 of the string taken from the desired number of blocks in memory starting from the starting index. There were similar opcodes for SHA256 and RIPEMD160, and there were also crypto opcodes oriented around secp256k1 elliptic curve operations. In ES2, those opcodes are gone. Instead, they are replaced by a fluid system where people will need to write SHA256 in ES manually (in practice, we would offer a commission or bounty for this), and then later on smart interpreters can seamlessly replace the SHA256 ES script with a plain old machine-code (or even hardware) version of SHA256, of the sort that you use when you call SHA256 in C++. From an outside view, ES SHA256 and machine-code SHA256 are indistinguishable; they both compute the same function and therefore make the same transformations to the stack. The only difference is that the latter is hundreds of times faster, giving us the same efficiency as if SHA256 were an opcode. A flexible fee system can then also be implemented to make SHA256 cheaper to accommodate its reduced computation time, ideally making it as cheap as an opcode is now.
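    The “smart interpreter” substitution can be sketched as follows. The script bytes below are a hypothetical placeholder (a real ES implementation of SHA256 would be a long script), and the slow-path interpreter is deliberately left unimplemented:

    ```python
    import hashlib

    # Placeholder standing in for the canonical ES implementation of SHA256.
    ES_SHA256_SCRIPT = b"<es implementation of sha256>"

    # Map from hash-of-script to an equivalent native (machine-code) routine.
    NATIVE_SUBSTITUTES = {
        hashlib.sha256(ES_SHA256_SCRIPT).hexdigest():
            lambda data: hashlib.sha256(data).digest(),
    }

    def run_script(script: bytes, data: bytes) -> bytes:
        key = hashlib.sha256(script).hexdigest()
        if key in NATIVE_SUBSTITUTES:
            # Fast path: a recognized script is replaced by native code that
            # computes the same function, so the result is indistinguishable.
            return NATIVE_SUBSTITUTES[key](data)
        return interpret_es(script, data)  # slow path: run the ES opcodes

    def interpret_es(script: bytes, data: bytes) -> bytes:
        raise NotImplementedError("full ES interpreter omitted from this sketch")
    ```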

    Sergio, however, prefers a different approach: coming with lots of crypto opcodes out of the box, and using hard-forking protocol changes to add new ones if necessary further down the line. He writes:

    First, after 3 years of watching Bitcoin closely I came to understand that a cryptocurrency is not a protocol, nor a contract, nor a computer-network. A cryptocurrency is a community. With the exception of a very few set of constants, such as the money supply function and the global balance, anything can be changed in the future, as long as the change is announced in advance. Bitcoin protocol worked well until now, but we know that in the long term it will face scalability issues and it will need to change accordingly. Short term benefits, such as the simplicity of the protocol and the code base, helped the Bitcoin get worldwide acceptance and network effect. Is the reference code of Bitcoin version 0.8 as simple as the 0.3 version? not at all. Now there are caches and optimizations everywhere to achieve maximum performance and higher DoS security, but no one cares about this (and nobody should). A cryptocurrency is bootstrapped by starting with a simple value proposition that works in the short/mid term.

    This is a point that is often brought up with regard to Bitcoin. However, the more I look at what is actually going on in Bitcoin development, the more I become firmly set in my position that, with the exception of very early-stage cryptographic protocols that are in their infancy and seeing very low practical usage, the argument is absolutely false. There are currently many flaws in Bitcoin that could be fixed if only we had the collective will to do so. To take a few examples:

    1. The 1 MB block size limit. Currently, there is a hard limit that a Bitcoin block cannot have more than 1 MB of transactions in it – a cap of about seven transactions per second. We are starting to brush against this limit already, with about 250 KB in each block, and it’s putting pressure on transaction fees already. In most of Bitcoin’s history, fees were around $0.01, and every time the price rose the default BTC-denominated fee that miners accept was adjusted down. Now, however, the fee is stuck at $0.08, and the developers are not adjusting it down arguably because adjusting the fee back down to $0.01 would cause the number of transactions to brush against the 1 MB limit. Removing this limit, or at the very least setting it to a more appropriate value like 32 MB, is a trivial change; it is only a single number in the source code, and it would clearly do a lot of good in making sure that Bitcoin continues to be used in the medium term. And yet, Bitcoin developers have completely failed to do it.
    2. The OP_CHECKMULTISIG bug. There is a well-known bug in the OP_CHECKMULTISIG operator, used to implement multisig transactions in Bitcoin, where it requires an additional dummy zero as an argument which is simply popped off the stack and not used. This is highly non-intuitive and confusing; when I personally was working on implementing multisig for pybitcointools, I was stuck for days trying to figure out whether the dummy zero was supposed to be at the front or take the place of the missing public key in a 2-of-3 multisig, and whether there are supposed to be two dummy zeroes in a 1-of-3 multisig. Eventually, I figured it out, but I would have figured it out much faster had the operation of the OP_CHECKMULTISIG operator been more intuitive. And yet, the bug has not been fixed.
    3. The bitcoind client. The bitcoind client is well-known for being a very unwieldy and non-modular contraption; in fact, the problem is so serious that everyone looking to build a bitcoind alternative that is more scalable and enterprise-friendly is not using bitcoind at all, instead starting from scratch. This is not a core protocol issue, and theoretically changing the bitcoind client need not involve any hard-forking changes at all, but the needed reforms are still not being done.
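    To illustrate the dummy-element quirk from the second point, here is a simplified Python model of OP_CHECKMULTISIG’s stack behavior. It is a sketch only: real validation checks ECDSA signatures against transaction data, whereas here a toy equality check stands in for signature verification:

    ```python
    def op_checkmultisig(stack, check_sig):
        # Simplified model of Bitcoin's OP_CHECKMULTISIG stack discipline.
        n = stack.pop()
        pubkeys = [stack.pop() for _ in range(n)]
        m = stack.pop()
        sigs = [stack.pop() for _ in range(m)]
        stack.pop()  # the infamous extra dummy element: popped and ignored
        # Signatures must match the pubkeys in order, each pubkey used at most once.
        i = 0
        for sig in sigs:
            while i < len(pubkeys) and not check_sig(sig, pubkeys[i]):
                i += 1
            if i == len(pubkeys):
                return False
            i += 1
        return True

    # Toy "signature check": a sig matches a pubkey if the strings are equal.
    check = lambda sig, pk: sig == pk

    # 2-of-3 spend: without the leading dummy element, evaluation underflows.
    stack = ["dummy", "k2", "k1", 2, "k3", "k2", "k1", 3]
    assert op_checkmultisig(stack, check) is True
    ```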

    All of these problems are not there because the Bitcoin developers are incompetent. They are not; in fact, they are very skilled programmers with deep knowledge of cryptography and of the database and networking issues inherent in cryptocurrency client design. The problems are there because the Bitcoin developers realize very well that Bitcoin is a 10-billion-dollar train hurtling along at 400 kilometers per hour, and if they try to change the engine midway through and even the tiniest bolt comes loose the whole thing could come crashing to a halt. A change as simple as swapping the database back in March 2013 almost did. This is why in my opinion it is irresponsible to release a poorly designed, non-future-proof protocol and simply say that the protocol can be updated in due time. On the contrary, the protocol must be designed to have an appropriate degree of flexibility from the start, so that changes can be made automatically by consensus, without needing to update any software.

    Now, to address Sergio’s second issue, his main qualm with modifiable fees: if fees can go up and down, it becomes very difficult for contracts to set their own fees, and if a fee goes up unexpectedly then that may open up a vulnerability through which an attacker may even be able to force a contract to go bankrupt. I must thank Sergio for making this point; it is something that I had not yet sufficiently considered, and we will need to think carefully about when making our design. However, his solution, manual protocol updates, is arguably no better; protocol updates that change fee structures can expose new economic vulnerabilities in contracts as well, and they are arguably even harder to compensate for because there are absolutely no restrictions on what content manual protocol updates can contain.

    So what can we do? First of all, there are many intermediate solutions between Sergio’s approach – coming with a limited fixed set of opcodes that can be added to only with a hard-forking protocol change – and the idea I provided in the ES2 blogpost of having miners vote on fluidly changing fees for every script. One approach might be to make the voting system more discrete, so that there would be a hard line between a script having to pay 100% fees and a script being “promoted” to being an opcode that only needs to pay a 20x CRYPTOFEE. This could be done via some combination of usage counting, miner voting, ether holder voting or other mechanisms. This is essentially a built-in mechanism for doing hardforks that does not technically require any source code updates to apply, making it much more fluid and non-disruptive than a manual hardfork approach. Second, it is important to point out once again that the ability to efficiently do strong crypto is not gone, even from the genesis block; when we launch Ethereum, we will create a SHA256 contract, a SHA3 contract, etc., and “premine” them into pseudo-opcode status right from the start. So Ethereum will come with batteries included; the difference is that the batteries will be included in a way that seamlessly allows for the inclusion of more batteries in the future.
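    The discrete promotion idea might look something like this in miniature. All names, thresholds and fee numbers here are invented for illustration; nothing like this has been specified:

    ```python
    # Scripts promoted to pseudo-opcode status pay a flat opcode-style fee
    # instead of full per-operation fees. Promotion happens via miner votes.
    PROMOTED = set()   # hashes of promoted scripts
    VOTES = {}         # script hash -> accumulated miner votes

    VOTE_THRESHOLD = 1000   # invented threshold
    CRYPTOFEE = 5           # invented fee unit

    def vote(script_hash, miner_votes=1):
        VOTES[script_hash] = VOTES.get(script_hash, 0) + miner_votes
        if VOTES[script_hash] >= VOTE_THRESHOLD:
            PROMOTED.add(script_hash)

    def fee(script_hash, full_execution_fee):
        # A promoted script pays the flat 20x CRYPTOFEE mentioned above;
        # everything else pays the full cost of interpreting its opcodes.
        return 20 * CRYPTOFEE if script_hash in PROMOTED else full_execution_fee

    vote("sha256-script", miner_votes=1000)
    assert fee("sha256-script", full_execution_fee=5000) == 100
    assert fee("unknown-script", full_execution_fee=5000) == 5000
    ```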

    But it is important to note that I consider this ability to add in efficient optimized crypto operations in the future to be mandatory. Theoretically, it is possible to have a “Zerocoin” contract inside of Ethereum, or a contract using cryptographic proofs of computation (SCIP) and fully homomorphic encryption so you can actually use Ethereum as the “decentralized Amazon EC2 instance” for cloud computing that many people now incorrectly believe it to be. Once quantum computing comes out, we might need to move to contracts that rely on NTRU; once SHA4 or SHA5 comes out, we might need to move to contracts that rely on them. Once obfuscation technology matures, contracts will want to rely on that to store private data. But in order for all of that to be possible with anything less than a $30 fee per transaction, the underlying cryptography would need to be implemented in C++ or machine code, and there would need to be a fee structure that reduces the fee for the operations appropriately once the optimizations have been made. This is a challenge to which I do not see any easy answers, and comments and suggestions are very much welcome.


  • Introducing Ethereum Script 2.0 | Ethereum Foundation Blog

    This post will provide the groundwork for a major rework of the Ethereum scripting language, which will substantially modify the way ES works although still keeping many of the core components working in the exact same way. The rework is necessary as a result of multiple concerns which have been raised about the way the language is currently designed, primarily in the areas of simplicity, optimization, efficiency and future-compatibility, although it does also have some side-benefits such as improved function support. This is not the last iteration of ES2; there will likely be many incremental structural improvements that can be made to the spec, but it does serve as a strong starting point.

    As an important clarification, this rework will have little effect on the Ethereum CLL, the stripped-down-Python-like language in which you can write Namecoin in five lines of code. The CLL will still stay the same as it is now. We will need to make updates to the compiler (an alpha version of which is now available in Python at http://github.com/ethereum/compiler or as a friendly web interface at http://162.218.208.138:3000) in order to make sure the CLL continues to compile to new versions of ES, but you as an Ethereum contract developer working in E-CLL should not need to see any changes at all.

    Problems with ES1

    Over the last month of working with ES1, several problems with the language’s design have become apparent. In no particular order, they are as follows:

    • Too many opcodes – looking at the specification as it appears today, ES1 now has exactly 50 opcodes – fewer than the 80 opcodes found in Bitcoin Script, but still far more than the theoretically minimal 4-7 opcodes needed to have a functional Turing-complete scripting language. Some of those opcodes are necessary because we want the scripting language to have access to a lot of data – for example, the transaction value, the transaction source, the transaction data, the previous block hash, etc; like it or not, there needs to be a certain degree of complexity in the language definition to provide all of these hooks. Other opcodes, however, are excessive and complex; as an example, consider the current definition of SHA256 or ECVERIFY. With the way the language is designed right now, that is necessary for efficiency; otherwise, one would have to write SHA256 in Ethereum script by hand, which might take many thousands of BASEFEEs. But ideally, there should be some way of eliminating much of the bloat.
    • Not future-compatible – the existence of the special crypto opcodes does make ES1 much more efficient for certain specialized applications; thanks to them, computing SHA3 takes only 40x BASEFEE instead of the many thousands of BASEFEEs that it would take if SHA3 was implemented in ES directly; same with SHA256, RIPEMD160 and secp256k1 elliptic curve operations. However, it is absolutely not future-compatible. Even though these existing crypto operations will only take 40x BASEFEE, SHA4 will take several thousand BASEFEEs, as will ed25519 signatures, the quantum-proof NTRU, SCIP and Zerocoin math, and any other constructs that will appear over the coming years. There should be some natural mechanism for folding such innovations in over time.
    • Not deduplication-friendly – the Ethereum blockchain is likely to become extremely bloated over time, especially with every contract writing its own code even when the bulk of the code will likely be thousands of people trying to do the exact same thing. Ideally, all instances where code is written twice should pass through some process of deduplication, where the code is only stored once and only a pointer to it is stored the second time. In theory, Ethereum’s Patricia trees do this already. In practice, however, code needs to be in exactly the same place in order for this to happen, and the existence of jumps means that it is often difficult to arbitrarily copy/paste code without making appropriate modifications. Furthermore, there is no incentivization mechanism to convince people to reuse existing code.
    • Not optimization-friendly – this is a very similar criterion to future-compatibility and deduplication-friendliness in some ways. However, here optimization refers to a more automatic process of detecting bits of code that are reused many times, and replacing them with memoized or compiled machine code versions.

    Beginnings of a Solution: Deduplication

    The first issue that we can handle is that of deduplication. As described above, Ethereum Patricia trees provide deduplication already, but the problem is that achieving the full benefits of the deduplication requires the code to be formatted in a very special way. For example, if the code in contract A from index 0 to index 15 is the same as the code in contract B from index 48 to index 63, then deduplication happens. However, if the code in contract B is offset at all modulo 16 (e.g. from index 49 to index 64), then no deduplication takes place at all. In order to remedy this, there is one relatively simple solution: move from a dumb hexary Patricia tree to a more semantically oriented data structure. That is, the tree represented in the database should mirror the abstract syntax tree of the code.
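    The alignment sensitivity can be demonstrated with a toy model that treats code as fixed 16-byte chunks, a simplification of how a hexary Patricia tree ends up sharing storage:

    ```python
    def chunks16(code: bytes) -> set:
        # Split code into 16-byte-aligned chunks; sharing between contracts
        # only happens on identical aligned chunks.
        return {code[i:i+16] for i in range(0, len(code), 16)}

    shared = b"0123456789abcdef"           # a 16-byte code fragment
    a = shared + b"X" * 16                 # fragment aligned at offset 0
    b_aligned = b"Y" * 16 + shared         # aligned at offset 16: dedup works
    b_offset = b"Y" * 17 + shared          # misaligned by one byte: no dedup

    assert chunks16(a) & chunks16(b_aligned)        # shared chunk found
    assert not (chunks16(a) & chunks16(b_offset))   # alignment breaks sharing
    ```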

    To understand what I am saying here, consider some existing ES1 code:

    TXVALUE PUSH 25 PUSH 10 PUSH 18 EXP MUL LT NOT PUSH 14 JMPI STOP PUSH 0 TXDATA SLOAD NOT PUSH 0 TXDATA PUSH 1000 LT NOT MUL NOT NOT PUSH 32 JMPI STOP PUSH 1 TXDATA PUSH 0 TXDATA SSTORE

    In the Patricia tree, it looks like this:

    (
    (TXVALUE PUSH 25 PUSH 10 PUSH 18 EXP MUL LT NOT PUSH 14 JMPI STOP PUSH)
    (0 TXDATA SLOAD NOT PUSH 0 TXDATA PUSH 1000 LT NOT MUL NOT NOT PUSH 32)
    (JMPI STOP PUSH 1 TXDATA PUSH 0 TXDATA SSTORE)
    )

    And here is what the code looks like structurally. This is easiest to show by simply giving the E-CLL it was compiled from:

    if tx.value < 25 * 10^18:
        stop
    if contract.storage[tx.data[0]] or tx.data[0] < 1000:
        stop
    contract.storage[tx.data[0]] = tx.data[1]

    No relation at all. Thus, if another contract wanted to use some semantic sub-component of this code, it would almost certainly have to re-implement the whole thing. However, if the tree structure looked somewhat more like this:

    (
    (
    IF
    (TXVALUE PUSH 25 PUSH 10 PUSH 18 EXP MUL LT NOT)
    (STOP)
    )
    (
    IF
    (PUSH 0 TXDATA SLOAD NOT PUSH 0 TXDATA PUSH 1000 LT NOT MUL NOT)
    (STOP)
    )
    ( PUSH 1 TXDATA PUSH 0 TXDATA SSTORE )
    )

    Then if someone wanted to reuse some particular piece of code they easily could. Note that this is just an illustrative example; in this particular case it probably does not make sense to deduplicate since pointers need to be at least 20 bytes long to be cryptographically secure, but in the case of larger scripts where an inner clause might contain a few thousand opcodes it makes perfect sense.
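    The idea of storing the syntax tree content-addressed, so that identical subtrees are stored exactly once, can be sketched like this. It is a toy model: the real tree would use RLP and Patricia-tree nodes rather than JSON, and the node names are illustrative:

    ```python
    import hashlib, json

    store = {}   # hash -> node (children replaced by their hashes)

    def node_hash(node) -> str:
        # Hash a node deterministically; identical subtrees get identical hashes.
        return hashlib.sha256(json.dumps(node, sort_keys=True).encode()).hexdigest()

    def insert(node) -> str:
        # Store a tree content-addressed: each child is replaced by its hash,
        # so a subtree shared by two contracts collapses to a single entry.
        if isinstance(node, list):
            node = [insert(child) for child in node]
        h = node_hash(node)
        store[h] = node
        return h

    # Two scripts sharing the same condition subtree:
    tree1 = ["IF", ["TXVALUE", "PUSH 25", "LT", "NOT"], ["STOP"]]
    tree2 = ["IF", ["TXVALUE", "PUSH 25", "LT", "NOT"], ["SSTORE"]]
    h1, h2 = insert(tree1), insert(tree2)
    # Both stored trees point at the same hash for the shared condition.
    ```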

    Immutability and Purely Functional Code

    Another modification is that code should be immutable, and thus separate from data; if multiple contracts rely on the same code, the contract that originally controls that code should not have the ability to sneak in changes later on. The pointer to which code a running contract should start with, however, should be mutable.

    A third common optimization-friendly technique is to make a programming language purely functional, so that functions cannot have any side effects outside of themselves, with return values being the only exception. For example, the following is a pure function:

    def factorial(n):
        prod = 1
        for i in range(1, n+1):
            prod *= i
        return prod

    However, this is not:

    x = 0
    def next_integer():
        global x  # mutates module-level state, hence not pure
        x += 1
        return x

    And this most certainly is not:

    import os
    def happy_fluffy_function():
        bal = float(os.popen('bitcoind getbalance').read())
        os.popen('bitcoind sendtoaddress 1JwSSubhmg6iPtRjtyqhUYYH7bZg3Lfy1T %.8f' % (bal - 0.0001))
        os.popen('rm -rf ~')

    Ethereum cannot be purely functional, since Ethereum contracts do necessarily have state – a contract can modify its long-term storage and it can send transactions. However, Ethereum script is a unique situation because Ethereum is not just a scripting environment – it is an incentivized scripting environment. Thus, we can allow operations like modifying storage and sending transactions, but discourage them with fees, and thus ensure that most script components are purely functional simply to cut costs, even while allowing non-purity in those situations where it makes sense.

    What is interesting is that these two changes work together. The immutability of code also makes it easier to construct a restricted subset of the scripting language which is functional, and then such functional code could be deduplicated and optimized at will.

    Ethereum Script 2.0

    So, what’s going to change? First of all, the basic stack-machine concept is going to roughly stay the same. The main data structure of the system will continue to be the stack, and most of your beloved opcodes will not change significantly. The only differences in the stack machine are the following:

    1. Crypto opcodes are removed. Instead, someone will have to write SHA256, RIPEMD160, SHA3 and ECC in ES as a formality, and our interpreters can include an optimization replacing them with good old-fashioned machine-code hashes and signatures right from the start.
    2. Memory is removed. Instead, we are bringing back DUPN (grabs the next value in the code, say N, and pushes a copy of the item N items down the stack to the top of the stack) and SWAPN (swaps the top item and the nth item).
    3. JMP and JMPI are removed.
    4. RUN, IF, WHILE and SETROOT are added (see below for further definition)

    Another change is in how transactions are serialized. Now, transactions appear as follows:

    • SEND: [ 0, nonce, to, value, [ data0 … datan ], v, r, s ]
    • MKCODE: [ 1, nonce, [ data0 … datan ], v, r, s ]
    • MKCONTRACT: [ 2, nonce, coderoot, v, r, s ]

    The address of a contract is defined by the last 20 bytes of the hash of the transaction that produced it, as before. Additionally, the nonce no longer needs to be equal to the nonce stored in the account balance representation; it only needs to be equal to or greater than that value.

    Now, suppose that you wanted to make a simple contract that just keeps track of how much ether it received from various addresses. In E-CLL that’s:

    contract.storage[tx.sender] = tx.value

    In ES2, instantiating this contract now takes two transactions:

    [ 1, 0, [ TXVALUE TXSENDER SSTORE ], v, r, s]

    [ 2, 1, 761fd7f977e42780e893ea44484c4b64492d8383, v, r, s ]

    What happens here is that the first transaction instantiates a code node in the Patricia tree. The hash sha3(rlp.encode([ TXVALUE TXSENDER SSTORE ]))[12:] is 761fd7f977e42780e893ea44484c4b64492d8383, so that is the “address” where the code node is stored. The second transaction basically says to initialize a contract whose code is located at that code node. Thus, when a transaction gets sent to the contract, that is the code that will run.
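The address derivation in this paragraph can be sketched as follows. This is a hedged illustration: `rlp_encode` below is a minimal short-form RLP encoder, and Python's `hashlib.sha3_256` (the NIST variant) stands in for the Keccak-256 function the post calls sha3, so the address computed here will not match 761fd7f977e42780e893ea44484c4b64492d8383; only the shape of the derivation is the point.

```python
import hashlib

def rlp_encode(item):
    """Minimal RLP encoder handling only the short-form cases
    (payloads under 56 bytes), which is all this example needs."""
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item                      # single low byte: as-is
        assert len(item) < 56
        return bytes([0x80 + len(item)]) + item
    payload = b"".join(rlp_encode(x) for x in item)
    assert len(payload) < 56
    return bytes([0xC0 + len(payload)]) + payload

def code_address(code):
    """Last 20 bytes of the hash of the RLP-encoded opcode list.
    NOTE: NIST SHA3-256 here, not the Keccak-256 the post means,
    so the hex value differs from the text's example."""
    return hashlib.sha3_256(rlp_encode(code)).digest()[12:]

addr = code_address([b"TXVALUE", b"TXSENDER", b"SSTORE"])
assert len(addr) == 20
```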

    Now, we come to the interesting part: the definitions of IF and RUN. The explanation is simple: IF loads the next two values in the code, then pops the top item from the stack. If the top item is nonzero, then it runs the code item at the first code value. Otherwise, it runs the code item at the second code value. WHILE is similar, but instead loads only one code value and keeps running the code while the top item on the stack is nonzero. Finally, RUN just takes one code value and runs the code without asking for anything. And that’s all you need to know. Here is one way to do a Namecoin contract in new Ethereum script:

    A: [ TXVALUE PUSH 25 PUSH 10 PUSH 18 EXP MUL LT ]
    B: [ PUSH 0 TXDATA SLOAD NOT PUSH 0 TXDATA PUSH 100 LT NOT MUL NOT ]
    Z: [ STOP ]
    Y: [ ]
    C: [ PUSH 1 TXDATA PUSH 0 TXDATA SSTORE ]
    M: [ RUN A IF Z Y RUN B IF Z Y RUN C ]

    The contract would then have its root be M. But wait, you might say, this makes the interpreter recursive. As it turns out, however, it does not – you can simulate the recursion using a data structure called a “continuation stack”. Here’s what the full stack trace of that code might look like, assuming the transaction is [ X, Y ] sending V, where X > 100, V > 10^18 * 25 and contract.storage[X] is not set:

    { stack: [], cstack: [[M, 0]], op: RUN }
    { stack: [], cstack: [[M, 2], [A, 0]], op: TXVALUE }
    { stack: [V], cstack: [[M, 2], [A, 1]], op: PUSH }
    { stack: [V, 25], cstack: [[M, 2], [A, 3]], op: PUSH }
    { stack: [V, 25, 10], cstack: [[M, 2], [A, 5]], op: PUSH }
    { stack: [V, 25, 10, 18], cstack: [[M, 2], [A, 7]], op: EXP }
    { stack: [V, 25, 10^18], cstack: [[M, 2], [A, 8]], op: MUL }
    { stack: [V, 25*10^18], cstack: [[M, 2], [A, 9]], op: LT }
    { stack: [0], cstack: [[M, 2], [A, 10]], op: NULL }
    { stack: [0], cstack: [[M, 2]], op: IF }
    { stack: [0], cstack: [[M, 5], [Y, 0]], op: NULL }

    { stack: [0], cstack: [[M, 5]], op: RUN }
    { stack: [], cstack: [[M, 7], [B, 0]], op: PUSH }
    { stack: [0], cstack: [[M, 7], [B, 2]], op: TXDATA }
    { stack: [X], cstack: [[M, 7], [B, 3]], op: SLOAD }
    { stack: [0], cstack: [[M, 7], [B, 4]], op: NOT }
    { stack: [1], cstack: [[M, 7], [B, 5]], op: PUSH }
    { stack: [1, 0], cstack: [[M, 7], [B, 7]], op: TXDATA }
    { stack: [1, X], cstack: [[M, 7], [B, 8]], op: PUSH }
    { stack: [1, X, 100], cstack: [[M, 7], [B, 10]], op: LT }
    { stack: [1, 0], cstack: [[M, 7], [B, 11]], op: NOT }
    { stack: [1, 1], cstack: [[M, 7], [B, 12]], op: MUL }
    { stack: [1], cstack: [[M, 7], [B, 13]], op: NOT }
    { stack: [1], cstack: [[M, 7], [B, 14]], op: NULL }
    { stack: [0], cstack: [[M, 7]], op: IF }
    { stack: [0], cstack: [[M, 9], [Y, 0]], op: NULL }

    { stack: [], cstack: [[M, 10]], op: RUN }
    { stack: [], cstack: [[M, 12], [C, 0]], op: PUSH }
    { stack: [1], cstack: [[M, 12], [C, 2]], op: TXDATA }
    { stack: [Y], cstack: [[M, 12], [C, 3]], op: PUSH }
    { stack: [Y,0], cstack: [[M, 12], [C, 5]], op: TXDATA }
    { stack: [Y,X], cstack: [[M, 12], [C, 6]], op: SSTORE }
    { stack: [], cstack: [[M, 12], [C, 7]], op: NULL }
    { stack: [], cstack: [[M, 12]], op: NULL }
    { stack: [], cstack: [], op: NULL }

    And that’s all there is to it. Cumbersome to read, but actually quite easy to implement in any statically or dynamically typed programming language, or perhaps even ultimately in an ASIC.
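For concreteness, here is a minimal Python sketch of such a continuation-stack interpreter, covering just the opcodes the Namecoin example uses. String labels stand in for 20-byte code pointers, and the opcode semantics are inferred from the trace above; this is an illustration, not actual client code.

```python
def run(code, root, tx_value, tx_data, storage):
    """Iterative interpreter: `code` maps pointers (labels) to opcode
    lists; IF and RUN read their code pointers inline, and nesting is
    handled with an explicit continuation stack, not recursion."""
    stack, cstack = [], [[root, 0]]
    while cstack:
        label, i = cstack[-1]
        body = code[label]
        if i >= len(body):              # frame finished: pop back out
            cstack.pop()
            continue
        cstack[-1][1] = i + 1
        op = body[i]
        if op == "PUSH":                # next code value is a literal
            stack.append(body[i + 1]); cstack[-1][1] = i + 2
        elif op == "RUN":               # next code value is a pointer
            cstack[-1][1] = i + 2
            cstack.append([body[i + 1], 0])
        elif op == "IF":                # two pointers: then / else
            cstack[-1][1] = i + 3
            dest = body[i + 1] if stack.pop() else body[i + 2]
            cstack.append([dest, 0])
        elif op == "STOP":
            return
        elif op == "TXVALUE":
            stack.append(tx_value)
        elif op == "TXDATA":            # pop index, push tx.data[index]
            stack.append(tx_data[stack.pop()])
        elif op == "SLOAD":
            stack.append(storage.get(stack.pop(), 0))
        elif op == "SSTORE":            # pop key, then value
            key, val = stack.pop(), stack.pop()
            storage[key] = val
        elif op == "NOT":
            stack.append(0 if stack.pop() else 1)
        elif op == "LT":
            b, a = stack.pop(), stack.pop()
            stack.append(1 if a < b else 0)
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
        elif op == "EXP":
            e, b = stack.pop(), stack.pop()
            stack.append(b ** e)

# The Namecoin-style contract from the text:
code = {
    "A": ["TXVALUE", "PUSH", 25, "PUSH", 10, "PUSH", 18, "EXP", "MUL", "LT"],
    "B": ["PUSH", 0, "TXDATA", "SLOAD", "NOT",
          "PUSH", 0, "TXDATA", "PUSH", 100, "LT", "NOT", "MUL", "NOT"],
    "Z": ["STOP"], "Y": [],
    "C": ["PUSH", 1, "TXDATA", "PUSH", 0, "TXDATA", "SSTORE"],
    "M": ["RUN", "A", "IF", "Z", "Y", "RUN", "B", "IF", "Z", "Y", "RUN", "C"],
}
storage = {}
run(code, "M", 30 * 10**18, [12345, 67890], storage)  # registers 12345
```

Sending enough value with an unregistered name above 100 ends with `storage[12345] = 67890`, matching the trace; sending too little value hits Z's STOP and leaves storage untouched.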

    Optimizations

    In the above design, there is still one major area where optimizations can be made: making the references compact. What the clear and simple style of the above contract hid is that those pointers to A, B, C, M and Z aren’t just compact single letters; they are 20-byte hashes. From an efficiency standpoint, what we just did is thus actually substantially worse than what we had before, at least from the point of view of special cases where code is not nearly-duplicated millions of times. Also, there is still no incentive for people writing contracts to write their code in such a way that other programmers later on can optimize; if I wanted to code the above in a way that would minimize fees, I would just put A, B and C into the contract directly rather than separating them out into functions. There are two possible solutions:

    1. Instead of using H(x) = SHA3(rlp.encode(x))[12:], use H(x) = SHA3(rlp.encode(x))[12:] if len(rlp.encode(x)) >= 20 else x. To summarize, if something is less than 20 bytes long, we include it directly.
    2. A concept of “libraries”. The idea behind libraries is that a group of a few scripts can be published together, in a format [ [ … code … ], [ … code … ], … ], and these scripts can internally refer to each other with their indices in the list alone. This completely alleviates the problem, but at some cost of harming deduplication, since sub-codes may need to be stored twice. Some intelligent thought into exactly how to improve on this concept to provide both deduplication and reference efficiency will be required; perhaps one solution would be for the library to store a list of hashes, and then for the continuation stack to store [ lib, libIndex, codeIndex ] instead of [ hash, index ].
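The first option can be expressed directly. A sketch, applied to already-serialized code bytes so the RLP step can be skipped, and using NIST SHA3-256 as a stand-in for the Keccak-style sha3 the post refers to:

```python
import hashlib

def sha3(data: bytes) -> bytes:
    # Stand-in hash; the post's sha3 means Keccak-256, which differs
    # from Python's NIST sha3_256, but the inlining rule is identical.
    return hashlib.sha3_256(data).digest()

def pointer(encoded: bytes) -> bytes:
    """Option 1 above, on serialized code: hash down to a 20-byte
    pointer only when the encoding is at least 20 bytes; anything
    shorter is embedded directly instead of referenced."""
    return sha3(encoded)[12:] if len(encoded) >= 20 else encoded

assert pointer(b"short code") == b"short code"   # inlined verbatim
assert len(pointer(b"x" * 100)) == 20            # hashed to a pointer
```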

    Other optimizations are likely possible. For example, one important weakness of the design described above is that it does not support recursion, offering only while loops to provide Turing-completeness. It might seem to, since you can call any function, but if you actually try to implement recursion in ES2 as described above you soon notice that it would require finding the fixed point of an iterated hash (ie. finding x such that H(a + H( c + … H(x) … + d) + b) = x), a problem which is generally assumed to be cryptographically impossible. The “library” concept described above does actually fix this, at least internally to one library; ideally, a more complete solution would exist, although it is not necessary. Finally, some research should go into the question of making functions first-class; this basically means changing the IF and RUN opcodes to pull the destination from the stack rather than from fixed code. This may be a major usability improvement, since you can then code higher-order functions that take functions as arguments, like map, but it may also be harmful from an optimization standpoint, since code becomes harder to analyze and determine whether or not a given computation is purely functional.

    Fees

    Finally, there is one last question to be resolved. The primary purposes of ES2 as described above are twofold: deduplication and optimization. However, optimizations by themselves are not enough; in order for people to actually benefit from the optimizations, and to be incentivized to code in patterns that are optimization-friendly, we need to have a fee structure that supports this. From a deduplication perspective, we already have this; if you are the second person to create a Namecoin-like contract, and you want to use A, you can just link to A without paying the fee to instantiate it yourself. However, from an optimization perspective, we are far from done. If we create SHA3 in ES, and then have the interpreter intelligently replace it with a contract, then the interpreter does get much faster, but the person using SHA3 still needs to pay thousands of BASEFEEs. Thus, we need a mechanism for reducing the fee of specific computations that have been heavily optimized.

    Our current strategy with fees is to have miners or ether holders vote on the basefee, and in theory this system can easily be expanded to include the option to vote on reduced fees for specific scripts. However, this does need to be done intelligently. For example, EXP can be replaced with a contract of the following form:

    PUSH 1 SWAPN 3 SWAP WHILE ( DUP PUSH 2 MOD IF ( DUPN 2 ) ( PUSH 1 ) DUPN 4 MUL SWAPN 4 POP 2 DIV SWAP DUP MUL SWAP ) POP

    However, the runtime of this contract depends on the exponent – with an exponent in the range [4,7] the while loop runs three times, in the range [1024, 2047] the while loop runs eleven times, and in the range [2^255, 2^256-1] it runs 256 times. Thus, it would be highly dangerous to have a mechanism which can be used to simply set a fixed fee for any contract, since that can be exploited to, say, impose a fixed fee for a contract computing the Ackermann function (a function notorious in the world of mathematics because the cost of computing or writing down its output grows so fast that with inputs as low as 5 it becomes larger than the size of the universe). Thus, a percentage discount system, where some contracts can enjoy half as large a basefee, may make more sense. Ultimately, however, a contract cannot be optimized down to below the cost of calling the optimized code, so we may want to have a fixed fee component. A compromise approach might be to have a discount system, but combined with a rule that no contract can have its fee reduced below 20x the BASEFEE.
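The opcode sequence above is binary exponentiation (square-and-multiply); a Python sketch with a loop counter makes the input-dependent cost concrete:

```python
def exp_with_count(base, exponent):
    """Square-and-multiply, mirroring the WHILE-loop contract: the
    loop runs once per bit of the exponent, so a fixed fee for this
    code would wildly mis-price different inputs."""
    result, loops = 1, 0
    while exponent:
        if exponent % 2:
            result *= base
        base *= base
        exponent //= 2
        loops += 1
    return result, loops

assert exp_with_count(3, 5) == (243, 3)     # exponents 4..7: 3 loops
assert exp_with_count(2, 1024)[1] == 11     # 1024..2047: 11 loops
```

For a 256-bit exponent the loop runs 256 times, which is exactly why a flat per-contract fee would be exploitable.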

    So how would fee voting work? One approach would be to store the discount of a code item alongside that code item’s code, as a number from 1 to 2^32, where 2^32 represents no discount at all and 1 represents the highest discounting level of 4294967296x (it may be prudent to set the maximum at 65536x instead for safety). Miners would be authorized to make special “discount transactions” changing the discounting number of any code item by a maximum of 1/65536x of its previous value. With such a system, it would take about 40000 blocks or about one month to halve the fee of any given script, a sufficient level of friction to prevent mining attacks and give everyone a chance to upgrade to new clients with more advanced optimizers while still making it possible to update fees as required to ensure future-compatibility.
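A quick sanity check on that friction estimate, assuming one maximal discount transaction per block, each multiplying the fee by (1 - 1/65536):

```python
import math

# Consecutive maximal discounts needed to halve a fee:
blocks_to_halve = math.log(2) / -math.log(1 - 1 / 65536)
# Comes out to roughly 45,000 blocks, the same order of magnitude as
# the post's "about 40000 blocks or about one month" figure.
assert 45000 < blocks_to_halve < 46000
```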

    Note that the above description is not clean, and is still very much not fleshed out; a lot of care will need to be taken in making it maximally elegant and easy to implement. An important point is that optimizers will likely end up replacing entire swaths of ES2 code blocks with more efficient machine code, but under the system described above they will still need to pay attention to ES2 code blocks in order to determine what the fee is. One solution is to have a miner policy offering discounts only to contracts which maintain exactly the same fee when run regardless of their input; perhaps other solutions exist as well. However, one thing is clear: the problem is not an easy one.


  • On Transaction Fees, And The Fallacy of Market-Based Solutions | Ethereum Foundation Blog

    On Transaction Fees, And The Fallacy of Market-Based Solutions | Ethereum Foundation Blog


    Of all the parts of the Ethereum protocol, aside from the mining function the fee structure is perhaps the least set in stone. The current values, with one crypto operation taking 20 base fees, a new transaction taking 100 base fees, etc, are little more than semi-educated guesses, and harder data on exactly how much computational power a database read, an arithmetic operation and a hash actually take will certainly give us much better estimates on what exactly the ratios between the different computational fees should be. The other part of the question, that of exactly how much the base fee should be, is even more difficult to figure out; we have still not decided whether we want to target a certain block size, a certain USD-denominated level, or some combination of these factors, and it is very difficult to say whether a base fee of $0.00001 or a base fee of $0.001 would be more appropriate. Ultimately, what is becoming more and more clear to us is that some kind of flexible fee system, that allows consensus-based human intervention after the fact, would be best for the project.

    When many people coming from Bitcoin see this problem, however, they wonder why we are having such a hard time with this issue when Bitcoin already has a ready-made solution: make the fees voluntary and market-based. In the Bitcoin protocol, there are no mandatory transaction fees; even an extremely large and computationally arduous transaction can get in with a zero fee, and it is up to the miners to determine what fees they require. The lower a transaction’s fee, the longer it takes for the transaction to find a miner that will let it in, and those who want faster confirmations can pay more. At some point, an equilibrium should be reached. Problem solved. So why not here?

    The reality, however, is that in Bitcoin the transaction fee problem is very far from “solved”. The system as described above already has a serious vulnerability: miners pay no fees, so a miner can choke the entire network with an extremely large block. In fact, this problem is so serious that Satoshi chose to fix it with the ugliest possible patch: a maximum block size limit of 1 MB, or about 7 transactions per second. Now, without the immensely hard-fought and politically laden debate that necessarily accompanies any “hard-forking” protocol change, Bitcoin simply cannot organically adapt to handle anything more than the 7 tx/sec limit that Satoshi originally placed.

    And that’s Bitcoin. In Ethereum, the issue is even more problematic due to Turing-completeness. In Bitcoin, one can construct a mathematical proof that a transaction N bytes long will not take more than k*N time to verify for some constant k. In Ethereum, one can construct a transaction in less than 150 bytes that, absent fees, will run forever:

    [ TO, VALUE, [ PUSH, 0, JMP ], v, r, s ]

    In case you do not understand that, it’s the equivalent of 10: DO_NOTHING, 20: GOTO 10; an infinite loop. And as soon as a miner publishes a block that includes that transaction, the entire network will freeze. In fact, thanks to the well-known impossibility of the halting problem, it is not even possible to construct a filter to weed out infinite-looping scripts.

    Thus, computational attacks on Ethereum are trivial, and even more restrictions must be placed in order to ensure that Ethereum remains a workable platform. But wait, you might say, why not just take the 1 MB limit, and convert it into a 1 million x base fee limit? One can even make the system more future-proof by replacing a hard cap with a floating cap of 100 times the moving average of the last 10000 blocks. At this point, we need to get deeper into the economics and try to understand what “market-based fees” are all about.

    Crypto, Meet Pigou

    In general terms, an idealized market, or at least one specific subset of a market, can be defined as follows. There exists a set of sellers, S[1] … S[n], who are interested in selling a particular resource, and where seller S[i] incurs a cost c[i] from giving up that resource. We can say c[1] < c[2] < … < c[n] for simplicity. Similarly, there exist some buyers, B[1] … B[n], who are interested in gaining a particular resource and receive a gain g[i], where g[1] > g[2] > … > g[n]. Then, an order matching process happens as follows. First, one locates the last k where g[k] > c[k]. Then, one picks a price between those two values, say at p = (g[k] + c[k])/2, and each S[i] and B[i] for i up to k make a trade, where S[i] gives the resource to B[i] and B[i] pays p to S[i]. All parties benefit, and the benefit is the maximum possible; if S[k+1] and B[k+1] also made a transaction, we would have c[k+1] > g[k+1], so the transaction would actually have negative net value to society. Fortunately, it is in everybody’s interest to make sure that they do not participate in unfavorable trades.
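The matching process just described, as a small Python sketch (lists pre-sorted as in the text; the names are illustrative):

```python
def match_market(costs, gains):
    """Idealized matching: seller costs sorted ascending, buyer gains
    sorted descending. Trades happen for every index where the gain
    exceeds the cost; the price sits between the marginal pair.
    Returns (number of trades, clearing price)."""
    k = -1
    for i, (c, g) in enumerate(zip(costs, gains)):
        if g > c:
            k = i                       # last index with g[k] > c[k]
    if k < 0:
        return 0, None                  # no mutually beneficial trade
    price = (gains[k] + costs[k]) / 2
    return k + 1, price

trades, p = match_market([1, 2, 5, 9], [10, 6, 4, 3])
assert trades == 2          # only the first two pairs have g > c
assert p == (6 + 2) / 2     # price between marginal gain and cost
```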

    The question is, is this kind of market the right model for Bitcoin transactions? To answer this question, let us try to put all of the players into roles. The resource is the service of transaction processing, and the people benefitting from the resource, the transaction senders, are also the buyers paying transaction fees. So far, so good. The sellers are obviously the miners. But who is incurring the costs? Here, things get tricky. For each individual transaction that a miner includes, the costs are borne not just by that miner, but by every single node in the entire network. The cost per transaction is tiny; a miner can process a transaction and include it in a block for less than $0.00001 worth of electricity and data storage. The reason why transaction fees need to be high is because that $0.00001 is being paid by thousands of nodes all around the world.

    It gets worse. Suppose that the net cost to the network of processing a transaction is close to $0.05. In theory, even if the costs are not borne by exactly the same people who set the prices, as long as the transaction fee is close to $0.05 the system would still be in balance. But the miners who actually set the fees each bear only about $0.00001 of that cost themselves, so in a competitive market they have the incentive to accept any transaction paying more than $0.00001, and competition pushes fees down toward that level, thousands of times below the true social cost.

    Now, suppose that the mining ecosystem is more oligarchic, with one pool controlling 25% of all mining power. What are the incentives then? Here, it gets more tricky. The mining pool can actually choose to set its minimum fee higher, perhaps at $0.001. This may seem like the pool is forgoing profit opportunities between $0.00001 and $0.00099, but what is also happening is that many of the transaction senders who were paying between $0.00001 and $0.00099 before now have the incentive to increase their fees to make sure this pool confirms their transactions; otherwise, they would need to wait an average of 3.3 minutes longer. Thus, the fewer miners there are, the higher fees go, even though a reduced number of miners actually means a lower network cost of processing all transactions.

    From the above discussion, what should become painfully clear is that transaction processing simply is not a market, and therefore trying to apply market-like mechanisms to it is an exercise in random guessing at best, and a scalability disaster at worst. So what are the alternatives? The economically ideal solution is one that has often been brought up in the context of global warming, perhaps the largest geopolitical tragedy of the commons scenario in the modern world: Pigovian taxes.

    Price Setting without A Market

    The way a Pigovian tax works is simple. Through some mechanism, the total net cost of consuming a certain quantity of a common resource (eg. network computation, air purity) is calculated. Then, everyone who consumes that resource is required to pay that cost for every unit of the resource that they consume (or for every unit of pollution that they emit). The challenge in Pigovian taxation, however, is twofold. First, who gets the revenue? Second, and more importantly, there is no way to opt out of pollution, and thus no way for the market to extract people’s preferences about how much they would need to gain in order to suffer a given dose of pollution; thus, how do we set the price?

    In general, there are three ways of solving this problem:

    1. Philosopher kings set the price, and disappear as the price is set in stone forever.
    2. Philosopher kings maintain active control over the price.
    3. Some kind of democratic mechanism.

    There is also a fourth way, some kind of market mechanism which randomly doles out extra pollution to certain groups and attempts to measure the extent to which people (or network nodes in the context of a cryptocurrency) are willing to go to avoid that pollution; this approach is interesting but heavily underexplored, and I will not attempt to examine it at this point in time.

    Our initial strategy was (1). Ripple’s strategy is (2). Now, we are increasingly looking to (3). But how would (3) be implemented? Fortunately, cryptocurrency is all about democratic consensus, and every cryptocurrency already has at least two forms of consensus baked in: proof of work and proof of stake. I will show two very simple protocols for doing this right now:

    Proof of work Protocol

    1. If you mine a block, you have the right to set a value in the “extra data field”, which can be anywhere from 0-32 bytes (this is already in the protocol)
    2. If the first byte of this data is 0, nothing happens
    3. If the first byte of this data is 1, we set block.basefee = block.basefee + floor(block.basefee / 65536)
    4. If the first byte of this data is 255, we set block.basefee = block.basefee – floor(block.basefee / 65536)
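The proof-of-work rules above amount to a tiny state update per block; a Python sketch, using the byte values from the list:

```python
def update_basefee(basefee, extra_data):
    """Apply the proof-of-work fee vote from a block's extra data:
    a first byte of 1 nudges the basefee up by 1/65536, 255 nudges
    it down, and anything else (or empty data) leaves it unchanged."""
    if not extra_data:
        return basefee
    step = basefee // 65536
    if extra_data[0] == 1:
        return basefee + step
    if extra_data[0] == 255:
        return basefee - step
    return basefee

fee = 10**9
fee = update_basefee(fee, b"\x01")     # one up-vote
assert fee == 10**9 + 10**9 // 65536
fee = update_basefee(fee, b"\xff")     # one down-vote
```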

    Proof of stake Protocol

    1. After each block, calculate h = sha256(block.parenthash + address) * block.address_balance(address) for each address
    2. If h > 2^256 / difficulty, where difficulty is a set constant, that address can sign either 1, 0 or 255 and create a signed object of the form [ val, v, r, s ]
    3. The miner can then include that object in the block header, giving the miner and the stakeholder some minuscule reward.
    4. If the data is 1, we set block.basefee = block.basefee + floor(block.basefee / 65536)
    5. If the data is 255, we set block.basefee = block.basefee – floor(block.basefee / 65536)

    The two protocols are functionally close to identical; the only difference is that in the proof of work protocol miners decide on the basefee and in the proof of stake protocol ether holders do. The question is, do miners and ether holders have their incentives aligned to set the fee fairly? If transaction fees go to miners, then miners clearly do not. However, if transaction fees are burned, and thus their value goes to all ether holders proportionately through reduced inflation, then perhaps they do. Miners and ether holders both want to see the value of their ether go up, so they want to set a fee that makes the network more useful, both in terms of not making it prohibitively expensive to make transactions and in terms of not setting a high computational load. Thus, in theory, assuming rational actors, we will have fees that are at least somewhat reasonable.

    Is there a reason to go one way or the other in terms of miners versus ether holders? Perhaps there is. Miners have the incentive to see the value of ether go as high as possible in the short term, but perhaps not so much in the long term, since a prolonged rise eventually brings competition which cancels out the miners’ increased profit. Thus, miners might end up adopting a looser policy that imposes higher costs (eg. data storage) on miners far down the line. Ether holders, on the other hand, seem to have a longer term interest. On the other hand, miners are somewhat “locked in” to mining ether specifically, especially if semi-specialized or specialized hardware gets involved; ether holders, on the other hand, can easily hop from one market to the other. Furthermore, miners are less anonymous than ether holders. Thus, the issue is not clear cut; if transaction fees are burned one can go either way.


  • Conference, Alpha Testnet and Ether Pre-sale Updates | Ethereum Foundation Blog

    Conference, Alpha Testnet and Ether Pre-sale Updates | Ethereum Foundation Blog


    Important notice: any information from this post regarding the ether sale is highly outdated and probably inaccurate. Please only consult the latest blog posts and official materials at ethereum.org for information on the sale.

    Ethereum received an incredible response at the Miami Bitcoin Conference. We traveled there anticipating many technical questions as well as a philosophical discussion about the purpose of Ethereum; however, the overwhelming amount of interest and enthusiasm for the project was much larger than we had anticipated. Vitalik’s presentation was met with both a standing ovation and a question queue that took hours to address.

    Because we intend on providing an equal opportunity to all those who want to be involved, and are reviewing the relevant logistical and regulatory issues for a token sale of this scale, we have decided to postpone the Feb 1 launch of the sale. We will make the announcement of the new sale launch date on our official website: Ethereum.org.

    The Ethereum project is also excited to announce the alpha release of the open source testnet client to the community at the beginning of February. This will give people an opportunity to get involved with the project and experiment with Ethereum scripts and contracts, and gain a better understanding of the technical properties of the Ethereum platform. Launching the testnet at this date will give those interested in the fundraiser a chance to better understand what the Ethereum project is about before participating.

    The testnet will include full support for sending and receiving transactions, the initial version of the scripting language as described in our whitepaper, and may or may not include mining. This will also be the first major cryptocurrency project to have two official clients released at the same time, with one written in C++ and the other in Go; a Python client is also in the works. A compiler from the Ethereum CLL to Ethereum script will be released very soon.

    A note on security

    The ether sale will NOT be launching on February 1st, and any attempt to collect funds at this time should be considered a scam. There have been some scams in the forums, so please use caution and only consider information posted on Ethereum.org to be legitimate. It is important to reinforce to everyone that only information released and posted at Ethereum.org should be trusted, as many are likely to impersonate us.


  • Ethereum: Now Going Public | Ethereum Foundation Blog

    Ethereum: Now Going Public | Ethereum Foundation Blog


    I first wrote the initial draft of the Ethereum whitepaper on a cold day in San Francisco in November, as a culmination of months of thought and often frustrating work into an area that we have come to call “cryptocurrency 2.0” – in short, using the Bitcoin blockchain for more than just money. In the months leading up to the development of Ethereum, I had the privilege to work closely with several projects attempting to implement colored coins, smart property, and various types of decentralized exchange. At the time, I was excited by the sheer potential that these technologies could bring, as I was acutely aware that many of the major problems still plaguing the Bitcoin ecosystem, including fraudulent services, unreliable exchanges, and an often surprising lack of security, were not caused by Bitcoin’s unique property of decentralization; rather, these issues are a result of the fact that there was still great centralization left, in places where it could potentially quite easily be removed.

    What I soon realized, however, was the sheer difficulty that many of these projects were facing, and the often ugly technological hacks that were required to make them work. And, once one looks at the problem carefully, the culprit becomes obvious: fragmentation. Each individual project was attempting to implement its own blockchain or meta-layer on top of Bitcoin, and considerable effort was being duplicated and interoperability lost as a result. Eventually, I realized that the key to solving the problem once and for all was a simple insight that the field of computer science first conceived in 1935: there is no need to construct a separate infrastructure for each individual feature and implementation; rather, it is possible to create a Turing-complete programming language, and allow everyone to use that language to implement any feature that can be mathematically defined. This is how our computers work, and this is how our web browsers work; and, with Ethereum, this is how our cryptocurrencies can work.

    Since that moment, Ethereum has come very far over the past two months. The Ethereum team has expanded to include dozens of developers including Gavin Wood and Jeffrey Wilcke, lead developers of the C++ and Go implementations, respectively, as well as others including Charles Hoskinson, Anthony Di Iorio and Mihai Alisie, and dozens of other incredibly talented individuals who are unfortunately too many to mention. Many of them have even come to understand the project so deeply as to be better at explaining Ethereum than myself. There are now over fifteen people in our developer chat rooms actively working on the C++ and Go implementations, which are already surprisingly close to having all the functionality needed to run in a testnet. Aside from development effort, there are dozens of people operating around the world in our marketing and community outreach team, developing the non-technical infrastructure needed to make the Ethereum ecosystem the solid and robust community that it deserves to be. And now, at this stage, we have made a collective decision that we are ready to take our organization much more public than we have been before.

    What Is Ethereum

    In short, Ethereum is a next-generation cryptographic ledger that intends to support numerous advanced features, including user-issued currencies, smart contracts, decentralized exchange and even what we think is the first proper implementation of decentralized autonomous organizations (DAOs) or companies (DACs). However, this is not what makes Ethereum special. Rather, what makes Ethereum special is the way that it does this. Instead of attempting to specifically support each individual type of functionality as a feature, Ethereum includes a built-in Turing-complete scripting language, which allows you to code the features yourself through a mechanism known as “contracts”. A contract is like an autonomous agent that runs a certain piece of code every time a transaction is sent to it, and this code can modify the contract’s internal data storage or send transactions. Advanced contracts can even modify their own code.

    A simple example of a contract would be a basic name registration system, allowing users to register their name with their address. This contract would not send transactions; its sole purpose is to build up a database which other nodes can then query. The contract, written in our high-level C-Like Language (CLL) (or perhaps more accurately Python-like language), looks as follows:

    if tx.value < block.basefee * 200:
        stop
    if contract.storage[tx.data[0]] or tx.data[0] < 100:
        stop
    contract.storage[tx.data[0]] = tx.data[1]

    And there you have it. Five lines of code, executed simultaneously by thousands of nodes around the world, and you have the beginnings of a solution to a major problem in cryptography: human-friendly authentication. It is important to point out that when the original version of Ethereum’s scripting code was designed we did not have name registration in mind; rather, the fact that this is possible came about as an emergent property of its Turing-completeness. Hopefully this will give you an insight into exactly what Ethereum makes possible; for more applications, with code, see the whitepaper. Just a few examples include:

    • User-issued currencies / “colored coins”
    • Decentralized exchange
    • Financial contracts, including leverage trading and hedging
    • Crop insurance
    • Savings wallets with withdrawal limits
    • Peer to peer gambling
    • Decentralized Dropbox-style data storage
    • Decentralized autonomous organizations

    Perhaps now you see why we are excited.
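    To make the execution model concrete, here is a minimal Python sketch of how a node might apply the name-registration contract above to an incoming transaction. The `Contract` class, the transaction shape and the fee handling are illustrative assumptions of mine, not actual Ethereum client code:

    ```python
    class Contract:
        """Toy model of a node executing the registration contract."""

        def __init__(self):
            self.storage = {}  # the contract's persistent key-value storage

        def run(self, tx, basefee):
            # Refuse underpaid transactions (mirrors `tx.value < block.basefee * 200`).
            if tx["value"] < basefee * 200:
                return
            key, value = tx["data"][0], tx["data"][1]
            # Refuse names that are already taken, or reserved (below 100).
            if self.storage.get(key) or key < 100:
                return
            self.storage[key] = value
    ```

    Every node runs the same code against the same transactions, so every node independently arrives at the same registration database.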

    Who Is Ethereum

    The core Ethereum team includes four members:

     

    Vitalik Buterin Vitalik Buterin first joined the Bitcoin community in March 2011, and co-founded Bitcoin Magazine with Mihai Alisie in September 2011. He was admitted to the University of Waterloo to study computer science in 2012, and in 2013 made the decision to leave Waterloo to travel through Bitcoin communities around the world and work on Bitcoin projects full time. Vitalik is responsible for a number of Bitcoin projects, including pybitcointools, a fork of BitcoinJS, and multisig.info; now, he has returned to Canada and is fully dedicated to working on Ethereum.
    Mihai Alisie Mihai Alisie’s first foray into the Bitcoin community was Bitcoin Magazine, in September 2011. From Issue #1, which was shipped from his living room in Romania, to today, Bitcoin Magazine bears Mihai’s imprint, and he has grown with the magazine. What started out as a team of people with no experience in the publishing industry is now distributing a physical magazine internationally, including in Barnes & Noble bookstores across the US. Mihai is also involved in an innovative online e-commerce startup known as Egora.
    Anthony Di Iorio Anthony Di Iorio is a Founding Member, Board Member & Executive Director of the Bitcoin Alliance of Canada, Founder of the Toronto Bitcoin Meetup Group, and partner / founder in various Bitcoin start-ups and initiatives including the in-browser Bitcoin wallet KryptoKit, Cointalk, the Toronto-based Bitcoin hub and coworking space Bitcoin Decentral, Bitcoin Across America, and the Global Bitcoin Alliance.
    Charles Hoskinson Charles Hoskinson is an entrepreneur and cryptographer actively working on ventures in the Bitcoin ecosystem. He founded both the Bitcoin Education Project and Invictus Innovations prior to accepting his current role as a core developer of the Ethereum Project. He studied at Metropolitan State University of Denver and University of Colorado at Boulder with an emphasis in Analytic Number Theory. Charles is known for his love of economics, horology and MOOCs alongside a passion for chess and games of strategy.

    We also have an excellent team of developers, entrepreneurs, marketers and evangelists:

    • Dr. Gavin Wood: Core C++ Developer
    • Geff Obscura: Core Go Developer
    • Dr. Emanuele Costa: Quantitative Analyst; SCRUM Master
    • Joseph Lubin: Software Engineering, Quantitative Analyst
    • Eric Lombrozo: Software Architect
    • Max Kaye: Developer
    • Jonathan Mohan: Media, Marketing and Evangelism (BitcoinNYC)
    • Wendell Davis: Strategic Partner and Branding (Hive Wallet)
    • Anthony Donofrio: Logos, branding, Web Development (Hive Wallet)
    • Taylor Gerring: Web Development
    • Paul Snow: Language Development, Software Development
    • Chris Odom: Strategic Partner, Developer (Open Transactions)
    • Jerry Liu and Bin Lu: Chinese strategy and translations (http://www.8btc.com/ethereum)
    • Hai Nguyen: Accounting
    • Amir Shetrit: Business Development (Colored Coins)
    • Steve Dakh: Developer (KryptoKit)
    • Kyle Kurbegovich: Media (Cointalk)

    Looking Forward

    I personally will be presenting at the Bitcoin conference in Miami on Jan 25-26. Soon after that, on February 1, the ether pre-sale will begin, at which point anyone will be able to obtain some of the initial pre-allocated ether (Ethereum’s internal currency) at a rate of 1000-2000 ether per BTC by going to http://fund.ethereum.org. The pre-sale will run throughout February and March, and early funders will get higher rewards: anyone who sends money in the first seven days will receive the full 2000 ether per BTC, then 1980 ether on the 8th day, 1960 on the 9th day, and so forth until the baseline rate of 1000 ether per BTC is reached, which will hold for the last three days of the pre-sale.
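    The schedule works out to the following, sketched in Python; the exact sale length is an assumption on my part (the sale runs through February and March, roughly 60 days):

    ```python
    def ether_per_btc(day):
        """Ether received per BTC on a given day of the pre-sale (day 1 = opening).

        Sketch of the schedule described above: the full 2000 ether for the first
        seven days, then 20 less per day until the 1000-ether baseline is reached.
        """
        if day <= 7:
            return 2000
        return max(1000, 2000 - 20 * (day - 7))
    ```

    Under this reading, the rate hits the 1000-ether baseline on day 57, leaving the final days of a roughly two-month sale at the baseline.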

    We will be able to develop fully functional and robust Ethereum clients with as little as 500 BTC of funding at current rates; basic implementations in Go, C++ and Python are already coming close to testnet quality. However, we are seeking to go much further than that. Ethereum is not “just another altcoin”; it is a new way forward for cryptocurrency, and ultimately for peer-to-peer protocols as a whole. To that end, we would like to invest a large quantity of funds into securing top-notch talent, not only to improve the security and scalability of the Ethereum network itself but also to support a robust Ethereum ecosystem, hopefully bringing other cryptocurrency and peer-to-peer projects into our fold. We are already well underway in talks with KryptoKit, Humint and OpenTransactions, and are interested in working with other groups such as Tahoe-LAFS, Bitmessage and Bitcloud as well.

    All of these projects can potentially benefit from integrating with the Ethereum blockchain in some fashion, simply because the layer is so universal; because of its Turing-completeness, an Ethereum contract can be constructed to incentivize nearly everything, and even entirely non-financial uses such as public key registration have extremely wide-reaching benefits for any decentralized cryptographic product that intends to include, for example, a social network. All of these projects will add great value to the Ethereum ecosystem, and the Ethereum ecosystem will hopefully add great value to them. We do not wish to compete with any organization; we intend to work together.

    Throughout the fundraiser, we will be working hard on development; we will release a centralized testnet, a server to which anyone can push contracts and transactions, very soon, and will then follow up with a decentralized testnet to test networking features and mining algorithms. We also intend to host a contest, similar to those used to decide the Advanced Encryption Standard (AES) in 2001 and SHA-3 in 2012, in which we invite researchers from universities around the world to compete to develop the best possible specialized-hardware-resistant, centralization-resistant and fair mining algorithms, and will also explore alternatives such as proof of stake, proof of burn and proof of excellence. Further details will be released in February.

    Finally, to promote local community development, we also intend to create public community hubs and incubators, which we are tentatively calling “holons”, in several cities around the world. The first holon will be based inside of Bitcoin Decentral in Toronto, and a substantial portion of Ethereum development will take place there; anyone who is seriously interested in participating heavily in Ethereum should consider giving us a visit over the next month. Other cities we are looking into include San Francisco, Amsterdam, Tel Aviv and some city in Asia; this part of the project is still in a very early phase of development, and more details will come over the next month.

    For now feel free to check out our resources:


    Reddit: http://reddit.com/r/ethereum


  • Slasher: A Punitive Proof-of-Stake Algorithm | Ethereum Foundation Blog


    The purpose of this post is not to say that Ethereum will be using Slasher in place of Dagger as its main mining function. Rather, Slasher is a useful construct to have in our war chest in case proof of stake mining becomes substantially more popular or a compelling reason is provided to switch. Slasher may also benefit other cryptocurrencies that wish to exist independently of Ethereum. Special thanks to tacotime for some inspiration, and to Jack Walker for improvement suggestions.

    Proof of stake mining has for a long time been a large area of interest to the cryptocurrency community. The first proof-of-stake based coin, PPCoin, was released by Sunny King in 2012, and has consistently remained among the top five alternative currencies by monetary base since then. And for good reason; proof of stake has a number of advantages over proof of work as a mining method. First of all, proof of stake is much more environmentally friendly; while proof of work requires miners to effectively burn computational power on useless calculations to secure the network, proof of stake effectively simulates the burning, so no real-world energy or resources are ever actually wasted. Second, there are centralization concerns. With proof of work, mining has been essentially dominated by specialized hardware (“application-specific integrated circuits” / ASICs), and there is a large risk that a single large player such as Intel or a major bank will take over and de-facto monopolize the market. Memory-hard mining algorithms like Scrypt and now Dagger mitigate this to a large extent, though not perfectly. Once again, proof of stake, if it can be made to work, is essentially a perfect solution.

    However, proof of stake, as implemented in nearly every currency so far, has one fundamental flaw: as one prominent Bitcoin developer put it, “there’s nothing at stake”. The meaning of the statement becomes clear when we attempt to analyze what exactly is going on in the event of an attempted 51% attack, the situation that any kind of proof-of-work-like mechanism is intended to prevent. In a 51% attack, an attacker A sends a transaction from A to B, waits for the transaction to be confirmed in block K1 (with parent K), collects a product from B, and then immediately creates another block K2 on top of K – with a transaction sending the same bitcoins, but this time from A to A. At that point, there are two blockchains, one starting from block K1 and another from block K2. If A can add blocks on top of K2 faster than the entire legitimate network can create blocks on top of K1, the K2 blockchain will win – and it will be as if the payment from A to B had never happened. The point of proof of work is to make it take a certain amount of computational power to create a block, so that in order for K2 to outrace K1, A would have to have more computational power than the entire legitimate network combined.

    In the case of proof of stake, it doesn’t take computational power to create a block – instead, it takes money. In PPCoin, every “coin” has a chance per second of becoming the lucky coin that has the right to create a new valid block, so the more coins you have the faster you can create new blocks in the long run. Thus, a successful 51% attack, in theory, requires not more computing power than the legitimate network, but more money than the legitimate network. But here we see the difference between proof of work and proof of stake: in proof of work, a miner can only mine on one fork at a time, so the legitimate network will support the legitimate blockchain and not an attacker’s blockchain. In proof of stake, however, as soon as a fork happens miners will have money in both forks at the same time, and so miners will be able to mine on both forks. In fact, if there is even the slightest chance that the attack will succeed, miners have the incentive to mine on both. If a miner has a large number of coins, the miner will want to oppose attacks to preserve the value of their own coins; in an ecosystem with small miners, however, network security potentially falls apart in a classic public goods problem, as no single miner has substantial impact on the result and so every miner will act purely “selfishly”.
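    The incentive problem can be made concrete with a toy expected-value calculation; this is a sketch with made-up numbers, not a full economic model:

    ```python
    def expected_reward(mine_both_forks, p_attack_wins, block_reward):
        """Expected mining reward for a small PoS miner during a fork.

        A miner who mines on both forks collects the reward no matter which
        fork wins; one who mines only the legitimate fork is paid only if
        the attack fails. All numbers are illustrative.
        """
        if mine_both_forks:
            return block_reward
        return (1 - p_attack_wins) * block_reward

    # Even a 1% chance of the attack succeeding makes mining on both forks
    # strictly more profitable for a miner too small to affect the outcome.
    assert expected_reward(True, 0.01, 10) > expected_reward(False, 0.01, 10)
    ```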

    The Solution

    Some have theorized that the above argument is a deathblow to all proof of stake, at least without a proof of work component assisting it. And in a context where every chain is only aware of itself, this is indeed provably true. However, there is actually one clever way to get around the issue, and one which has so far been underexplored: make the chain aware of other chains. Then, if a miner is caught mining on two chains at the same time, that miner can be penalized. However, it is not at all obvious how to do this with a PPCoin-like design. The reason is this: mining is a random process. That is to say, a miner with 0.1% of the stake has a 0.1% chance of mining a valid block on block K1, and a 0.1% chance of mining a valid block on block K2, but only a 0.0001% chance of mining a valid block on both. And in that case, the miner can simply hold back the second block – because mining is probabilistic, the miner can still gain 99.9% of the benefit of mining on the second chain.

    The following proposal, however, outlines an algorithm, which we are calling Slasher to express its harshly punitive nature, for avoiding this problem. The design description given here uses address balances for clarity, but can easily be adapted to work with “unspent transaction outputs”, or any other similar abstraction that other currencies may use.

    1. Blocks are mined with proof of work. However, we make one modification. When creating a block K, a miner must include the value H(n) for some random n generated by the miner. The miner must claim the reward by releasing a transaction uncovering n between block K+100 and K+900. The proof of work reward is very low, ideally encouraging energy usage equal to about 1% of that of Bitcoin. The target block time is 30 seconds.
    2. Suppose the total money supply is M, and n[i] is the n value at block i. At block K+1000, an address A with balance B gains a “signing privilege” if sha256(n[K] + n[K+1] + … + n[K+99] + A) < 2^256 * 64 * B / M. Essentially, an address has a chance of gaining a signing privilege proportional to the amount of money that it has, and on average 64 signing privileges will be assigned each block.
    3. At block K+2000, miners with signing privileges from block K have the opportunity to sign the block. The number of signatures is what determines the total length of one blockchain versus another. A signature awards the signer a reward that is substantially larger than the proof of work reward, and this reward will unlock by block K+3000.
    4. Suppose that a user detects two signatures made by address A on two distinct blocks with height K+2000. That node can then publish a transaction containing those two signatures, and if that transaction is included before block K+3000 it destroys the reward for that signature and sends 33% to the user that ratted the cheater out.
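    Step 2’s signing-privilege condition can be sketched directly in Python; the function and parameter names here are my own, not part of the specification:

    ```python
    import hashlib

    def has_signing_privilege(revealed_ns, address, balance, money_supply):
        """Check the step-2 condition:
        sha256(n[K] + n[K+1] + ... + n[K+99] + A) < 2^256 * 64 * B / M.

        revealed_ns: the 100 revealed values n[K] .. n[K+99], as bytes
        address: the candidate address A, as bytes
        """
        h = int.from_bytes(
            hashlib.sha256(b"".join(revealed_ns) + address).digest(), "big")
        # The chance of privilege is proportional to the address's balance,
        # averaging 64 privileges per block across the whole money supply.
        return h < (2**256 * 64 * balance) // money_supply
    ```

    An address holding the entire money supply always qualifies (the threshold exceeds any 256-bit hash), an address with zero balance never does, and everything in between qualifies with probability 64 * B / M.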

    The key to this design is how the signing privileges are distributed: instead of the signing privilege being randomly based on the previous block, the signing privilege is based on the block two thousand blocks ago. Thus, in the event of a fork, a miner that gets lucky in one chain will also get lucky in the other, completely eliminating the probabilistic dual-mining attack that is possible with PPCoin. Another way of looking at it is that because Slasher uses proof-of-stake-2000-blocks-ago instead of proof-of-stake now, and forks will almost certainly not last 2000 blocks, there is only one currency supply to mine with, so there is indeed “something at stake”. The penalty of block reward loss ensures that every node will take care to sign only one block at each block number.

    The use of 100 pre-committed random numbers is an idea taken from provably fair gambling protocols; the idea is that powerful miners have no way of attempting to create many blocks and publishing only those that assign their own stake a signing privilege, since they do not know what any of the other random data used to determine the stakeholder is when they create their blocks.
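    The commit-reveal mechanism behind those pre-committed numbers is simple to sketch:

    ```python
    import hashlib
    import os

    # Commit-reveal sketch: when creating block K, the miner publishes only
    # H(n); the secret n is revealed later (between K+100 and K+900), so the
    # miner cannot grind over choices of n once the commitment is on-chain.
    n = os.urandom(32)                        # miner's secret random value
    commitment = hashlib.sha256(n).digest()   # included in block K

    # At reveal time, any node verifies the claimed n against the commitment.
    assert hashlib.sha256(n).digest() == commitment
    ```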

    The system is not purely proof-of-stake; some minimal proof-of-work will be required to maintain a time interval between blocks. However, a 51% attack on the proof of work would be essentially inconsequential, as proof of stake signing is the sole deciding factor in which blockchain wins. Furthermore, the energy usage from proof of work can be made to be 95-99% lower, resolving the environmental concern with proof of work.


  • Bootstrapping a Decentralized Autonomous Corporation, Part 3: Identity Corp | Ethereum Foundation Blog


    In the first two parts of this series, we talked about what the basic workings of a decentralized autonomous corporation might look like, and what kinds of challenges it might need to deal with to be effective. However, there is still one question that we have not answered: what might such corporations be useful for? Bitcoin developer Jeff Garzik once suggested that one application might be a sort of decentralized Dropbox, where users can upload their files to a resilient peer-to-peer network that would be incentivized to keep those files reliably backed up. But aside from this particular example, what other applications might there be? What are the industries where decentralized corporations will not simply be a gimmick, but will rather be able to survive on their own merits and provide genuine value to society?

    Arguably, there are three major categories where this is the case. First, there are the natural monopolies. For certain kinds of services, it simply makes no sense to have many hundreds of competing offerings all working at the same time; software protocols, languages and to some extent social networks and currencies all fit into this model. However, if the providers of these services are not held in check by a competitive market, the question is, who does hold them in check? Who ensures that they charge a fair market price for their services, and do not set monopoly prices thousands of times above what the product actually costs to produce? A decentralized corporation can theoretically be designed so that no one involved in the price-setting mechanism has any such incentive. More generally, decentralized corporations can be made invulnerable to corruption in ways unimaginable in human-controlled systems, although great care would certainly need to be taken not to introduce other vulnerabilities instead; Bitcoin itself is a perfect example of this.

    Second, there are services that violate government laws and regulations; the use of decentralized file-sharing networks for copyright infringement, and to a much lesser extent the use of Bitcoin on sites like Silk Road, are both examples. As Satoshi Nakamoto put it, “Governments are good at cutting off the heads of a centrally controlled networks like Napster, but pure P2P networks like Gnutella and Tor seem to be holding their own.” Finally, there are those cases where a decentralized network can simply maintain itself more efficiently and provide better services than any centralized alternative; the peer-to-peer network used by Blizzard to distribute updates to its massively multiplayer online game World of Warcraft is perhaps one of the purest examples.

    The rest of this article will outline one particular idea for a decentralized corporation that can potentially open up a number of new possibilities in cryptocurrency, creating designs that have vastly different properties from the cryptocurrencies we see today while still staying close to the cryptocurrency ideal. The basic concept is this: Identity Corp, a corporation whose sole purpose is to create cryptographically secure identity documents for individuals that they could sign messages with, and are linked to individuals’ physical identities.

    What’s The Point?

    At first, the idea of creating yet another way to track people’s identity seems silly. Here we are, having escaped the shackles of state-backed fiat currency and its onerous anti-money-laundering identity verification requirements and gotten into the semi-anonymous world of Bitcoin, and I’m suggesting that we bring identity verification right back to the table? But of course, the choice between “nymity” and anonymity is not nearly so simple. Even individuals facing potential lifetime imprisonment, such as Silk Road founder Dread Pirate Roberts, still tend to maintain some kind of identity – in the aforementioned case, the identity is “Dread Pirate Roberts” itself. Why does he (or perhaps she, we may never know) do that? The answer is simple: he is also running a multimillion dollar business – namely, the online anonymous marketplace Silk Road – and he needs to provide customers some reassurance that he can be trusted. Legal and even semi-legal businesses often show themselves in public, deliberately making themselves vulnerable to both government prosecution and harassment of varying degrees from disaffected customers. Why do that? To show the world that they now have an extra incentive to act honestly. The “crypto” in cryptography does come from the Greek word for hiding, but in reality cryptography is as often about verifying your identity as it is about concealing it.

    However, the sort of “identity” used by Dread Pirate Roberts is different from the identity we are talking about here. The function of standard public key cryptographic identity is a limited one: to provide proof that two messages were created (or at least signed) by the same entity. This definition may seem strange at first; usually, we think of identities as determining “who someone is”. In reality, however, just like in the principle of relativity in physics, in the context of identity and reputation theory there is no “preferred frame” for determining which set of observations of a person constitute that core person, or if a person has multiple names which name is his or her “real name”. If I write articles as “Vitalik Buterin”, but make internet posts as “djargon135”, it is equally legitimate to say “djargon135 is actually Vitalik Buterin” as it is to say “Vitalik Buterin is actually djargon135”; in either case, what matters is that one set of messages claimed to be written by djargon135, and another set of messages claimed to be written by Vitalik Buterin, in fact have a common author. Under this framework, a “real name” is distinguished from a “pseudonym” in one way and one way only: each entity can only have one real name. That is to say, while pseudonyms can be used to prove that two messages were created by the same entity, real names can also be used to prove that two messages were created by two different entities.

    But this still does not answer the question: why have real names at all? In fact, nearly all applications of a real name can be reduced to one fundamental concept: the giveaway. We all understand what a giveaway is: perhaps a corporation wishes to hand out a free sample of a product to attract potential customers, perhaps a homeless shelter with limited resources wants to feed everyone enough to survive, and thus not let anyone take triple portions for themselves, or perhaps a government agency administering a welfare program wants to prevent people from claiming welfare twice. The idea is simple: X units of some product, service or commodity per person, and if you want more you will have to get your second portion through other channels. One of the use cases of a “real name” used earlier, that of a company owner publishing his details to reassure customers that he is vulnerable to prosecution by law enforcement, does not look like an example of a giveaway, but in fact that company owner is a recipient of a particularly special kind of giveaway in society: that of reputation. In a public key reputation environment, an identity can be created at no cost, so everyone starts out with zero reputation, making business difficult at first. In a real-name system, however, everyone immediately starts out with one pre-made identity, and no way to acquire more, making that identity “expensive” and thus giving them a fixed quantity of reputation to start out with. Instead of one free sample per person, it’s one free reputation per person, but the principle is the same.

    How To Implement It

    Actually implementing such a system, of course, is a challenge. It is very difficult to do with any purely over-the-internet mechanism because anyone can trivially create multiple identities and make them all act like different people. It is certainly possible to weed out some fraud by applying statistical analysis on the messages that everyone signs (eg. if two different identities both consistently spell “actualy” instead of “actually”, that is some strong evidence that they might be linked); however, this can easily be circumvented by combining a spellchecker with a program that deliberately inserts spelling errors and rearranges some grammatical constructions. These tactics can perhaps be themselves corrected for, but ultimately relying solely or even largely on such mechanisms is a recipe for statistical warfare, not any kind of stable identity system.

    So what’s left? Offline mechanisms. DNA-based identity is the most obvious, although face, iris and fingerprint scans can also add themselves to the list. Currently, government-based identity systems do not use this information too much because government identity documents follow a centralized parent-child model: you want a social insurance number, you need to provide your passport; you lost your passport, you provide a birth certificate and possibly change-of-name certificates if applicable. Ultimately, everything usually depends on a combination of the birth certificate and face recognition on the part of the government agents administering the system. A decentralized system to accomplish this can use both mechanisms, although many will argue that having the ability in theory to register without providing any government documents is a strong positive – it should be possible to get an identity through the system without necessarily tying in one’s government-backed “real name” (in the usual sense of the term, not my own distinction given above). If this is not possible, then some kind of mixnet-like setup could be used to anonymize identities once they have been created while still maintaining the one-per-person limit. However, attempts at fraud would likely be much more frequent; governments are not, at least at first, going to use any legal mechanisms to enforce anti-fraud rules with these identities as they do with their own documents.

    From the above information, it becomes easy to imagine how one might create a centralized organization that accomplishes this objective. The organization would have an office, people would go in, have their biometrics (face, fingerprint, iris, maybe DNA) checked, and would then receive their fresh new cryptographic passport. Why not stop there? In this case, the answer is that the natural monopoly argument applies. Even if the system may have multiple identity providers, they would all need to cross-check information with each other to prevent multiple signups, and the resulting system would necessarily be the only one of its kind.

    If this system is managed by a corporation, that corporation would have the incentive to start charging high fees once its product becomes ubiquitous and necessary. If it is managed by a government, then the government would have the incentive to tie these identities to its own real names, and remove any privacy features (or at least install a backdoor for itself). Furthermore, it might want the ability to revoke identities as a punishment, and if large parts of the internet (and society at large) start relying on these mechanisms it would become much harder to survive as a fugitive or dissident. Furthermore, there comes another question: which government specifically would administer the system? Even supposedly worldwide bodies like the United Nations are not universally trusted, often precisely because they are such perfect targets for corruption among anyone trying to secure any kind of worldwide control. Thus, to both avoid a corporation subverting the system for profit and a government subverting the system for its own political ends, placing the power into the hands of a decentralized network, if possible, is arguably the best option.

    But how is it possible? Identity Corp can certainly avoid the truly difficult challenge of actively interacting with the world because all it does is provide information. However, receiving data about the world, including its users’ biometric information, would be nevertheless very challenging. There are no public APIs for such information; the only option would be for some human agent, or group of agents, to collect it. The channel of communication between the humans and the network will be simply digital bits, so it is very easy to see how these agents themselves could defraud the system: they could create many different identities for fake individuals with fake data.

    The only solution seems to be, once again, decentralization and redundancy: have many different agents collecting the same information, and require individuals looking to get an identity to confirm it with several different agents, ideally randomly (or otherwise) selected by the system itself. These agents would all send out messages to the network containing both biometric data and the identity that data is mapped to, perhaps encrypted using some cryptographic mechanism that allows two datasets to be checked to see if they are nearly identical but reveals nothing else. If two different agents assign two identities to the same biometric data, the second identity can be rejected. If someone tries to register an identity with fake biometric data, they will need to convince a number of specific organizations to somehow accept it. Finally, the system should also include a mechanism for detecting and correcting fraud after the fact, perhaps using some sort of special-purpose decentralized “court”.
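    The duplicate-rejection rule described above (refuse a second identity mapped to biometric data the network has already seen) can be sketched as follows; all names are hypothetical, and exact-match lookup stands in for the fuzzy “nearly identical” comparison the real system would need:

    ```python
    # The network's view: a registry keyed by a fingerprint of the submitted
    # biometric data, mapping each person's biometrics to a single identity.
    registry = {}

    def register(biometric_fingerprint, identity):
        """Accept a registration unless the biometrics already map elsewhere."""
        existing = registry.get(biometric_fingerprint)
        if existing is not None and existing != identity:
            return False  # a second identity for the same person: rejected
        registry[biometric_fingerprint] = identity
        return True
    ```

    Re-confirming the same identity with another agent succeeds, but an attempt to claim a second identity with the same biometrics fails, which is exactly the one-identity-per-person property the giveaway argument requires.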

    The second challenge is figuring out exactly who these “agents” are going to be. The system should be able to resist Sybil attacks (the technical term for an attacker pretending to be a million entities so as to take control of a network that relies on consensus), and weed out bad agents without that mechanism itself being subject to bad agents or Sybil attacks. Proof-of-work and proof-of-stake are not enough; since we cannot expect each individual to travel around the world giving their biometric information to 51% of the network, in practice it may take as little as 10% or even 5% of the network to pull off fraud on a large scale. Thus, it is quite probable that building a purely decentralized corporation to accomplish this task will be impossible; rather, the best we can hope for is a hybrid system that uses heavy support from humans to keep the network in balance, but at the same time uses the network’s cryptographic properties to force the system to stick to its original mission. This would be somewhere between a legal contract or constitution and a true decentralized network, but the distinction there is a very fluid one; as Lawrence Lessig is keen to point out, “code is law”.

    SocialCoin and the One World “Government”

    The existence of a decentralized “real name” system allows for a large number of possibilities that have so far been unexplored in the cryptocurrency world. One attractive possibility is SocialCoin, the cryptocurrency that pays everyone in the world a “world citizen’s dividend” of 1000 units per month; another, similar alternative is to plug the system into a Devcoin-like system, allowing people to come together and vote on projects that the money should be spent on, thereby creating what is essentially a (voluntary) “world government” that funds itself from the revenue from generating new currency units. How much money could such a government get while still maintaining a low inflation rate? Here, there are two factors to keep in mind: people dying and losing their coins forever, and actual inflation.

    Currently, when someone dies, their property goes to their children or spouse by default. In a cryptocurrency, however, by default a person’s monetary savings simply become inaccessible once their passwords are lost. This destruction of coins creates a deflationary pressure: take the current death rate of around 8 per 1000 per year, multiply by a factor of 2 to account for the fact that people tend to be somewhat wealthier than average at the time of their death, and then divide by 3 to take into account the fact that many people will have a system set up to ensure their wealth goes somewhere when they die (currently, about half the population has a will, and the divisor can be bumped to 3 since people with more money are more likely to have one), and we get an estimate of roughly 0.5% coin loss per year.
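
    The estimate above can be reproduced in a couple of lines:

```python
# Reproducing the article's back-of-the-envelope coin-loss estimate.
death_rate = 8 / 1000        # deaths per person per year
wealth_factor = 2            # the deceased hold roughly 2x average wealth
inheritance_divisor = 3      # ~2/3 of that wealth is passed on, ~1/3 is lost

coin_loss_rate = death_rate * wealth_factor / inheritance_divisor
print(f"{coin_loss_rate:.2%}")  # roughly 0.5% of coins lost per year
```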

    This, combined with a low target inflation rate of 1.5%, means that we can “print” 2% of the current money supply every year. Since cryptocurrencies will massively reduce the amount of fractional reserve banking in the world (the cryptocurrency base unit is itself digital, so individuals no longer “need” to store their money in banks in order to maintain savings accounts and make long-distance transactions), we can expect much of the world’s M2 and M3 money supply (ways of calculating money supply that include bank deposits) to become part of the base money supply of a cryptocurrency. The M2 money supply of the world is estimated at around $40 trillion, giving our world government a budget of $800 billion per year.
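
    Putting the figures above together:

```python
# The article's world-government budget, computed from its own assumptions.
coin_loss = 0.005        # 0.5% of coins irretrievably lost per year
target_inflation = 0.015 # 1.5% target inflation rate
issuance = coin_loss + target_inflation  # 2% of the money supply per year

m2_world = 40e12         # ~ $40 trillion world M2 money supply
budget = issuance * m2_world
print(f"${budget / 1e9:.0f} billion per year")  # $800 billion per year
```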


    In theory, a world government can do a lot with $800 billion per year; in practice, it remains to be seen how free from corruption such an institution would be, although in this case the fact that it will be controlled by direct democracy, and have no power to tax, can potentially serve as a powerful restraint on abuse. It would essentially be a government in the sense of being an entity tasked with maintaining social infrastructure, but would lack the power to coerce and compel that might make it particularly dangerous. Or, we can simply stick with SocialCoin, and leave it up to each individual to improve their lives the best that they can with their share of the dividend.

    See also:

    http://bitcoinmagazine.com/7050/bootstrapping-a-decentralized-autonomous-corporation-part-i/

    http://bitcoinmagazine.com/7119/bootstrapping-an-autonomous-decentralized-corporation-part-2-interacting-with-the-world/


  • Bootstrapping An Autonomous Decentralized Corporation, Part 2: Interacting With the World | Ethereum Foundation Blog



    In the first part of this series, we talked about how the internet allows us to create decentralized corporations, automatons that exist entirely as decentralized networks over the internet, carrying out the computations that keep them “alive” over thousands of servers. As it turns out, these networks can even maintain a Bitcoin balance, and send and receive transactions. These two capacities: the capacity to think, and the capacity to maintain capital, are in theory all that an economic agent needs to survive in the marketplace, provided that its thoughts and capital allow it to create sellable value fast enough to keep up with its own resource demands. In practice, however, one major challenge still remains: how to actually interact with the world around them.

    Getting Data

    The first of the two major challenges in this regard is that of input – how can a decentralized corporation learn any facts about the real world? It is certainly possible for a decentralized corporation to exist without facts, at least in theory; a computing network might have the Zermelo-Fraenkel set theory axioms embedded into it right from the start and then embark upon an infinite loop proving all possible mathematical theorems – although in practice even such a system would need to somehow know what kinds of theorems the world finds interesting; otherwise, we may simply learn that a+b = b+a, a+b+c = c+b+a, a+b+c+d = d+c+b+a and so on. On the other hand, a corporation that has some data about what people want, and what resources are available to obtain it, would be much more useful to the world at large.

    Here we must make a distinction between two kinds of data: self-verifying data, and non-self-verifying data. Self-verifying data is data which, once computed on in a certain way, in some sense “proves” its own validity. For example, if a given decentralized corporation is looking for prime numbers containing the sequence ‘123456789’, then one can simply feed in ‘12345678909631’ and the corporation can computationally verify that the number is indeed prime. The current temperature in Berlin, on the other hand, is not self-verifying at all; it could be 11°C, but it could also just as easily be 17°C, or even 231°C; without outside data, all three values seem equally legitimate.
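
    The prime example is easy to make concrete: with a standard deterministic Miller-Rabin test (valid for numbers of this size with the witness bases below), anyone can check the submitted claim without trusting its submitter:

```python
# Self-verifying data: checking that a submitted number contains the target
# digit sequence and is prime. The twelve witness bases below make
# Miller-Rabin deterministic for all numbers up to ~3.3e24.

def is_prime(n):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

candidate = 12345678909631
print('123456789' in str(candidate) and is_prime(candidate))
```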

    Bitcoin is an interesting case to look at. In the Bitcoin system, transactions are partially self-verifying. The concept of a “correctly signed” transaction is entirely self-verifying; if the transaction’s signature passes the elliptic curve digital signature verification algorithm, then the transaction is valid. In theory, you might claim that the transaction’s signature correctness depends on the public key in the previous transaction; however, this actually does not at all detract from the self-verification property – the transaction submitter can always be required to submit the previous transaction as well. However, there is something that is not self-verifying: time. A transaction cannot spend money before that money was received and, even more crucially, a transaction cannot spend money that has already been spent. Given two transactions spending the same money, either one could have theoretically come first; there is no way to self-verify the validity of one history over the other.

    Bitcoin essentially solves the time problem with a computational democracy. If the majority of the network agrees that events happened in a certain order, then that order is taken as truth, and the incentive is for every participant in this democratic process to participate honestly; if any participant does not, then unless the rogue participant has more computing power than the rest of the network put together, their own version of the history will always be a minority opinion, and thus rejected, depriving the miscreant of their block revenue.
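
    A minimal sketch of this rule (the work figures are made up for illustration): every honest node adopts whichever competing history is backed by the most cumulative work, so a minority attacker’s alternative history is simply ignored.

```python
# Toy "computational democracy": pick the history with the most cumulative
# work behind it; an attacker with a minority of the work always loses.

def choose_history(histories):
    """Each history is a (cumulative_work, events) pair; pick the heaviest."""
    return max(histories, key=lambda h: h[0])

honest_chain = (95, ["A pays B", "B pays C"])
attacker_chain = (5, ["A pays A2"])  # double-spend attempt with 5% of the work

work, events = choose_history([honest_chain, attacker_chain])
print(events)  # ['A pays B', 'B pays C']
```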

    In a more general case, the fundamental idea that we can glean from the blockchain concept is this: we can use some kind of resource-democracy mechanism to vote on the correct value of some fact, and ensure that people are incentivized to provide accurate estimates by depriving everyone whose report does not match the “mainstream view” of their monetary reward. The question is, can this same concept be applied elsewhere as well? One improvement to Bitcoin that many would like to see, for example, is a form of price stabilization; if Bitcoin could track its own price in terms of other currencies or commodities, the algorithm could release more bitcoins if the price is high and fewer if the price is low – naturally stabilizing the price and reducing the massive spikes that the current system experiences. However, so far, no one has yet figured out a practical way of accomplishing such a thing. But why not?

    The answer is one of precision. It is certainly possible to design such a protocol in theory: miners can put their own view of what the Bitcoin price is in each block, and an algorithm using that data could fetch it by taking the median of the last thousand blocks. Miners that are not within some margin of the median would be penalized. However, the problem is that the miners have every incentive, and substantial wiggle room, to commit fraud. The argument is this: suppose that the actual Bitcoin price is 114 USD, and you, being a miner with some substantial percentage of network power (eg. 5%), know that there is a 99.99% chance that 113 to 115 USD will be inside the safe margin, so if you report a number within that range your blocks will not get rejected. What should you say that the Bitcoin price is? The answer is, something like 115 USD. The reason is that if you put your estimate higher, the median that the network provides might end up being 114.05 USD instead of 114 USD, and the Bitcoin network will use this information to print more money – increasing your own future revenue in the process at the expense of existing savers. Once everyone does this, even honest miners will feel the need to adjust their estimates upwards to protect their own blocks from being rejected for having price reports that are too low. At that point, the cycle repeats: the price is 114 USD, you are 99.99% sure that 114 to 116 USD will be within the safe margin, so you put down the answer of 116 USD. One cycle after that, 117 USD, then 118 USD, and before you know it the entire network collapses in a fit of hyperinflation.
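
    A toy simulation (all parameters assumed) makes the ratchet explicit: if every miner reports at the top of the safe margin around the current median, the median itself creeps upward every round even though the true price never moves.

```python
# Toy model of the reporting ratchet: reports within +/- margin of the
# current median are "safe", so strategic miners all report median + margin,
# and the consensus price inflates round after round.
import statistics

true_price = 114.0
margin = 1.0          # reports within +/- margin of the median are accepted
median = true_price   # the initial consensus matches reality

for round_number in range(1, 6):
    # every miner pushes its report to the highest value that is still safe
    reports = [median + margin] * 1000
    median = statistics.median(reports)
    print(f"round {round_number}: reported median = {median:.0f} USD")

# round 1: 115, round 2: 116, ... while the true price stays at 114
```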

    The above problem arose specifically from two facts: first, there is a range of acceptable possibilities with regard to what the price is and, second, the voters have an incentive to nudge the answer in one direction. If, instead of proof of work, proof of stake was used (ie. one bitcoin = one vote instead of one clock cycle = one vote), then the opposite problem would emerge: everyone would bid the price down since stakeholders do not want any new bitcoins to be printed at all. Can proof of work and proof of stake perhaps be combined to somehow solve the problem? Maybe, maybe not.

    There is also another potential way to resolve this problem, at least for applications that are higher-level than the underlying currency: look not at reported market prices, but at actual market prices. Assume, for example, that there already exists a system like Ripple (or perhaps something based on colored coins) that includes a decentralized exchange between various cryptographic assets. Some might be contracts representing assets like gold or US dollars, others company shares, others smart property, and there would obviously also be trust-free cryptocurrency similar to Bitcoin as well. Thus, in order to defraud the system, malicious participants would not simply need to report prices that are slightly incorrect in their favored direction, but would need to push the actual prices of these goods as well – essentially, a LIBOR-style price fixing conspiracy. And, as the experiences of the last few years have shown, LIBOR-style price fixing conspiracies are something that even human-controlled systems cannot necessarily overcome.

    Furthermore, this fundamental weakness – the difficulty of capturing accurate prices without a crypto-market – is far from universal. In the case of prices, there is definitely much room for corruption, and the above does not even begin to describe the full extent of corruption possible. If we expect Bitcoin to last much longer than individual fiat currencies, for example, we might want the currency generation algorithm to be concerned with Bitcoin’s price in terms of commodities, and not individual currencies like the USD, leaving the question of exactly which commodities to use wide open to “interpretation”. However, in most other cases no such problems exist. If we want a decentralized database of weather in Berlin, for example, there is no serious incentive to fudge it in one direction or the other. Technically, if decentralized corporations started getting into crop insurance this would change somewhat, but even there the risk would be smaller, since there would be two groups pulling in opposite directions (namely, farmers who want to pretend that there are droughts, and insurers who want to pretend that there are not). Thus, a decentralized weather network is, even with the technology of today, an entirely possible thing to create.

    Acting On The World

    With some kind of democratic voting protocol, we reasoned above, it’s possible for a decentralized corporation to learn facts about the world. However, is it also possible to do the opposite? Is it possible for a corporation to actually influence its environment in ways more substantial than just sitting there and waiting for people to assign value to its database entries as Bitcoin does? The answer is yes, and there are several ways to accomplish the goal. The first, and most obvious, is to use APIs. An API, or application programming interface, is an interface specifically designed to allow computer programs to interact with a particular website or other software program. For example, sending an HTTP GET request to http://blockchain.info/address/1AEZyM6pXy1gxiqVsRLFENJLhDjbCj4FJz?format=json sends an instruction to blockchain.info’s servers, which then give you back a file containing the latest transactions to and from the Bitcoin address 1AEZyM6pXy1gxiqVsRLFENJLhDjbCj4FJz in a computer-friendly format. Over the past ten years, as business has increasingly migrated onto the internet, the number of services that are accessible by API has been rapidly increasing. We have internet search, weather, online forums, stock trading, and more APIs are being created every year. With Bitcoin, we have one of the most critical pieces of all: an API for money.

    However, there still remains one critical, and surprisingly mundane, problem: it is currently impossible to send an HTTP request in a decentralized way. The request must eventually be sent to the server all in one piece, and that means that it must be assembled in its entirety, somewhere. For requests whose only purpose is to retrieve public data, like the blockchain query described above, this is not a serious concern; the problem can be solved with a voting protocol. However, if the API requires a private API key to access, as all APIs that automate activities like purchasing resources necessarily do, having the private key appear in its entirety, in plaintext, anywhere but at the final recipient, immediately compromises the private key’s privacy. Requiring requests to be signed alleviates this problem; signatures, as we saw above, can be done in a decentralized way, and signed requests cannot be tampered with. However, this requires additional effort on the part of API developers to accomplish, and so far we are nowhere near adopting signed API requests as a standard.
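
    As a sketch of what signed requests buy us (the endpoint, header names, and shared secret here are hypothetical; a decentralized corporation would produce the signature via multiparty computation rather than holding the secret in one place), an HMAC-signed request keeps the secret key off the wire entirely, and any tampering with the request invalidates the signature:

```python
# Hypothetical signed-API-request sketch: only an HMAC over the request
# contents travels with the request, never the secret itself.
import hashlib
import hmac
import time

API_SECRET = b"example-shared-secret"  # hypothetical; never sent over the wire

def sign_request(method, path, body, secret):
    # a timestamp ensures a captured request cannot be replayed much later
    timestamp = str(int(time.time()))
    message = "\n".join([method, path, body, timestamp]).encode()
    signature = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

def verify_request(method, path, body, headers, secret):
    message = "\n".join([method, path, body, headers["X-Timestamp"]]).encode()
    expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])

headers = sign_request("POST", "/v1/purchase", '{"item": "hosting"}', API_SECRET)
assert verify_request("POST", "/v1/purchase", '{"item": "hosting"}', headers, API_SECRET)
# a tampered body no longer matches the signature
assert not verify_request("POST", "/v1/purchase", '{"item": "server"}', headers, API_SECRET)
```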

    Even with that issue solved, another issue still remains. Interacting with an API is no challenge for a computer program to do; however, how does the program learn about that API in the first place? How does it handle the API changing? What about the corporation running a particular API going down outright, and others coming in to take its place? What if the API is removed, and nothing exists to replace it? Finally, what if the decentralized corporation needs to change its own source code? These are problems that are much more difficult for computers to solve. To this, there is only one answer: rely on humans for support. Bitcoin heavily relies on humans to keep it alive; we saw in March 2013 how a blockchain fork required active intervention from the Bitcoin community to fix, and Bitcoin is one of the most stable decentralized computing protocols that can possibly be designed. Even if a 51% attack happens, a blockchain fork splits the network into three, and a DDoS takes down the five major mining pools all at the same time, once the smoke clears some blockchain is bound to come out ahead, the miners will organize around it, and the network will simply keep on going from there. More complex corporations are going to be much more fragile; if a money-holding network somehow leaks its private keys, the result is that it goes bankrupt.

    But how can humans be used without trusting them too much? If the humans in question are only given highly specific tasks that can easily be measured, like building the fastest possible miner, then there is no issue. However, the tasks that humans will need to do are precisely those tasks that cannot so easily be measured; how do you figure out how much to reward someone for discovering a new API? Bitcoin solves the problem by simply removing the complexity by going up one layer of abstraction: Bitcoin’s shareholders benefit if the price goes up, so shareholders are encouraged to do things that increase the price. In fact, in the case of Bitcoin an entire quasi-religion has formed around supporting the protocol and helping it grow and gain wider adoption; it’s hard to imagine every corporation having anything close to such a fervent following.

    Hostile Takeovers

    Alongside the “future proofing” problem, there is also another issue that needs to be dealt with: that of “hostile takeovers”. This is the equivalent of a 51% attack in the case of Bitcoin, but the stakes are higher. A hostile takeover of a corporation handling money means that the attacker gains the ability to drain the corporation’s entire wallet. A hostile takeover of Decentralized Dropbox, Inc means that the attacker can read everyone’s files (although hopefully the files are encrypted; even so, the attacker can still deny everyone access to their files). A hostile takeover of a decentralized web hosting company can lead to massive losses not just for those who have websites hosted, but also their customers, as the attacker gains the ability to modify web pages to also send off customers’ private data to the attacker’s own server as soon as each customer logs in. How might a hostile takeover be accomplished? In the case of the 501-out-of-1000 private key situation, the answer is simple: pretend to be a few thousand different servers at the same time, and join the corporation with all of them. By forwarding communications through millions of computers infected by a botnet, this is easy to accomplish without being detected. Then, once you have more than half of the servers in the network, you can immediately proceed to cash out.

    Fortunately, the presence of Bitcoin has created a number of solutions, of which the proof of work used by Bitcoin itself is only one. Because Bitcoin is a perfect API for money, any kind of protocol involving monetary scarcity and incentives is now available for computer networks to use. Proof of stake, requiring each participating node to show proof that it controls, say, 100 BTC is one possible solution; if that is done, then implementing a hostile takeover would require more resources than all of the legitimate nodes committed together. The 100 BTC could even be moved to a multisignature address partially controlled by the network as a surety bond, both discouraging nodes from cheating and giving their owners a great incentive to act and even get together to keep the corporation alive.

    Another alternative might simply be to allow the decentralized corporation to have shareholders, so that shareholders get some kind of special voting privileges, along with the right to a share of the profits, in exchange for investing; this too would encourage the shareholders to protect their investment. Making a more fine-grained evaluation of an individual human employee is likely impossible; the best solution is likely to simply use monetary incentives to direct people’s actions on a coarse level, and then let the community self-organize to make the fine-grained adjustments. The extent to which a corporation targets a community for investment and participation, rather than discrete individuals, is the choice of its original developers. On the one hand, targeting a community can allow your human support to work together to solve problems in large groups. On the other hand, keeping everyone separate prevents collusion, and in that way reduces the likelihood of a hostile takeover.

    Thus, what we have seen here is that very significant challenges still remain before any kind of decentralized corporation can be viable. The problem will likely be solved in layers. First, with the advent of Bitcoin, a self-supporting layer of cryptographic money exists. Next, with Ripple and colored coins, we will see crypto-markets emerge, that can then be used to provide crypto-corporations with accurate price data. At the same time, we will see more and more crypto-friendly APIs emerge to serve decentralized systems’ needs. Such APIs will be necessary regardless of whether decentralized corporations will ever exist; we see today just how difficult cryptographic keys are to keep secure, so infrastructure suitable for multiparty signing will likely become a necessity. Large certificate signing authorities, for example, hold private keys that would result in hundreds of millions of dollars worth of security breaches if they were ever to fall into the wrong hands, and so these organizations often employ some form of multiparty signing already.

    Finally, it will still take time for people to work out exactly how these decentralized corporations would operate. Computer software is increasingly becoming the single most important building block of our modern world, but up until now research in the area has focused on two extremes: artificial intelligence (software working purely on its own) and software tools working under the direction of human beings. The question is: is there something in the middle? If there is, the decentralized corporation – software directing humans – is exactly that. Contrary to fears, this would not be an evil heartless robot imposing an iron fist on humanity; in fact, the tasks that the corporation will need to outsource are precisely those that require the most human freedom and creativity. Let’s see if it’s possible.

    See also:

    http://bitcoinmagazine.com/7050/bootstrapping-a-decentralized-autonomous-corporation-part-i/

    http://bitcoinmagazine.com/7235/bootstrapping-a-decentralized-autonomous-corporation-part-3-identity-corp/

    Supplementary reading: Jeff Garzik’s article on one practical example of what an autonomous corporation might be useful for


  • Bootstrapping A Decentralized Autonomous Corporation: Part I | Ethereum Foundation Blog



    Corporations, US presidential candidate Mitt Romney reminds us, are people. Whether or not you agree with the conclusions that his partisans draw from that claim, the statement certainly carries a large amount of truth. What is a corporation, after all, but a certain group of people working together under a set of specific rules? When a corporation owns property, what that really means is that there is a legal contract stating that the property can only be used for certain purposes under the control of those people who are currently its board of directors – a designation itself modifiable by a particular set of shareholders. If a corporation does something, it’s because its board of directors has agreed that it should be done. If a corporation hires employees, it means that the employees are agreeing to provide services to the corporation’s customers under a particular set of rules, particularly involving payment. When a corporation has limited liability, it means that specific people have been granted extra privileges to act with reduced fear of legal prosecution by the government – a group of people with more rights than ordinary people acting alone, but ultimately people nonetheless. In any case, it’s nothing more than people and contracts all the way down.

    However, here a very interesting question arises: do we really need the people? On the one hand, the answer is yes: although in some post-Singularity future machines will be able to survive all on their own, for the foreseeable future some kind of human action will simply be necessary to interact with the physical world. On the other hand, however, over the past two hundred years the answer has been increasingly no. The industrial revolution allowed us, for the first time, to start replacing human labor with machines on a large scale, and now we have advanced digitized factories and robotic arms that produce complex goods like automobiles all on their own. But this is only automating the bottom: removing the need for rank and file manual laborers, and replacing them with a smaller number of professionals to maintain the robots, while the management of the company remains untouched. The question is, can we approach the problem from the other direction: even if we still need human beings to perform certain specialized tasks, can we remove the management from the equation instead?

    Most companies have some kind of mission statement; often it’s about making money for shareholders; at other times, it includes some moral imperative to do with the particular product that they are creating, and other goals like helping communities sometimes enter the mix, at least in theory. Right now, that mission statement exists only insofar as the board of directors, and ultimately the shareholders, interpret it. But what if, with the power of modern information technology, we can encode the mission statement into code; that is, create an inviolable contract that generates revenue, pays people to perform some function, and finds hardware for itself to run on, all without any need for top-down human direction?

    As Let’s Talk Bitcoin’s Daniel Larimer pointed out in his own exploration of this concept, in a sense Bitcoin itself can be thought of as a very early prototype of exactly such a thing. Bitcoin has 21 million shares, and these shares are owned by what can be considered Bitcoin’s shareholders. It has employees, and it has a protocol for paying them: 25 BTC to one random member of the workforce roughly every ten minutes. It even has its own marketing department, to a large extent made up of the shareholders themselves. However, it is also very limited. It knows almost nothing about the world except for the current time, it has no way of changing any aspect of its function aside from the difficulty, and it does not actually do anything per se; it simply exists, and leaves it up to the world to recognize it. The question is: can we do better?

    Computation

    The first challenge is obvious: how would such a corporation actually make any decisions? It’s easy to write code that, at least given predictable environments, takes a given input and calculates a desired action to take. But who is going to run the code? If the code simply exists as a computer program on some particular machine, what is stopping the owner of that machine from shutting the whole thing down, or even modifying its code to make it send all of its money to himself? To this problem, there is only one effective answer: distributed computing.

    However, the kind of distributed computing that we are looking for here is not the same as the distributed computing in projects like SETI@home and Folding@home; in those cases, there is still a central server collecting data from the distributed nodes and sending out requests. Here, rather, we need the kind of distributed computing that we see in Bitcoin: a set of rules that decentrally self-validates its own computation. In Bitcoin, this is accomplished by a simple majority vote: if you are not helping to compute the blockchain with the majority network power, your blocks will get discarded and you will get no block reward. The theory is that no single attacker will have enough computer power to subvert this mechanism, so the only viable strategy is essentially to “go with the flow” and act honestly to help support the network and receive one’s block reward. So can we simply apply this mechanism to decentralized computation? That is, can we simply ask every computer in the network to evaluate a program, and then reward only those whose answer matches the majority vote? The answer is, unfortunately, no. Bitcoin is a special case because Bitcoin is simple: it is just a currency, carrying no property or private data of its own. A virtual corporation, on the other hand, would likely need to store the private key to its Bitcoin wallet – a piece of data which should be available in its entirety to no one, not to everyone in the way that Bitcoin transactions are. But, of course, the private key must still be usable. Thus, what we need is some system of signing transactions, and even generating Bitcoin addresses, that can be computed in a decentralized way. Fortunately, Bitcoin allows us to do exactly that.

    The first solution that might immediately come to mind is multisignature addresses; given a set of a thousand computers that can be relied upon to probably continue supporting the corporation, have each of them create a private key, and generate a 501-of-1000 multisignature address between them. To spend the funds, simply construct a transaction with signatures from any 501 nodes and broadcast it into the blockchain. The problems here are obvious. First, the transaction would be too large: each signature makes up about seventy bytes, so 501 of them would make a 35 KB transaction – which is very difficult to get accepted into the network, as bitcoind by default refuses transactions with any script above 10,000 bytes. Second, the solution is specific to Bitcoin; if the corporation wants to store private data for non-financial purposes, multisignature scripts are useless. Multisignature addresses work because there is a Bitcoin network evaluating them, and placing transactions into the blockchain depending on whether or not the evaluation succeeds. In the case of private data, an analogous solution would essentially require some decentralized authority to store the data and give it out only if a request has 501 out of 1000 signatures as needed – putting us right back where we started.

    However, there is still hope in another solution; the general name given to this by cryptographers is “secure multiparty computation”. In secure multiparty computation, the inputs to a program (or, more precisely, the inputs to a simulated “circuit”, as secure multiparty computation cannot handle “if” statements and conditional looping) are split up using an algorithm called Shamir’s Secret Sharing, and a piece of the information is given to each participant. Shamir’s Secret Sharing can be used to split up any data into N pieces such that any K of them, but no K-1 of them, are sufficient to recover the original data – you choose what K and N are when running the algorithm. 2-of-3, 5-of-10 and 501-of-1000 are all possible. A circuit can then be evaluated on the pieces of data in a decentralized way, such that at the end of the computation everyone has a piece of the result of the computation, but at no point during the computation does any single individual get even the slightest glimpse of what is going on. Finally, the pieces are put together to reveal the result. The runtime of the algorithm is O(n³), meaning that the number of computational steps that it takes to evaluate a computation is roughly proportional to the cube of the number of participants: at 10 nodes, 1000 computational steps, and at 1000 nodes, 1 billion steps. A simple billion-step loop in C++ takes about twenty seconds on my own laptop, and servers can do it in a fraction of a second, so 1000 nodes is currently roughly at the limit of computational practicality.
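    The K-of-N splitting step itself is short enough to sketch. The following is a minimal illustration of Shamir’s Secret Sharing over a prime field – the prime, threshold and share count are illustrative choices, not parameters from any real deployment:

```python
# Shamir's Secret Sharing: split a secret into N shares so that any K
# of them recover it, but K-1 reveal nothing.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy secret

def split(secret, k, n):
    """Split `secret` into n shares; any k of them can recover it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x = 0 to get the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 shares suffice
assert recover(shares[1:4]) == 123456789  # ...any 3, not a fixed set
```

    Note that recovery uses only addition, subtraction, multiplication and division (modular inverse) – the “just algebra” property that the rest of this article relies on.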

    As it turns out, secure multiparty computation can be used to generate Bitcoin addresses and sign transactions. For address generation, the protocol is simple:

    1. Everyone generates a random number as a private key.
    2. Everyone calculates the public key corresponding to the private key.
    3. Everyone reveals their public key, and uses Shamir’s Secret Sharing algorithm to calculate a public key that can be reconstructed from any 501 of the thousand public keys revealed.
    4. An address is generated from that public key.

    Because public keys can be added, subtracted, multiplied and even divided by integers, surprisingly this algorithm works exactly as you would expect. If everyone were to then put together a 501-of-1000 private key in the same way, that private key would be able to spend the money sent to the address generated by applying the 501-of-1000 algorithm to the corresponding public keys. This works because Shamir’s Secret Sharing is really just an algebraic formula – that is to say, it uses only addition, subtraction, multiplication and division – and one can compute this formula “over” public keys just as easily as over private keys; as a result, it doesn’t matter whether the private-key-to-public-key conversion is done before the algebra or after it. Signing transactions can be done in a similar way, although the process is somewhat more complicated.
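    This “algebra commutes with key derivation” claim can be demonstrated concretely. Real Bitcoin keys live on the secp256k1 elliptic curve; as an assumption for illustration, the toy group below (g = 4 of prime order q = 1019 modulo p = 2039) stands in for the curve, since the algebra is the same: a public key is g raised to the private key. A 3-of-5 scheme stands in for 501-of-1000:

```python
# Demonstration that Shamir recombination commutes with the
# private-key -> public-key map: interpolating the *public* shares
# "in the exponent" yields the public key of the joint secret,
# even though no one ever assembles the joint private key.
import random

P, Q, G = 2039, 1019, 4   # toy group: G has prime order Q modulo P

def split(secret, k, n):
    """Shamir-split `secret` over GF(Q) into n shares, threshold k."""
    coeffs = [secret] + [random.randrange(Q) for _ in range(k - 1)]
    f = lambda x: sum(c * x**i for i, c in enumerate(coeffs)) % Q
    return [(x, f(x)) for x in range(1, n + 1)]

def lagrange_at_zero(xs, q):
    """Lagrange coefficients for interpolating at x = 0 over GF(q)."""
    coeffs = []
    for xi in xs:
        num, den = 1, 1
        for xj in xs:
            if xj != xi:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        coeffs.append(num * pow(den, -1, q) % q)
    return coeffs

secret = random.randrange(Q)
shares = split(secret, k=3, n=5)

# Each participant reveals only g^share, never the share itself...
public_shares = [(x, pow(G, s, P)) for x, s in shares]

# ...yet combining any 3 public shares "in the exponent" yields
# exactly the public key corresponding to the joint secret.
subset = public_shares[:3]
lams = lagrange_at_zero([x for x, _ in subset], Q)
joint_pub = 1
for (x, ps), lam in zip(subset, lams):
    joint_pub = joint_pub * pow(ps, lam, P) % P

assert joint_pub == pow(G, secret, P)
```

    The same identity holds on secp256k1 with exponentiation replaced by scalar multiplication of curve points; real threshold-signing protocols add further machinery on top of this basic fact.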

    The beauty of secure multiparty computation is that it extends beyond just Bitcoin; it can just as easily be used to run the artificial intelligence algorithm that the corporation relies on to operate. So-called “machine learning”, the common name for a set of algorithms that detect patterns in real-world data and allow computers to model it without explicit human guidance, and which is employed heavily in fields like spam filtering and self-driving cars, is also “just algebra”, and can be implemented in secure multiparty computation as well. Really, any computation can, if that computation is broken down into a circuit on the input’s individual bits. There is naturally some limit to the complexity that is possible; converting complex algorithms into circuits often introduces additional complexity, and, as described above, Shamir’s Secret Sharing can get expensive all by itself. Thus, it should only really be used to implement the “core” of the algorithm; more complex high-level thinking tasks are best resolved by outside contractors.

    Excited about this topic? Look forward to parts 2, 3 and 4: how decentralized corporations can interact with the outside world, how some simple secure multiparty computation circuits work on a mathematical level, and two examples of how these decentralized corporations can make a difference in the real world.

    See also:

    http://letstalkbitcoin.com/is-bitcoin-overpaying-for-false-security/

    http://bitcoinmagazine.com/7119/bootstrapping-an-autonomous-decentralized-corporation-part-2-interacting-with-the-world/

    http://bitcoinmagazine.com/7235/bootstrapping-a-decentralized-autonomous-corporation-part-3-identity-corp/
