Please allow me to share some research that I've done into the topic of increasing the block size maximum, and the opinions that I've formed along the way1.
- The average block is nowhere near the 1MB limit.
- 32 MB is the absolute upper limit of network message size2.
- Conformal estimates that a 32 MB block can hold 167,000 transactions3
Specific notes on Gavin Andresen's "Blocksize Economics":
- Andresen claims:
…economic theory says that in a competitive market, supply, demand, and price will find an equilibrium where the price is equal to the marginal cost to suppliers plus some net income (because suppliers can always choose to do something more profitable with their time or money).
In the case of Bitcoin, this is categorically false: Bitcoin miners must purchase ASICs4. To use the claim that miners can pack their bags and do other things to justify this other claim that "the market will prevail/provide" is a complete farce.
- In the linked piece, Andresen never mentions the actual reason that Satoshi imposed a block limit. Oleg Andreev, however, does:
Huge blocks could lead to excessive use of bandwidth which could lead to higher percentage of orphaned blocks due to higher synchronization delays.
If I'm reading history correctly, Satoshi imposed a maximum block size of 1MB to address the Denial of Service attacks performed against the Bitcoin network using arbitrarily large blocks (~2010), when blocks could still be minted by dilettantes with CPUs.
- Andresen claims:
Transaction confirmation speed is important for most small-value transactions, so it is likely they will be secured using semi-trusted third parties who co-sign transactions and guarantee to never allow double-spending.
That third party is the blockchain. To propose off-chain solutions for a non-problem suggests the proposer is dangerously out of touch with actual blockchain economics. The blockchain is secured by the means provided in Bitcoin - nothing less, and nothing more. This claim could be reworded thusly, without losing any content and gaining some clarity:
Some holders of Bitcoin trust Coinbase to both hold their coins and transfer them to other Coinbase holders. All such "holders" of "Bitcoins" leverage Coinbase's advanced database technologies to broadcast transactions within the Coinbase network and further trust Coinbase to never allow double spends.
Trusting any party but the Bitcoin network itself to secure transactions is the height of folly.
Specific notes on Gavin Andresen's "A Scalability Roadmap":
- the "'pruned' block database":
The actual blockchain cannot be pruned and still remain a blockchain5. What is proposed is that clients maintain a "database" of unspent transaction outputs in (lossy, easily corruptible) memory; a database against which new blocks and new transactions can be compared to confirm that every input in the new blocks spends an output still sitting in the unspent pool (referred to in the literature as the UTXO or, as close as I can tell, "unspent transaction output").
Missing from this Pull Request is an answer to the question of "what should a client do upon seeing a transaction whose inputs are not in the UTXO set?". Charitably, I could not possibly say. Uncharitably, I must assume that the USG is exerting some influence on those who "maintain" "Bitcoin Core" in order to degrade the performance of as many full nodes in the wild as possible. Only with the network (fragile and full of indeterminate behavior as it is) weakened even further does that organization stand the slightest chance of imposing their will upon it.
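To make the mechanics concrete, here is a minimal sketch of validation against such an unspent-output pool. All names here are hypothetical illustrations; a transaction is modeled as a (txid, inputs, outputs) tuple, which is not Bitcoin Core's actual data model:

```python
# Hypothetical sketch of UTXO-based validation. A transaction is a
# (txid, inputs, outputs) tuple; each input names a previous output
# by (txid, index). Illustrative only, not Bitcoin Core's structures.

def validate_against_utxo(tx, utxo):
    """Return True iff every input of tx spends a known unspent output."""
    txid, inputs, outputs = tx
    # Every input must reference an entry currently in the UTXO set...
    if not all(prev in utxo for prev in inputs):
        return False  # unknown input: orphan, or a double-spend attempt
    # ...and applying the tx removes the spent outputs and adds the new ones.
    for prev in inputs:
        del utxo[prev]
    for index, value in enumerate(outputs):
        utxo[(txid, index)] = value
    return True

# Toy usage: one coinbase-style output, then a spend of it.
utxo = {("coinbase0", 0): 50}
ok = validate_against_utxo(("tx1", [("coinbase0", 0)], [25, 25]), utxo)
bad = validate_against_utxo(("tx2", [("nonexistent", 0)], [10]), utxo)
print(ok, bad)  # True False
```

Note that the sketch itself begs the very question above: when `validate_against_utxo` returns False, nothing says what the client is to do next.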
- Andresen claims:
You might be surprised that old blocks aren't needed to validate new transactions. Pieter Wuille re-architected Bitcoin Core a few releases ago so that all of the data needed to validate transactions is kept in a 'UTXO' (unspent transaction output) database.
For a node to be considered a full node, it must validate transactions against a blockchain that the node itself has validated in its entirety. Anything shy of that is not a full node, and it does not actually validate transactions. Should the pruning be implemented on the basis of a UTXO argument, nodes will be forced to download the blockchain in its entirety to validate transactions about which they're unsure. Why then bother?
- Andresen notes the validity of these claims, almost pooh-poohing them:
a proposal from Mark Friedenbach for how to embed such a commitment [ed: a UTXO hashing scheme to enable peers to ask for the UTXO set instead of the full blockchain6] in blocks hasn't reached consensus, and neither have discussions about exactly how the UTXO set should be represented and hashed.
- Andresen assumes the sale7 on the extremely contentious notion of increasing the block size:
The next scaling problem that needs to be tackled is the hardcoded 1-megabyte block size limit that means the network can suppor [sic] only approximately 7-transactions-per-second.
First off, I don't agree that the Bitcoin protocol must support any more transactions per second than it currently does. Consider that average block sizes are currently a third of their theoretical maximum, and that few people are doing much to minimize their transaction sizes. Before we discuss diddling our scarcity numbers, let's actually watch the network's behavior at steady state when it does bump up against those numbers.
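For the record, the "7 transactions per second" figure is nothing more than arithmetic over the 1 MB cap, the ten-minute block interval, and a roughly 250-byte average transaction (the neighborhood footnote 3 lands in):

```python
# Back-of-the-envelope source of the "~7 tx/s" figure: 1 MB blocks,
# one block every ~600 seconds, average transaction ~250 bytes.
MAX_BLOCK_BYTES = 1_000_000
BLOCK_INTERVAL_S = 600
AVG_TX_BYTES = 250  # rough average; see footnote 3

tx_per_block = MAX_BLOCK_BYTES // AVG_TX_BYTES   # 4000
tx_per_second = tx_per_block / BLOCK_INTERVAL_S  # ~6.7
print(tx_per_block, round(tx_per_second, 1))
```

Wiggle the average transaction size and the figure wiggles with it, which is exactly why minimizing transaction sizes deserves attention before diddling the cap does.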
- A myth that Andresen promulgates:
- Connect to peers, just as is done today.
- Download headers for the best chain from its peers (tens of megabytes; will take at most a few minutes)
- Download enough full blocks to handle any reasonable blockchain re-organization (a few hundred should be plenty, which will take perhaps an hour).
- Ask a peer for the UTXO set, and check it against the commitment made in the blockchain.
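That last step, "check it against the commitment", could only ever mean something like the following sketch. I assume here a naive hash-over-the-whole-set commitment purely for illustration; Friedenbach's actual proposal involves a more elaborate hashing scheme and, as Andresen himself concedes above, hasn't reached consensus:

```python
import hashlib

def utxo_commitment(utxo):
    """Naive commitment: hash a canonical serialization of the UTXO set.
    (Friedenbach's proposal is more sophisticated; this is only a sketch.)"""
    h = hashlib.sha256()
    for (txid, index), value in sorted(utxo.items()):
        h.update(f"{txid}:{index}:{value}".encode())
    return h.hexdigest()

# A peer hands us a claimed UTXO set; we trust it only if its hash
# matches the commitment we read out of the header-validated chain.
committed = utxo_commitment({("tx1", 0): 25, ("tx1", 1): 25})
peer_set = {("tx1", 1): 25, ("tx1", 0): 25}
assert utxo_commitment(peer_set) == committed  # order-independent match

tampered = {("tx1", 0): 26, ("tx1", 1): 25}
assert utxo_commitment(tampered) != committed
print("commitment check ok")
```

Observe what's bought here: the node trusts that whoever mined the committing block computed the set honestly, rather than verifying the chain's history for itself.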
This myth misses a core point about Bitcoin: it's not something that one just discovers, dabbles in, and, voilà, can run with the big dogs of. It's something you approach carefully, quietly, and humbly, for it is so much larger and scarier than yourself.
The analogy that I use when talking about it with civilians is that it's radioactive money. Hard dollars in a bank account are great, sure; but when I can point to my (negligible) bitcoin stash and say that "this is thus and such fraction of all the BTC that will ever exist", that's downright radioactive money technology compared to the dollar. Dollars waste away every year unless one sticks them in inflation-resistant assets, and Bitcoin simply does no such thing. Radioactive, I tell you.
In this story, taking a week or even a month to sync the blockchain should be no big deal. If you need to use Bitcoins quickly and you've never set your tooling up, you'll simply be shit out of luck. Doing well in this life requires planning, foresight, and execution. Diddling Bitcoin to hold the hands of people who lack foresight and the ability to plan and subsequently execute is a recipe for no soup I'll dine on.
I've known for years that 21 million was a magical number - that is the sum of all Bitcoins that could ever exist - and on this research trip I've learned that there is a second magic number: 1 megabyte. The fixed and predictable supply of Bitcoins means that we can all evaluate our holdings in terms of both today's circulation and tomorrow's circulation. Were the USG or any other fiat institution to mess with that 21M, they'd be able to enslave the whole world all over again by creating more money out of thin air. As difficult as it is to predict the ROI of mining, let's not give the miners yet another headache in their simulations, that of estimating the maximum block size and the demand for transactions at some point in the future.
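That first magic number is itself nothing but arithmetic: a 50 BTC subsidy halving every 210,000 blocks, summed in integer satoshi, comes out a hair under 21 million:

```python
# The 21,000,000 cap is the geometric sum of the block subsidy:
# 50 BTC per block, halving every 210,000 blocks, in integer satoshi.
HALVING_INTERVAL = 210_000
SATOSHI = 100_000_000  # subsidies are integers of satoshi

subsidy = 50 * SATOSHI
total = 0
while subsidy > 0:
    total += HALVING_INTERVAL * subsidy
    subsidy //= 2  # integer halving, so the tail truncates to zero

print(total)  # 2099999997690000 satoshi, i.e. 20,999,999.9769 BTC
```

No committee meets to adjust that number; it falls out of two constants everybody can check. The same ought to hold for the megabyte.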
Andresen has this vision to convey about the future of the block size:
Roll out a hard fork that increases the maximum block size, and implements a rule to increase that size over time, very similar to the rule that decreases the block reward over time.
Choose the initial maximum size so that a 'Bitcoin hobbyist' can easily participate as a full node on the network. By 'Bitcoin hobbyist' I mean somebody with a current, reasonably fast computer and Internet connection, running an up-to-date version of Bitcoin Core and willing to dedicate half their CPU power and bandwidth to Bitcoin.
And choose the increase to match the rate of growth of bandwidth over time: 50% per year for the last twenty years. Note that this is less than the approximately 60% per year growth in CPU power; bandwidth will be the limiting factor for transaction volume for the foreseeable future.
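To see what that rule actually does to the second magic number, compound it. The starting size below is a placeholder of my choosing, as the quoted text fixes no initial value:

```python
# Compounding Andresen's proposed 50%-per-year block size growth.
# START_MB is a placeholder assumption; the quoted roadmap text
# does not fix an initial size.
START_MB = 1.0    # hypothetical starting cap, in megabytes
GROWTH = 1.5      # 50% per year

for years in (1, 5, 10, 20):
    size = START_MB * GROWTH ** years
    print(f"after {years:2d} years: {size:10.1f} MB")
```

Twenty years of the rule multiplies the cap more than three-thousand-fold, blowing through the 32 MB message limit of footnote 2 before the first decade is out.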
Andresen explicitly seeks to slip fiat dynamics into Bitcoin via block sizes, disregarding hard limits in the P2P protocol and the importance of holding the line on the ideology of scarce resources.
Enshrine a second number of scarcity: 1 megabyte. The alternative is an ever-increasing block size; a world with no scarcity in transaction space; and negligible mining fees after the block reward goes to zero. In a world where the block size increases regularly, the reward for mining will decline far more precipitously than it would otherwise, giving the fiat governments another chance to prevent Bitcoin from wresting control of capital from their hands.
I am, however, just some dog with a computer. I advise you to do your own research and form your own opinions. If you've not the time to do that – you could do worse than reading this summary and the links so included.
Rumor has it that Bitcoin P2P messages of 32 MB in size don't even transmit reliably between peers. If a message of 32 MB can't transmit between peers reliably, bringing the maximum block size anywhere near that line is a terrible idea.
This suggests an average transaction size of 251.969 bytes. Not unfair - my random sample of 3 transactions clocked in at ~253 bytes each.
An ASIC is an Application-Specific Integrated Circuit. That is to say, it cannot be repurposed for anything other than computing SHA-256 (the hash function at the heart of Bitcoin mining) at high speeds.
For one to have "a blockchain", one must: have a copy of every block; every block must be untruncated; every transaction must be confirmed to be valid. Anything short of that is a scamchain.
Meaning that he's written this assuming that the reader agrees with the claim he then makes: the network must be pushed to support far more than 7 transactions per second. This is poor writing in the first place, poor rhetoric in the second, and ultimately downright disingenuous.