mempool-size - Blockchain Charts

These graphs show that fees for inclusion in the 2nd block just shot up 10x, from 50 to 500 satoshis/kB, and the mempool size just shot up from under 5 MB to 30 MB. Would you feel safe sending a transaction into the network now? Can Bitcoin rally if the blocksize remains artificially limited by Blockstream/Core?

http://statoshi.info/dashboard/db/fee-estimates
To select a longer time period, zoom out on the graph by clicking on the word "6 hours ago" to the right of the words "Zoom Out" - which will reveal a drop-down menu.
https://tradeblock.com/bitcoin
To see the increase in the Mempool Size (from less than 5 MB to 30 MB), go to the graph in the lower right called "Recent Mempool", and use the two menus to select "7 Days" and "Size".
How can Bitcoin continue to rally, if the network is becoming backlogged due to unnecessary congestion?
submitted by ydtm to btc [link] [comments]

Collection of fee estimating tools (for saving on fees when sending a bitcoin transaction)

Here are some resources that can help you estimate fees when sending a bitcoin transaction, so you don't end up overpaying unnecessarily. Keep in mind that in order to take advantage of this, you need a proper bitcoin wallet which allows for custom fee setting. You can find a selection of such wallets here or here.
The order here is roughly from advanced to easy.
1) https://jochen-hoenicke.de/queue/#0,24h
Here you can see a visualization of how many unconfirmed transactions are currently on the network, as well as how many there were in the past. Each coloured layer represents a different fee amount. For example, the deep blue (lowest) layer represents the 1 sat/byte transactions, the slightly brighter layer above it the 2 sat/byte transactions, and so on.
The most interesting graph is the third one, which shows you the size of the current mempool in MB and the number of transactions at different fee levels, which would compete with your transaction if you were to send it right now. This should help you estimate how high you need to set the fee (in sat/byte) to have your transaction confirmed "soon". It should also show you that even the 1 sat/byte transactions get confirmed quite regularly, especially on weekends and overnight, and that the spikes in the mempool are always temporary. To see this, switch to higher timeframes in the upper right corner; for example, here is a 30-day view: https://jochen-hoenicke.de/queue/#0,30d. You can clearly see that the mempool is cyclical, so you can set a very low fee if you are not in a hurry.
2) https://mempool.space
This is also an overview of the current mempool status, although less visual than the previous one. It shows you some important stats, like the mempool size and basic stats of recent blocks (tx fees, size, etc.). Most importantly, it projects how high you need to set your fee (in sat/byte) if you want your transaction to be included in the next block, or within the next two/three/four blocks. You can see this projection in the upper left corner (the blocks coloured in brown).
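If you prefer to pull these numbers programmatically, here is a minimal sketch in Python. It assumes mempool.space still exposes its public fee-recommendation endpoint (/api/v1/fees/recommended) returning sat/vByte tiers; check their current API documentation before relying on it.

import json
import urllib.request

def recommended_fees(base_url="https://mempool.space"):
    # Assumed public endpoint returning tiers such as fastestFee, hourFee, etc.
    with urllib.request.urlopen(base_url + "/api/v1/fees/recommended") as resp:
        return json.load(resp)

if __name__ == "__main__":
    for tier, rate in recommended_fees().items():
        print(tier, rate, "sat/vByte")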
3) https://whatthefee.io
This is a simple estimation tool. It shows you the likelihood (in %) of a transaction with a particular fee rate (in sat/byte) being confirmed within a particular timeframe (measured in hours). It is very simple to use, but the disadvantage is that it only shows estimates for the next 24 hours. With this method you will probably still overpay if your transaction is less time-sensitive than that.
4) https://twitter.com/CoreFeeHelper
This is a very simple bot that tweets out fee projections every hour or so. It tells you how high you need to set the fee in order to be confirmed within 1 hour / 6 hours / 12 hours / 1 day / 3 days / 1 week. Very simple to use.
Hopefully one of these tools will help you save fees for your next bitcoin transaction. Or at least help you understand that even with a very low fee setting your transaction will be confirmed sooner or later. Furthermore, I hope it makes you understand how important it is to use a wallet that allows you to set your own fees.
submitted by TheGreatMuffin to Bitcoin [link] [comments]

Mempool breaks resistance trend, strong support at ~87500. Super bullish!

Mempool breaks resistance trend, strong support at ~87500. Super bullish! submitted by block_the_tx_stream to btc [link] [comments]

BTC mempool is full, low sat/byte tx are being dropped. Higher tx fees are replacing them, this can only mean bad things for core

BTC mempool is full, low sat/byte tx are being dropped. Higher tx fees are replacing them, this can only mean bad things for core submitted by velopic to btc [link] [comments]

BTC Fees amplified today by last night's difficulty adjustment. Current (peak of day) next-block fees are testing new highs.

Compounding Factors Causing the Fee Explosion
Over the past 2 weeks, a large amount of SHA256 hashpower has come online as the real-dollar value of mining rewards increased.
The increase over the past 2016 blocks was so great in fact that it caused an 11% jump in difficulty last night.
To add to that, the price retreated to the 2-week average, meaning some hashpower has left since the price adjustment. You can see how far behind schedule the current block times are here.
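As a rough back-of-envelope sketch (illustrative numbers only, not last night's exact figures), the expected block interval stretches in proportion to the difficulty increase and shrinks with the available hashrate:

TARGET_INTERVAL_MIN = 10.0  # difficulty retargets aim for one block every 10 minutes

def expected_interval(difficulty_change, hashrate_change):
    # Interval scales linearly with difficulty and inversely with hashrate.
    return TARGET_INTERVAL_MIN * (1 + difficulty_change) / (1 + hashrate_change)

print(expected_interval(0.11, 0.00))   # ~11.1 min: difficulty +11%, hashrate unchanged
print(expected_interval(0.11, -0.10))  # ~12.3 min: difficulty +11%, hashrate down 10% (assumed)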
That has compounded to set new highs in sat/byte fees and has simultaneously escalated the price per transaction drastically.
While the BTC backlog may be able to clear overnight when mining is running 10-20% above the expected block rate, it's pretty clear from history that every day has a peak usage that the network cannot handle. After the readjustment, it looks like only the lull of the weekend can currently clear the backlog, and only just.
I recommend checking Johoe's Mempool size in MB graph for a longer span. In the 3 month graph, you can really start to see each daily spike, weeks where the mempool only cleared on the weekend, and even a couple of weekends where the mempool didn't clear.
So What is Each Blockchain Currently Capable Of?
Current Segwit usage has been stagnant at around 40-45% for the past year now, but let's just say for argument's sake that segwit usage hits 100%. This represents a capacity increase of the BTC blockchain of only around 25%. That means that even if BTC hit perfect segwit usage, it could only handle around 500k transactions per day instead of 400k.
This bottleneck does not exist on BCH.
BCH can currently handle 16MB blocks with no issue, as proven by last year's stress-test, and it should now be able to handle full 32MB blocks given recent parallelization improvements. The throughput of even 16MB blocks would allow for somewhere around an 8M TX/day average.
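A back-of-envelope sketch of those daily throughput figures (the ~350-byte average transaction size is an assumption, and the 400k/500k/8M figures above imply somewhat different averages, so treat the outputs as order-of-magnitude only):

BLOCKS_PER_DAY = 144   # one block roughly every 10 minutes
AVG_TX_BYTES = 350     # assumed average transaction size

def tx_per_day(block_size_mb):
    return int(block_size_mb * 1_000_000 / AVG_TX_BYTES * BLOCKS_PER_DAY)

print(tx_per_day(1.0))    # ~410k/day: BTC with 1 MB blocks
print(tx_per_day(1.25))   # ~510k/day: BTC with a ~25% effective segwit gain
print(tx_per_day(16.0))   # ~6.6M/day: BCH filling 16 MB blocks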
Bitcoin Cash is absolutely equipped to deal with an order of magnitude more transactions than Bitcoin today while maintaining 1sat/byte fees.
Blockchain technology can do so much more than BTC gives it credit for.
submitted by CaptainPatent to btc [link] [comments]

So tell me, you really think the whole "fee-exploding" mempool hype was not an attack?

So tell me, you really think the whole "fee-exploding" mempool hype was not an attack? submitted by BitcoinReminder_com to Bitcoin [link] [comments]

A technical dive into CTOR

Over the last several days I've been looking in detail at numerous aspects of the now-infamous CTOR change that is scheduled for the November hard fork. I'd like to offer a concrete overview of what exactly CTOR is, what the code looks like, how well it works, what the algorithms are, and the outlook. If anyone finds the change mysterious or unclear, then hopefully this will help them out.
This document is placed into public domain.

What is TTOR? CTOR? AOR?

Currently in Bitcoin Cash, there are many possible ways to order the transactions in a block. There is only a partial ordering requirement in that transactions must be ordered causally -- if a transaction spends an output from another transaction in the same block, then the spending transaction must come after. This is known as the Topological Transaction Ordering Rule (TTOR) since it can be mathematically described as a topological ordering of the graph of transactions held inside the block.
The November 2018 hard fork will change to a Canonical Transaction Ordering Rule (CTOR). This CTOR will enforce that for a given set of transactions in a block, there is only one valid order (hence "canonical"). Any future blocks that deviate from this ordering rule will be deemed invalid. The specific canonical ordering that has been chosen for November is a dictionary ordering (lexicographic) based on the transaction ID. You can see an example of it in this testnet block (explorer here, provided this testnet is still alive). Note that the txids are all in dictionary order, except for the coinbase transaction which always comes first. The precise canonical ordering rule can be described as "coinbase first, then ascending lexicographic order based on txid".
(If you want to have your bitcoin node join this testnet, see the instructions here. Hopefully we can get a public faucet and ElectrumX server running soon, so light wallet users can play with the testnet too.)
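As a minimal sketch of the rule itself (not the Bitcoin ABC implementation; the hex txids below are made up, and the exact byte order used for the comparison is pinned down by the spec rather than by this illustration), verifying CTOR amounts to one lexicographic pass over the block:

def is_ctor_ordered(txids):
    # txids[0] is the coinbase, which is exempt from the ordering rule.
    rest = txids[1:]
    return all(a < b for a, b in zip(rest, rest[1:]))

block = ["c0ffee...",   # coinbase, always first
         "0a1b2c...",
         "0a9f00...",
         "ff0012..."]
print(is_ctor_ordered(block))  # True: non-coinbase txids are in dictionary order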
Another ordering rule that has been suggested is removing restrictions on ordering (except that the coinbase must come first) -- this is known as the Any Ordering Rule (AOR). There are no serious proposals to switch to AOR but it will be important in the discussions below.

Two changes: removing the old order (TTOR->AOR), and installing a new order (AOR->CTOR)

The proposed November upgrade combines two changes in one step:
  1. Removing the old causal rule: now, a spending transaction can come before the transaction whose output it spends within the same block.
  2. Adding a new rule that fixes the ordering of all transactions in the block.
In this document I am going to distinguish these two steps (TTOR->AOR, AOR->CTOR) as I believe it helps to clarify the way different components are affected by the change.

Code changes in Bitcoin ABC

In Bitcoin ABC, several thousand lines of code have been changed from version 0.17.1 to version 0.18.1 (the current version at time of writing). The differences can be viewed here, on github. The vast majority of these changes appear to be various refactorings, code style changes, and so on. The relevant bits of code that deal with the November hard fork activation can be found by searching for "MagneticAnomaly"; the variable magneticanomalyactivationtime sets the time at which the new rules will activate.
The main changes relating to transaction ordering are found in the file src/validation.cpp:
There are other changes as well:

Algorithms

Serial block processing (one thread)

One of the most important steps in validating blocks is updating the unspent transaction outputs (UTXO) set. It is during this process that double spends are detected and invalidated.
The standard way to process a block in bitcoin is to loop through transactions one-by-one, removing spent outputs and then adding new outputs. This straightforward approach requires exact topological order and fails otherwise (therefore it automatically verifies TTOR). In pseudocode:
for tx in transactions:
    remove_utxos(tx.inputs)
    add_utxos(tx.outputs)
Note that modern implementations do not apply these changes immediately, rather, the adds/removes are saved into a commit. After validation is completed, the commit is applied to the UTXO database in batch.
By breaking this into two loops, it becomes possible to update the UTXO set in a way that doesn't care about ordering. This is known as the outputs-then-inputs (OTI) algorithm.
for tx in transactions:
    add_utxos(tx.outputs)
for tx in transactions:
    remove_utxos(tx.inputs)
Benchmarks by Jonathan Toomim with Bitcoin ABC, and by myself with ElectrumX, show that the performance penalty of OTI's two loops (as opposed to the one loop version) is negligible.

Concurrent block processing

The UTXO updates actually form a significant fraction of the time needed for block processing. It would be helpful if they could be parallelized.
There are some concurrent algorithms for block validation that require quasi-topological order to function correctly. For example, multiple workers could process the standard loop shown above, starting at the beginning. A worker temporarily pauses if the utxo does not exist yet, since it's possible that another worker will soon create that utxo.
There are issues with such order-sensitive concurrent block processing algorithms:
In contrast, the OTI algorithm's loops are fully parallelizable: the worker threads can operate in an independent manner and touch transactions in any order. Until recently, OTI was thought to be unable to verify TTOR, so one reason to remove TTOR was that it would allow changing to parallel OTI. It turns out however that this is not true: Jonathan Toomim has shown that TTOR enforcement is easily added by recording new UTXOs' indices within-block, and then comparing indices during the remove phase.
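Here is a toy sketch of that idea as I read it (not Toomim's actual patch; the transaction objects with txid/inputs/outputs attributes are hypothetical, and the coinbase is assumed to be handled separately):

def apply_block_oti(transactions, utxos, enforce_ttor=True):
    # Pass 1: add every output, remembering each transaction's position in the block.
    position = {}
    for i, tx in enumerate(transactions):
        position[tx.txid] = i
        for n, output in enumerate(tx.outputs):
            utxos[(tx.txid, n)] = output

    # Pass 2: remove every spent output. Both passes touch transactions
    # independently, so each pass can be split across worker threads.
    for i, tx in enumerate(transactions):
        for (parent_txid, n) in tx.inputs:
            if (parent_txid, n) not in utxos:
                raise ValueError("missing or already-spent input")
            if enforce_ttor and parent_txid in position and position[parent_txid] >= i:
                # A within-block parent must come earlier under TTOR.
                raise ValueError("TTOR violation")
            del utxos[(parent_txid, n)]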
In any case, it appears to me that any concurrent validation algorithm would need such additional code to verify that TTOR is being exactly respected; thus for concurrent validation TTOR is a hindrance at best.

Advanced parallel techniques

With Bitcoin Cash blocks scaling to large sizes, it may one day be necessary to scale onto advanced server architectures involving sharding. A lot of discussion has been made over this possibility, but really it is too early to start optimizing for sharding. I would note that at this scale, TTOR is not going to be helpful, and CTOR may or may not lead to performance optimizations.

Block propagation (graphene)

A major bottleneck that exists in Bitcoin Cash today is block propagation. During the stress test, it was noticed that the largest blocks (~20 MB) could take minutes to propagate across the network. This is a serious concern since propagation delays mean increased orphan rates, which in turn complicate the economics and incentives of mining.
'Graphene' is a set reconciliation technique using bloom filters and invertible bloom lookup tables. It drastically reduces the amount of bandwidth required to communicate a block. Unfortunately, the core graphene mechanism does not provide ordering information, and so if many orderings are possible then ordering information needs to be appended. For large blocks, this ordering information makes up the majority of the graphene message.
To reduce the size of ordering information while keeping TTOR, miners could optionally decide to order their transactions in a canonical ordering (Gavin's order, for example) and the graphene protocol could be hard coded so that this kind of special order is transmitted in one byte. This would add a significant technical burden on mining software (to create blocks in such a specific unusual order) as well as on graphene (which must detect this order, and be able to reconstruct it). It is not clear to me whether it would be possible to efficiently parallelize sorting algorithms that reconstruct these orderings.
The adoption of CTOR gives an easy solution to all this: there is only one ordering, so no extra ordering information needs to be appended. The ordering is recovered with a comparison sort, which parallelizes better than a topological sort. This should simplify the graphene codebase and it removes the need to start considering supporting various optional ordering encodings.
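Conceptually, once graphene's set reconciliation has recovered which txids are in the block, CTOR lets the receiver rebuild the exact block order with a plain comparison sort (a sketch of the idea only, with made-up txids):

def reconstruct_order(coinbase_txid, other_txids):
    # Under CTOR the order is implied by the txids themselves, so nothing extra
    # needs to be transmitted; under TTOR/AOR a permutation would be appended.
    return [coinbase_txid] + sorted(other_txids)

print(reconstruct_order("c0ffee...", {"ff0012...", "0a9f00...", "0a1b2c..."}))
# ['c0ffee...', '0a1b2c...', '0a9f00...', 'ff0012...']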

Reversibility and technical debt

Can the change to CTOR be undone at a later time? Yes and no.
For block validators / block explorers that look over historical blocks, the removal of TTOR will permanently rule out usage of the standard serial processing algorithm. This is not really a problem (aside from the one-time annoyance), since OTI appears to be just as efficient in serial, and it parallelizes well.
For anything that deals with new blocks (like graphene, network protocol, block builders for mining, new block validation), it is not a problem to change the ordering at a later date (to AOR / TTOR or back to CTOR again, or something else). These changes would add no long term technical debt, since they only involve new blocks. For past-block validation it can be retroactively declared that old blocks (older than a few months) have no ordering requirement.

Summary and outlook

Taking a broader view, graphene is not the magic bullet for network propagation. Even with the CTOR-improved graphene, we might not see vastly better performance right away. There is also work needed in the network layer to simply move the messages faster between nodes. In the last stress test, we also saw limitations on mempool performance (tx acceptance and relaying). I hope both of these fronts see optimizations before the next stress test, so that a fresh set of bottlenecks can be revealed.
submitted by markblundeberg to btc [link] [comments]

If you don't really understand the block size issue, I know it's fun to troll but PLEASE actually take a minute and understand it beforehand. (Wall of text explaining block size)

PLEASE don't be like this guy, posting graphs demonstrating you don't understand what's actually happening with the block size issue. If you want to understand the block size issue, please listen. I sincerely want to help you and will try and explain things as best I can.
Step one is just to look at this graph. Just focus on the orange line and the blue lines. What you should see is that the "block" size for BTC (orange line) has been pegged at 1MB for a long long time. This means that every processed BTC block is at 100% capacity.
What actually is a block? It's just a chunk of computer memory. However, ALL Bitcoin transactions, every time some person sends another person bitcoin, HAVE to fit into the current block (or at least eventually fit into one...), and it's the job of the 'miners' to create new blocks of whatever that size is. This block creation takes on average 10 minutes, sometimes way more or less, but don't stress about that part. For Bitcoin BTC the size is 1MB, which you saw from the graph. When a miner finds the right data to create a 'valid block', they record the currently pending transactions into that block until it fills up. Once that happens, those transactions are considered valid.
Got it so far? OK. Now, because there are so many transactions currently pending, people have to pay MUCH higher fees to ensure their transaction gets into that 1MB block the miner just found, because of course if you pay more, the miner gets that fee for adding your transaction to the block.
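Here's a toy sketch of that dynamic (made-up numbers, and real mining software selects packages of transactions rather than this simple greedy pass): the miner fills the 1MB block with the highest-feerate transactions first, and everything below the cutoff keeps waiting in the mempool.

BLOCK_LIMIT_BYTES = 1_000_000

def fill_block(mempool):
    # mempool entries: (txid, size_in_bytes, feerate_in_sat_per_byte)
    chosen, used = [], 0
    for txid, size, feerate in sorted(mempool, key=lambda t: t[2], reverse=True):
        if used + size <= BLOCK_LIMIT_BYTES:
            chosen.append(txid)
            used += size
    return chosen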
But, you tell yourself, "Aha that means Bitcoin is really popular!" Yes, it is, really really popular in fact! BUT that means it's now so popular that it can't be used the way it was designed to be! When you hear about "the mempool" what you're hearing about is all those transactions which aren't getting into blocks, and sitting around waiting. Why are they waiting? They didn't pay the current (higher and higher) fee needed to get into that little 1MB block. Some people want to send $25 to a family member. Would you pay the $10-20 now necessary for that? Probably not.
Now, we know, the lightning network MAYBE will fix this issue someday. Unfortunately, some have been talking about this for literally years now. But you know what's proving right now that it works? Raising the block size, which is what Bitcoin Cash (BCH) does. Not to some obscene level, just 8MB. Like, the amount of RAM a computer had in 1995. And it was done because it was NECESSARY, and unfortunately, BTC is proving that with every full block produced.
Now look at the blue line in that graph, which shows BCH blocks. It is sometimes above this magic 1MB level, sometimes almost 0. This is how a healthy coin is supposed to look: sometimes spiking up, but never pegged at its limit. Looking closer, does that mean BCH could have gotten away with a 2 or 3MB block size? Probably, for now. But 8MB was a safe bet, meaning no 'forks' or changes to the core code will be needed for the foreseeable future. If a big event happens (some country goes through major inflation, or BTC just tanks temporarily), the BCH blocks will have room to fit YOUR (and everyone else's) transactions without needing to charge an arm and a leg.
Another (more modern) analogy if you still want one: Imagine if you have a phone with 1GB of RAM. After your OS and security software, etc, you have enough memory left to run ONE application, like your banking app, your email client, your web browser. Switching between them is PAINFUL and maybe sometimes it crashes completely. Now imagine you could get a (somehow cheaper) phone with 8 times the memory and the same apps can now run quickly and at the same time. This is because there is no need for virtual (on-disk) memory to kick in and slowly switch in the new app you want to use (this is called paging but is beyond the scope of this example). Why would you keep using that 1GB phone?
This is not a perfect analogy, but if you're trying to understand why people like BCH because of its on-chain scaling (putting all needed data into the blocks), that's the best analogy I could come up with. There are so many other issues to explain like segwit and so forth but this is already long enough.
In closing, most of us do NOT hate BTC, we hate how BROKEN it is when the solution to fix it exists right now! If you've read this far, thank you for listening and I hope we can begin to have a productive dialog in the comments.
submitted by astyfoo to btc [link] [comments]

Bitcoin's market *price* is trying to rally, but it is currently constrained by Core/Blockstream's artificial *blocksize* limit. Chinese miners can only win big by following the market - not by following Core/Blockstream. The market will always win - either with or without the Chinese miners.

TL;DR:
Chinese miners should think very, very carefully:
The market will always win - with or without you.
The choice is yours.
UPDATE:
The present post also inspired nullc Greg Maxwell (CTO of Blockstream) to later send me two private messages.
I posted my response to him, here:
https://np.reddit.com/btc/comments/4ir6xh/greg_maxwell_unullc_cto_of_blockstream_has_sent/
Details
If Chinese miners continue using artificially constrained code controlled by Core/Blockstream, then Bitcoin price / adoption / volume will also be artificially constrained, and billions (eventually trillions) of dollars will naturally flow into some other coin which is not artificially constrained.
The market always wins.
The market will inevitably determine the blocksize and the price.
Core/Blockstream is temporarily succeeding in suppressing the blocksize (and the price), and Chinese miners are temporarily cooperating - for short-term, relatively small profits.
But eventually, inevitably, billions (and later trillions) of dollars will naturally flow into the unconstrained, free-market coin.
That winning, free-market coin can be Bitcoin - but only if Chinese miners remove the artificial 1 MB limit and install Bitcoin Classic and/or Bitcoin Unlimited.
Previous posts:
There is not much new to say here - we've been making the same points for months.
Below is a summary of the main arguments and earlier posts:
Previous posts providing more details on these economic arguments are provided below:
This graph shows Bitcoin price and volume (ie, blocksize of transactions on the blockchain) rising hand-in-hand in 2011-2014. In 2015, Core/Blockstream tried to artificially freeze the blocksize - and artificially froze the price. Bitcoin Classic will allow volume - and price - to freely rise again.
https://np.reddit.com/btc/comments/44xrw4/this_graph_shows_bitcoin_price_and_volume_ie/
Bitcoin has its own E = mc2 law: Market capitalization is proportional to the square of the number of transactions. But, since the number of transactions is proportional to the (actual) blocksize, then Blockstream's artificial blocksize limit is creating an artificial market capitalization limit!
https://np.reddit.com/btc/comments/4dfb3bitcoin_has_its_own_e_mc2_law_market/
(By the way, before some sophomoric idiot comes in here and says "correlation isn't causation": Please note that nobody used the word "causation" here. But there does appear to be a rough correlation between Bitcoin volume and price, as would be expected.)
The Nine Miners of China: "Core is a red herring. Miners have alternative code they can run today that will solve the problem. Choosing not to run it is their fault, and could leave them with warehouses full of expensive heating units and income paid in worthless coins." – tsontar
https://np.reddit.com/btc/comments/3xhejm/the_nine_miners_of_china_core_is_a_red_herring/
Just click on these historical blocksize graphs - all trending dangerously close to the 1 MB (1000KB) artificial limit. And then ask yourself: Would you hire a CTO / team whose Capacity Planning Roadmap from December 2015 officially stated: "The current capacity situation is no emergency" ?
https://np.reddit.com/btc/comments/3ynswc/just_click_on_these_historical_blocksize_graphs/
Blockstream is now controlled by the Bilderberg Group - seriously! AXA Strategic Ventures, co-lead investor for Blockstream's $55 million financing round, is the investment arm of French insurance giant AXA Group - whose CEO Henri de Castries has been chairman of the Bilderberg Group since 2012.
https://np.reddit.com/btc/comments/47zfzt/blockstream_is_now_controlled_by_the_bilderberg/
Austin Hill [head of Blockstream] in meltdown mode, desperately sending out conflicting tweets: "Without Blockstream & devs, who will code?" -vs- "More than 80% contributors of bitcoin core are volunteers & not affiliated with us."
https://np.reddit.com/btc/comments/48din1/austin_hill_in_meltdown_mode_desperately_sending/
Be patient about Classic. It's already a "success" - in the sense that it has been tested, released, and deployed, with 1/6 nodes already accepting 2MB+ blocks. Now it can quietly wait in the wings, ready to be called into action on a moment's notice. And it probably will be - in 2016 (or 2017).
https://np.reddit.com/btc/comments/44y8ut/be_patient_about_classic_its_already_a_success_in/
Classic will definitely hard-fork to 2MB, as needed, at any time before January 2018, 28 days after 75% of the hashpower deploys it. Plus it's already released. Core will maybe hard-fork to 2MB in July 2017, if code gets released & deployed. Which one is safer / more responsive / more guaranteed?
https://np.reddit.com/btc/comments/46ywkk/classic_will_definitely_hardfork_to_2mb_as_needed/
"Bitcoin Unlimited ... makes it more convenient for miners and nodes to adjust the blocksize cap settings through a GUI menu, so users don't have to mod the Core code themselves (like some do now). There would be no reliance on Core (or XT) to determine 'from on high' what the options are." - ZB
https://np.reddit.com/btc/comments/3zki3h/bitcoin_unlimited_makes_it_more_convenient_fo
BitPay's Adaptive Block Size Limit is my favorite proposal. It's easy to explain, makes it easy for the miners to see that they have ultimate control over the size (as they always have), and takes control away from the developers. – Gavin Andresen
https://np.reddit.com/btc/comments/40kmny/bitpays_adaptive_block_size_limit_is_my_favorite/
More info on Adaptive Blocksize:
https://np.reddit.com/bitcoin+btc/search?q=adaptive&restrict_sr=on&sort=relevance&t=all
Core/Blockstream is not Bitcoin. In many ways, Core/Blockstream is actually similar to MtGox. Trusted & centralized... until they were totally exposed as incompetent & corrupt - and Bitcoin routed around the damage which they had caused.
https://np.reddit.com/btc/comments/47735j/coreblockstream_is_not_bitcoin_in_many_ways/
Satoshi Nakamoto, October 04, 2010, 07:48:40 PM "It can be phased in, like: if (blocknumber > 115000) maxblocksize = largerlimit / It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete."
https://np.reddit.com/btc/comments/3wo9pb/satoshi_nakamoto_october_04_2010_074840_pm_it_can/
Theymos: "Chain-forks [='hardforks'] are not inherently bad. If the network disagrees about a policy, a split is good. The better policy will win" ... "I disagree with the idea that changing the max block size is a violation of the 'Bitcoin currency guarantees'. Satoshi said it could be increased."
https://np.reddit.com/btc/comments/45zh9d/theymos_chainforks_hardforks_are_not_inherently/
"They [Core/Blockstream] fear a hard fork will remove them from their dominant position." ... "Hard forks are 'dangerous' because they put the market in charge, and the market might vote against '[the] experts' [at Core/Blockstream]" - ForkiusMaximus
https://np.reddit.com/btc/comments/43h4cq/they_coreblockstream_fear_a_hard_fork_will_remove/
Mike Hearn implemented a test version of thin blocks to make Bitcoin scale better. It appears that about three weeks later, Blockstream employees needlessly commit a change that breaks this feature
https://np.reddit.com/btc/comments/43iup7/mike_hearn_implemented_a_test_version_of_thin/
This ELI5 video (22 min.) shows XTreme Thinblocks saves 90% block propagation bandwidth, maintains decentralization (unlike the Fast Relay Network), avoids dropping transactions from the mempool, and can work with Weak Blocks. Classic, BU and XT nodes will support XTreme Thinblocks - Core will not.
https://np.reddit.com/btc/comments/4cvwru/this_eli5_video_22_min_shows_xtreme_thinblocks/
More info on Xtreme Thinblocks:
https://np.reddit.com/bitcoin+btc/search?q=xtreme+thinblocks&restrict_sr=on&sort=relevance&t=all
4 weird facts about Adam Back: (1) He never contributed any code to Bitcoin. (2) His Twitter profile contains 2 lies. (3) He wasn't an early adopter, because he never thought Bitcoin would work. (4) He can't figure out how to make Lightning Network decentralized. So... why do people listen to him??
https://np.reddit.com/btc/comments/47fr3p/4_weird_facts_about_adam_back_1_he_neve
I think that it will be easier to increase the volume of transactions 10x than it will be to increase the cost per transaction 10x. - jtoomim (miner, coder, founder of Classic)
https://np.reddit.com/btc/comments/48gcyj/i_think_that_it_will_be_easier_to_increase_the/
Spin-offs: bootstrap an altcoin with a btc-blockchain-based initial distribution
https://bitcointalk.org/index.php?topic=563972.480
More info on "spinoffs":
https://duckduckgo.com/?q=site%3Abitco.in%2Fforum+spinoff
submitted by ydtm to btc [link] [comments]

Bitcoin, huh? WTF is going on? Should we scale you on-chain or off-chain? Will you stay decentralized, distributed, immutable?

0. Shit, this is long, TLWR please! Too long, won't read.
EDIT: TLDR -> TLWR for clarity.
1. Bitcoin, huh? Brief introduction.
There are 3 sections to this overview. The first section is a brief introduction to bitcoin. The second section looks at recent developments in the bitcoin world, through the analogy of email attachments, and the third section discusses what could be next, through the perspective of resilience and network security.
This is just a continuation of a long, long, possibly never-ending debate that started with the release of the bitcoin whitepaper in 2008 (see https://bitcoin.org/bitcoin.pdf). The recent mess during the past few years boils down to the controversy with the block size limit and how to appropriately scale bitcoin, the keyword appropriately. Scaling bitcoin is a controversial debate with valid arguments from all sides (see https://en.bitcoin.it/wiki/Block_size_limit_controversy).
I have researched, studied, and written this overview as objectively and as impartially as possible. By all means, this is still an opinion and everyone is advised to draw their own conclusions. My efforts are to make at least a few readers aware that ultimately there is only one team, and that team is the team bitcoin. Yes, currently though, there are factions within the team bitcoin. I hope that we can get beyond partisan fights and work together for the best bitcoin. I support all scaling proposals as long as they are the best for the given moment in time. Personally, I hate propaganda and love free speech as long as it is not derogatory and as long as it allows for constructive discussions.
The goal of this overview is to explain to a novice how bitcoin network works, what has been keeping many bitcoin enthusiasts concerned, and if we can keep the bitcoin network with three main properties described as decentralized, distributed, immutable. Immutable means censorship resistant. For the distinction between decentralized and distributed, refer to Figure 1: Centralized, decentralized and distributed network models by Paul Baran (1964), which is a RAND Institute study to create a robust and nonlinear military communication network (see https://www.rand.org/content/dam/rand/pubs/research_memoranda/2006/RM3420.pdf). Note that for the overall network resilience and security, distributed is more desirable than decentralized, and the goal is to get as far away from central models as possible. Of course, nothing is strictly decentralized or strictly distributed and all network elements are at different levels of this spectrum.
For those unaware how bitcoin works, I recommend the Bitcoin Wikipedia (see https://en.bitcoin.it/wiki/Main_Page). In short, the bitcoin network includes users which make bitcoin transactions and send them to the network memory pool called mempool, nodes which store the public and pseudonymous ledger called blockchain and which help with receiving pending transactions and updating processed transactions, thus securing the overall network, and miners which also secure the bitcoin network by mining. Mining is the process of confirming pending bitcoin transactions, clearing them from the mempool, and adding them to blocks which build up the consecutive chain of blocks on the blockchain. The blockchain is therefore a decentralized and distributed ledger built on top of bitcoin transactions, therefore impossible to exist without bitcoin. If someone claims to be working on their own blockchain without bitcoin, by the definition of the bitcoin network however, they are not talking about the actual blockchain. Instead, they intend to own a different kind of a private database made to look like the public and pseudonymous blockchain ledger.
There are roughly a couple of dozen mining pools, each possibly with hundreds or thousands of miners participating in them, compared to several thousand nodes (see https://blockchain.info/pools and https://coin.dance/nodes). Therefore, the bitcoin network has at worst decentralized miners and at best distributed nodes. The miner and node design makes the blockchain resilient and resistant to retroactive changes, making it censorship resistant, thus immutable. The bitcoin blockchain avoids the previous need for a third party to trust. This is a very elegant solution to peer-to-peer financial exchange via a network that is all: decentralized, distributed, immutable. Extra features (escrow, reversibility via time-locks, and other features desirable in specific instances) can be integrated within the network or added on top of this network; however, they have not been implemented yet.
Miners who participate receive a mining reward consisting of newly mined bitcoins at a predetermined deflationary rate, plus transaction fees from the actual bitcoin transactions being processed. It is estimated that in 2022, miners will have mined more than 90% of all 21 million bitcoins ever to be mined (see https://en.bitcoin.it/wiki/Controlled_supply). As the mining reward from newly mined blocks diminishes to absolute zero in 2140, the network eventually needs transaction fees to become the main component of the reward. This can happen either via high-volume-low-cost transaction fees or low-volume-high-cost transaction fees. Obviously, there is the need to address the question of fees when dealing with the dilemma of how to scale bitcoin. Which type of fees would you prefer and under which circumstances?
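The supply schedule referenced above can be reproduced from the well-known parameters (50 BTC initial subsidy, halved every 210,000 blocks); the sketch below ignores rounding to whole satoshis, and the block height used for 2022 is approximate:

def cumulative_supply(height):
    subsidy, total, remaining = 50.0, 0.0, height
    while remaining > 0 and subsidy > 0:
        blocks = min(remaining, 210_000)
        total += blocks * subsidy
        remaining -= blocks
        subsidy /= 2            # the halving
    return total

print(cumulative_supply(740_000) / 21_000_000)  # ~0.91: over 90% mined by roughly 2022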
2. WTF is going on? Recent developments.
There are multiple sides to the scaling debate but to simplify it, first consider the 2 main poles. In particular, to scale bitcoin on blockchain or to scale it off it, that is the question!
The first side likes the idea of bitcoin as it has been until now. It prefers on-chain scaling, as envisioned by the bitcoin creator or group of creators who chose the pseudonym Satoshi Nakamoto. It is now called Bitcoin Cash and somewhat religiously follows Satoshi's vision from the 2008 whitepaper and their later public forum discussions (see https://bitcointalk.org/index.php?topic=1347.msg15366#msg15366). The creators' vision is good to follow, but it should not be followed blindly and dogmatically when better advancements are possible, the keyword being when. To alleviate the concerning backlog of transactions and rising fees, Bitcoin Cash proponents implemented a simple one-line code update which increased the block size limit for blockchain blocks from 1MB to a new, larger 8MB limit. This was done through a fork on August 1, 2017, which created Bitcoin Cash and which kept the bitcoin transaction history until then. Bitcoin Cash has observed a significant increase in support, from 3% of all bitcoin miners at first to over 44% of all bitcoin miners after 3 weeks on August 22, 2017 (see http://fork.lol/pow/hashrate and http://fork.lol/pow/hashrateabs).
An appropriate scaling analogy is to recall email attachments early on. They too were limited to a few MB at first, then 10MB, 20MB, up until 25MB on Gmail. But even then, Gmail eventually started using Google Drive internally. Note that Google Drive is a third party to Gmail, although yes, it is managed by the same entity.
The second side argues that bitcoin cannot work with such a scaling approach of pre-meditated MB increases. Arguments against block size increases include miner and node centralization, and bandwidth limitations. These are discussed in more detail in the third section of this overview. As an example of an alternative scaling approach, proponents of off-chain scaling want to jump to the internally integrated third party right away, without any MB increase and, sadly, without any discussion. Some of these proponents called one particular implementation method SegWit, which stands for Segregated Witness, and they argue that SegWit is the only way that we can ever scale up and add the extra features to the bitcoin network. This is not necessarily true, because other scaling solutions are feasible, such as the already functioning Bitcoin Cash, and SegWit's proposed solution will not use an internally integrated third party, as shown next. Note that although not as elegant as SegWit is today, there are other possibilities to integrate some extra features without SegWit (see /Bitcoin/comments/5dt8tz/confused_is_segwit_needed_for_lightning_network).
Due to the scaling controversy, the current backlog of transactions, and already high fees, a third side hastily proposed a compromise of a 2MB increase in addition to the proposed SegWit implementation. They called it SegWit2x, which stands for Segregated Witness with a 2MB block size limit increase. But the on-chain scaling and Bitcoin Cash proponents did not accept it, due to SegWit's design redundancy and hub centralization, which are discussed next and revisited in the third section of this overview. That is why, after a few years of deadlock, the first side broke free and created the Bitcoin Cash fork.
The second side stuck with bitcoin as it was. In a way, they inherited the bitcoin network without any major change visible to the public eye. This is crucial because major changes are about to happen, and the original bitcoin vision, as we have known it, is truly reflected only in what some media refer to as a forked clone, Bitcoin Cash. Note that to avoid confusion, this second side is referred to as Bitcoin Core by some or Legacy Bitcoin by others, although mainstream media still refers to it simply as Bitcoin. The core of Bitcoin Core is quite hardcore though. They too rejected the proposed compromise for SegWit2x, and there are clear indications that they will push to keep SegWit only, forcing the third side with SegWit2x proponents to create another fork in November 2017 or to join Bitcoin Cash. Note that to a certain degree, the already implemented and working Bitcoin Cash is technically superior to SegWit2x, which is yet to be deployed (see /Bitcoin/comments/6v0gll/why_segwit2x_b2x_is_technically_inferior_to).
Interestingly enough, those who agreed to SegWit2x have been in overwhelming majority, nearly 87% of all bitcoin miners on July 31, 2017 prior to the fork, and a little over 90% of remaining Bitcoin Core miners to date after the fork (see https://coin.dance/blocks). Despite such staggering support, another Bitcoin Core fork is anticipated later in November (see https://cointelegraph.com/news/bitcoin-is-splitting-once-again-are-you-ready) and the "Outcome #2: Segwit2x reneges on 2x or does not prioritize on-chain scaling" seems to be on track from the perspective of Bitcoin Core SegWit, publicly seen as the original Bitcoin (see https://blog.bridge21.io/before-and-after-the-great-bitcoin-fork-17d2aad5d512). The sad part is that although in their overwhelming majority, the miners who support SegWit2x would be the ones creating another Bitcoin Core SegWit2x fork or parting ways from the original Bitcoin.
In a way, this is an ironic example of how bitcoin's built-in resistance to change causes the majority to part ways when a small minority holds the status quo and holds off broadly consented progress. Ultimately, this will give the minority Bitcoin Core SegWit proponents the original Bitcoin branding, perhaps to lure in large institutional investors and monetize on bitcoin's success as we have seen it during the past 9 years since its inception. Recall that the bitcoin of today is already a decentralized, distributed, immutable network by its definition. The bitcoin network was designed to be an alternative to the centralized and mutable institutions so prevalent in modern capitalist societies.
Bitcoin Core SegWit group wants to change the existing bitcoin network to a network with dominant third parties which, unlike Google Drive to Gmail, are not internal. In particular, they intend to do so via the lightning network, which is a second layer solution (2L). This particular 2L as currently designed relies on an artificial block size limit cap which creates a bottleneck in order to provide high incentives for miners to participate. It monetizes on backlog of transaction and high fees, which are allocated to miners, not any group in particular. Cheaper and more instantaneous transactions are shifted to the lightning network which is operated by hubs also earning revenue. Note that some of these hubs may choose to monitor transactions and can possibly censor who is allowed to participate in this no longer strictly peer-to-peer network.
We lose the immutability and instead we have a peer-to-hub-to-peer network that is mutable and at best decentralized, and certainly not distributed (see https://medium.com/@jonaldfyookball/mathematical-proof-that-the-lightning-network-cannot-be-a-decentralized-bitcoin-scaling-solution-1b8147650800). For regular day-to-day and recurring transactions, it is not a considerable risk or inconvenience. And one could choose to use the main chain any time to bypass the lightning network and truly transact peer-to-peer. But since the main chain has an entry barrier in the form of artificially instilled high transaction fees, common people are not able to use bitcoin as we have known it until now. Peer-to-peer bitcoin becomes institution-to-institution bitcoin with peer-to-hub-to-peer 2L.
To reiterate and stress, note the following lightning network design flaw again. Yes, activating SegWit and allowing 2L such as lightning allows for lower transaction fees to coexist side by side with more costly on-chain transactions. For those using this particularly prescribed 2L, the fees remain low. But since these 2L are managed by hubs, we introduce another element to trust, which is contrary to what the bitcoin network was designed to do at the first place. Over time, by the nature of the lightning network in its current design, these third party hubs grow to be centralized, just like Visa, Mastercard, Amex, Discover, etc. There is nothing wrong with that in general because it works just fine. But recall that bitcoin set out to create a different kind of a network. Instead of decentralized, distributed, immutable network with miners and nodes, with the lightning network we end up with at best decentralized but mutable network with hubs.
Note that Bitcoin Core SegWit has a US-based organization backing it with millions of dollars (see https://en.wikipedia.org/wiki/Blockstream and https://steemit.com/bitcoin/@adambalm/the-truth-about-who-is-behind-blockstream-and-segwit-as-the-saying-goes-follow-the-money). Their proponents are quite political and some even imply $1000 fees on the main bitcoin blockchain (see https://cointelegraph.com/news/ari-paul-tuur-demeester-look-forward-to-up-to-1k-bitcoin-fees). Contrary to them, Bitcoin Cash proponents intend to keep small fees on a scale of a few cents, which in large volume in larger blockchain blocks provide sufficient incentive for miners to participate.
On the one hand, sticking to the original vision of peer-to-peer network scaled on-chain has merit and holds potential for future value. On the other hand, 2L have potential to carry leaps forward from current financial infrastructure. As mentioned earlier, 2L will allow for extra features to be integrated off-chain (e.g. escrow, reversibility via time-locks), including entirely new features such as smart contracts, decentralized applications, some of which have been pioneered and tested on another cryptocurrency network called Ethereum. But such features could be one day implemented directly on the main bitcoin blockchain without the lightning network as currently designed, or perhaps with a truly integrated 2L proposed in the third section of this overview.
What makes the whole discussion even more confusing is that there are some proposals for specific 2L that would in fact increase privacy and make bitcoin transactions more anonymous than the merely pseudonymous transactions on the current bitcoin blockchain. Keep in mind that 2L are not necessarily undesirable. If they add features and keep the main network characteristics (decentralized, distributed, immutable), they should be embraced with open arms. But the lightning network as currently designed gives up immutability, and hub centralization moves the network characteristic towards a decentralized rather than a distributed network.
In a sense, back to the initial email attachment analogy, even Gmail stopped with attachment limit increases and started hosting large files on Google Drive internally, with an embedded link in a Gmail email to download anything larger than 25MB from Google Drive. Anticipating the same scaling decisions, the question then becomes not if but when and how such 2L should be implemented, keeping the overall network security and network characteristics in mind. If you have not gotten it yet, repeat, repeat, repeat: decentralized, distributed, immutable. Is it the right time now and is SegWit (one way, my way or highway) truly the best solution?
Those siding away from Bitcoin Core SegWit also dislike that corporate entities behind Blockstream, the one publicly known corporate entity directly supporting SegWit, have allegedly applied for SegWit patents which may further restrict who may and who may not participate in the creation of future hubs, or how these hubs are controlled (see the alleged patent revelations, https://falkvinge.net/2017/05/01/blockstream-patents-segwit-makes-pieces-fall-place, the subsequent Twitter rebuttal by Blockstream's CEO, http://bitcoinist.com/adam-back-no-patents-segwit, and the subsequent legal threats to SegWit2x proponents /btc/comments/6vadfi/blockstream_threatening_legal_action_against). Regardless of whether the patent claims are precise or not, the fact remains that there is a corporate entity dictating and vetoing bitcoin developments. Objectively speaking, having Bitcoin Core SegWit developers paid by Blockstream is a corporate takeover of the bitcoin network as we have known it.
And on the topic of patents and permissionless technological innovations, what makes all of this even more complicated is that a mining improvement technology called ASICboost is allowed on Bitcoin Cash. The main entities who forked from Bitcoin Core to form Bitcoin Cash had taken advantage of patents on the ASICboost technology on the original bitcoin network prior to the fork (see https://bitcoinmagazine.com/articles/breaking-down-bitcoins-asicboost-scandal). This boost saved an estimated 20% of electricity for miners on 1MB blocks and created an unfair economic advantage for this one particular party. SegWit is one way that this boost is being eliminated, through the code. Larger blocks are another way to reduce the boost advantage, via a decreased rate of the collisions which made this boost possible in the first place (see https://bitcoinmagazine.com/articles/breaking-down-bitcoins-asicboost-scandal-solutions and https://bitslog.wordpress.com/2017/04/10/the-relation-between-segwit-and-asicboost-covert-and-overt). Therefore, the initial Bitcoin Cash proponents argue that eliminating ASICboost through the code is no longer needed or necessary.
Of course, saving any amount of electricity between 0% and 20% is good for everyone on our planet, but in reality any energy saved in a mining operation is used by the same mining operation to increase their mining capacity. In reality, there are no savings, there is just capacity redistribution. The question then becomes whether it is okay that only one party currently holds this advantage, which they covertly hid for a relatively long time, and which they could be using covertly on Bitcoin Cash if they desired to do so, even though it would be an advantage to a smaller degree. To be fair to them, they are mining manufacturers and operators, and they researched and developed the advantage from their own resources, so perhaps they do indeed have the right to reap ASICboost benefits while they can. But perhaps it should happen in a publicly known way, not behind closed doors, and it should be temporary, with an agreed patent release date.
In conclusion, there is no good and no bad actor, each side is its own shade of grey. All parties have their own truth (and villainy) to certain degree.
Bitcoin Cash's vision is for bitcoin to be an electronic cash platform and daily payment processor whereas Bitcoin Core SegWit seems to be drawn more to the ideas of bitcoin as an investment vehicle and a larger settlement layer with the payment processor function managed via at best decentralized third party hubs. Both can coexist, or either one can eventually prove more useful and digest the other one by taking over all use-cases.
Additionally, the most popular communication channel on /bitcoin, with roughly 300k subscribers, censors any alternative non-Bitcoin-Core-SegWit opinions and bans people from posting their ideas to discussions (see https://medium.com/@johnblocke/a-brief-and-incomplete-history-of-censorship-in-r-bitcoin-c85a290fe43). This is because their moderators are also supported by Blockstream. Note that the author of this overview has not gotten banned from this particular subreddit (yet), but has experienced shadow-banning first hand. Shadow-banning is a form of censorship. In this particular case, their moderation bot, working together with the human moderators, does the following:
  • (1) look for "Bitcoin Cash" and other undesirable keywords,
  • (2) warn authors that “Bitcoin Cash” is not true bitcoin (which objectively speaking it is, and which is by no means “BCash” that Bitcoin Core SegWit proponents refer to, in a coordinated effort to further confuse public, especially since some of them have published plans to officially release another cryptocurrency called “BCash” in 2018, see https://medium.com/@freetrade68/announcing-bcash-8b938329eaeb),
  • (3) further warn authors that if they try to post such opinions again, they could be banned permanently,
  • (4) tell authors to delete their already posted posts or comments,
  • (5) hide their post from the publicly seen boards with all other posts, thus preventing it from being seen by the other participants in this roughly 300k public forum,
  • (6) and in extreme cases actually “remove” their valid opinions if they slip by uncensored, gain traction, and often rise to popularity as comments on other uncensored posts (see /btc/comments/6v3ee8/on_a_reply_i_made_in_rbitcoin_that_had_over_350 and /btc/comments/6vbyv0/in_case_we_needed_more_evidence_500_upvotes).
This effectively silences objective opinions and creates a dangerous echo-chamber. Suppressing free speech and artificially blowing up transaction fees on Bitcoin Core SegWit is against bitcoin’s fundamental values. Therefore, instead of the original Reddit communication channel, many bitcoin enthusiasts migrated to /btc which has roughly 60k subscribers as of now, up from 20k subscribers a year ago in August 2016 (see http://redditmetrics.com/btc). Moderators there do not censor opinions and allow all polite and civil discussions about scaling, including all opinions on Bitcoin Cash, Bitcoin Core, etc.
Looking beyond their respective leaderships and communication channels, let us review a few network fundamentals and recent developments in the Bitcoin Core and Bitcoin Cash networks. For now, these present Bitcoin Cash with the more favorable long-term prospects.
  • (1) The stress-test and/or attack on the Bitcoin Cash mempool earlier on August 16, 2017 showed that 8MB blocks do work as intended, without the catastrophic complications that Bitcoin Core proponents anticipated and with which they attempted to discourage others (see https://jochen-hoenicke.de/queue/uahf/#2w for the Bitcoin Cash mempool and https://core.jochen-hoenicke.de/queue/#2w for the Bitcoin Core mempool). Note that when compared to the Bitcoin Core mempool on their respective 2 week views, one can observe how each network handles backlogs. On the most recent 2 week graphs, the Y-scale for Bitcoin Core is 110k vs. 90k on Bitcoin Cash. In other words, at the moment, Bitcoin Cash works better than Bitcoin Core even though there is clearly not as big a demand for Bitcoin Cash as there is for Bitcoin Core. The lack of demand for Bitcoin Cash is partly because Bitcoin Cash is only 3 weeks old, not many merchants have started accepting it, and only a limited number of software applications for using Bitcoin Cash have been released so far. By all means, the Bitcoin Cash stress-test and/or attack from August 16, 2017 reveals that the supply will handle the increased demand, more affordably, and at a much quicker rate.
  • (2) Bitcoin Cash “BCH” mining has become temporarily more profitable than mining Bitcoin Core “BTC” (see http://fork.lol). Besides the temporary loss of miners, this puts Bitcoin Core in danger of miners fleeing permanently. Subsequently, the mempool backlog and transaction fees are anticipated to increase further.
  • (3) When compared to Bitcoin Cash transaction fees at roughly $0.02, transaction fees per kB are over 800 times as expensive on Bitcoin Core, currently at over $16 (see https://cashvscore.com).
  • (4) Tipping service that used to work on Bitcoin Core's /Bitcoin a few years back has been revived by a new tipping service piloted on the more neutral /btc with the integration of Bitcoin Cash (see /cashtipperbot).
3. Should we scale you on-chain or off-chain? Scaling bitcoin.
Let us start with the notion that we are impartial to both Bitcoin Core (small blocks, off-chain scaling only) and Bitcoin Cash (big blocks, on-chain scaling only) schools of thought. We will support any or all ideas, as long as they allow for bitcoin to grow organically and eventually succeed as a peer-to-peer network that remains decentralized, distributed, immutable. Should we have a preference in either of the proposed scaling solutions?
First, let us briefly address Bitcoin Core and small blocks again. From the second section of this overview, we understand that there are proposed off-chain scaling methods via second layer solutions (2L), most notably the soon-to-be-implemented lightning network via SegWit on Bitcoin Core. Unfortunately, the lightning network diminishes the distributed and immutable network properties by replacing bitcoin's peer-to-peer network with a two-layer institution-to-institution network and peer-to-hub-to-peer 2L. Do we need this particular 2L right now? Is its complexity truly needed? Is it not at best somewhat cumbersome (if not very redundant)? We already have working code that has been tested and proven to handle 8MB blocks, as seen with Bitcoin Cash on August 16, 2017 (see https://www.cryptocoinsnews.com/first-8mb-bitcoin-cash-block-just-mined). In addition to the ridiculously high on-chain transaction fees illustrated in the earlier section, the lightning network code is perhaps more complex than it needs to be right now, with thousands of lines of code, thus possibly opening up new vectors for bugs or attacks (see https://en.bitcoin.it/wiki/Lightning_Network and https://github.com/lightningnetwork/lnd). Additionally, this particular 2L as currently designed unnecessarily introduces third parties, hubs, that are expected to centralize. At best, these third-party hubs would be decentralized, but they would not be distributed. And these hubs would be by no means integral to the original bitcoin network of users, nodes, and miners.
To paraphrase Occam's razor problem-solving principle, the simplest solution with the most desirable features will prevail (see https://en.wikipedia.org/wiki/Occam%27s_razor). The simplest scalability solution today is Bitcoin Cash because it updates only one line of code, which instantly increases the block size limit. This also allows other companies building on Bitcoin Cash to reduce their own code when compared to Bitcoin Core SegWit's longer code, some even claiming ten-fold reductions (see /btc/comments/6vdm7y/ryan_x_charles_reveals_bcc_plan). The bitcoin ecosystem includes not only the network but also the companies building services on top of it. When these companies can reduce their vectors for bugs or attacks, the entire ecosystem is healthier and more resilient to hacking disasters. Obviously, changes to the bitcoin network code should be as few and as elegant as possible.
But what are the long-term implications of doing the one-line update repeatedly? Eventually, blocks would have to reach sizes over 500MB if they were to process Visa-level capacity (see https://en.bitcoin.it/wiki/Scalability). With decreasing costs of IT infrastructure, bandwidth and storage could accommodate this, but the overhead costs would increase significantly, implying miner and/or full node centralization, further discussed next. To decrease this particular centralization risk, which some consider undesirable and others consider irrelevant, built-in and integrated 2L could keep the block size at a reasonably small-yet-still-large limit.
At first sight, these 2L would remedy the risk of centralization by creating their own centralization incentive. On closer look, and applying Occam's razor again, these 2L do not have to become revenue-seeking third-party hubs as designed in the current lightning network. They can be integrated into the current bitcoin network with, at worst, decentralized miners and, at best, distributed nodes. Recall that miners will eventually need to supplement their diminishing mining reward from new blocks. Additionally, as of today, nodes have no built-in economic incentive to run other than securing the network and keeping the network's overall value at its current level. Therefore, if new 2L were to be developed, they should be designed in a similar way to the lightning network, with the difference that the transaction processing revenue would go not to third-party hubs but to the already integrated miners and nodes.
In other words, why do we need extra hubs if we already have miners and nodes? Let us take the good elements from the lightning network, forget the unnecessary hubs, and focus on integrating the hubs' responsibilities into the already existing miner and node protocols. Why would we add extra elements to a system that already functions with the minimum number of elements possible? Hence, 2L are not necessarily undesirable, as long as they do not unnecessarily introduce third-party hubs.
Lastly, let us discuss partial on-chain scaling with the overall goal of network security. The network security we seek is the immutability and resilience via distributed elements within otherwise decentralized and distributed network. It is not inconceivable to scale bitcoin with bigger blocks as needed, when needed, to a certain degree. The thought process is the following:
  • (1) Block size limit:
We need some upper limit to avoid bloating the network with spam transactions. Okay, that makes sense. Now, what should this limit be? If we agree to disagree with a small block size limit stuck at 1MB, and if we are fine with flexible block size limit increases (inspired by mining difficulty readjustments but on a longer time scale) or big-block propositions (to be increased incrementally), what is holding us back?
  • (2) Miner centralization:
Bigger blocks mean that more data will be transferred on the bitcoin network. Consequently, more bandwidth and data storage will be required. This will create decentralized miners instead of distributed ones. Yes, that is true. And it has already happened due to economies of scale, in particular the efficiency of grouping multiple miners in centralized facilities and the creation of mining pools that collectively and virtually connect groups of miners not physically present in the same facility. These facilities tend to have huge overhead costs, and the added data storage and bandwidth costs are negligible in this context. Individual miners participating in mining pools will quite likely notice somewhat higher operational costs, but the additional revenue from the integrated 2L described earlier will give them an economic incentive to remain active participants. Note that mining was never supposed to be strictly distributed and was always at worst decentralized, as defined in the first section of this overview. To assure an at-best distributed network, we have nodes.
  • (3) Node centralization:
Bigger blocks mean that more data will be transferred on the bitcoin network. Consequently, more bandwidth and data storage will be required. This will create decentralized nodes instead of distributed ones. Again, recall that we have a spectrum of decentralized and distributed networks in mind, not their absolutes. The concern about node centralization (and the subsequent shift from a distributed to a decentralized network) is valid if we only follow on-chain scaling to excessively large MB values. If addressed with the proposed integrated 2L that provides previously unseen economic incentives to participate in the network, this concern is less serious.
Furthermore, other methods to reduce bandwidth and storage needs can be used. A popular proposal is block pruning, which keeps only roughly the most recent 550 MB of blocks and eventually deletes older blocks (see https://news.bitcoin.com/pros-and-cons-on-bitcoin-block-pruning). Block pruning addresses storage needs and makes sure that not all nodes participating in the bitcoin network have to store all transactions ever recorded on the blockchain. Some nodes storing all transactions are still necessary; they are called full nodes. Block pruning does not eliminate full nodes, but it does indeed provide an economic incentive for their reduction and centralization (i.e. saving on storage costs). If addressed with the proposed integrated 2L that provides previously unseen economic incentives to participate in the network, this concern is less serious.
In other words, properly designed 2L should provide economic incentives for all nodes (full and pruned) to remain active and distributed. As of now, only miners earn revenue for participating. The lightning network proposes extra revenue for hubs. Instead, miner revenue could increase by processing 2L transactions as well, and full nodes could gain an economic incentive too. To mine, relatively high startup costs are necessary in order to get the most up-to-date mining hardware and proper cooling equipment, which have to be maintained and periodically upgraded. To run a full node, one needs only stable bandwidth and sufficiently large storage, which can be expanded as needed. To run a pruned node, one needs only stable bandwidth and relatively small storage, which does not need to be expanded.
Keeping the distributed characteristic in mind, it would be much more secure for the bitcoin network if one could earn bitcoin by simply running a node, full or pruned. This could be integrated with a simple code change requiring each node to own a bitcoin address to which miners would send a fraction of processed transaction fees. Of course, pruned nodes would collectively receive the least transaction fee revenue (e.g. 10%), full nodes would collectively receive relatively larger transaction fee revenue (e.g. 20%), whereas mining facilities or mining pools would individually receive the largest transaction fee revenue (e.g. 70%) in addition to the full mining reward from newly mined blocks (i.e. 100%). This would assure that all nodes would remain relatively distributed. Hence, block pruning is a feasible solution.
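To make the proposed split concrete, here is a minimal Python sketch of the revenue distribution described above. The 10/20/70 percentages come from the example figures in this section, while the node counts and fee total are made-up illustration values, not a real protocol specification.

```python
# Hypothetical sketch of the fee split proposed above; the percentages and
# node counts are illustrative assumptions, not part of any real protocol.

def split_block_fees(total_fees_btc, n_full_nodes, n_pruned_nodes):
    """Split one block's transaction fees among participants.

    Assumed split: 70% to the block's miner, 20% shared by all full nodes,
    10% shared by all pruned nodes (the full block subsidy would still go
    entirely to the miner on top of this).
    """
    miner_share = 0.70 * total_fees_btc
    full_node_share = (0.20 * total_fees_btc) / max(n_full_nodes, 1)
    pruned_node_share = (0.10 * total_fees_btc) / max(n_pruned_nodes, 1)
    return miner_share, full_node_share, pruned_node_share

# Example: a block carrying 0.5 BTC in fees, 6,000 full nodes, 50,000 pruned nodes.
miner, per_full, per_pruned = split_block_fees(0.5, 6_000, 50_000)
print(f"miner: {miner:.4f} BTC, per full node: {per_full:.8f} BTC, "
      f"per pruned node: {per_pruned:.8f} BTC")
```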
However, in order to start pruning, one would have to have the full blockchain to begin with. As currently designed, downloading the blockchain for the first time also audits previous blocks for accuracy, which can take days depending on one's bandwidth. This online method is so far the only way to distribute the bitcoin blockchain and the bitcoin network. When the size of the blockchain becomes a concern, a simpler distribution method could be implemented offline. Consider distributions of Linux-based operating systems on USBs. Similarly, the full bitcoin blockchain up to a certain point could be distributed via easy-to-mail USBs. Note that even if we were to get the blockchain in bulk on such a USB, some form of block audit would still have to happen.
A new form of checkpoint hashes could be added to the bitcoin code. For instance, every 2016 blocks (whenever the difficulty readjusts), all IDs from the previous 2015 blocks would be hashed and recorded. That way, with this particular offline blockchain distribution, the first-time user would have to audit only the key 2016th blocks, which occur on average once in roughly 2 weeks. This would significantly reduce bandwidth concerns for the auditing process because only each 2016th block would have to be fetched online to be audited.
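As a rough illustration of this checkpoint idea (not an existing Bitcoin feature), the sketch below hashes each completed 2016-block difficulty period's block IDs into a single digest; the function names and the exact hashing scheme are assumptions made purely for this example.

```python
# Rough sketch of the checkpoint idea above: at every difficulty retarget
# (every 2016 blocks), hash the block IDs of the preceding period into a
# single checkpoint digest. The hashing scheme here is illustrative only.
import hashlib

RETARGET_INTERVAL = 2016

def checkpoint_digest(block_ids):
    """Hash one difficulty period's block IDs (hex strings) into one digest."""
    h = hashlib.sha256()
    for block_id in block_ids:
        h.update(bytes.fromhex(block_id))
    return h.hexdigest()

def build_checkpoints(all_block_ids):
    """Return one digest per completed 2016-block difficulty period."""
    checkpoints = []
    for start in range(0, len(all_block_ids) - RETARGET_INTERVAL + 1, RETARGET_INTERVAL):
        period = all_block_ids[start:start + RETARGET_INTERVAL]
        checkpoints.append(checkpoint_digest(period))
    return checkpoints
```

A first-time user auditing an offline copy of the chain would then only need to fetch and verify these per-period digests online, instead of re-downloading every block.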
Overall, we are able to scale the bitcoin network via initial on-chain scaling approaches supplemented with off-chain scaling approaches. This upgrades the current network to a pruned peer-to-peer network with integrated 2L managed by miners and nodes who assure that the bitcoin network stays decentralized, distributed, immutable.
  • Discussion at /btc/comments/6vj47c/bitcoin_huh_wtf_is_going_on_should_we_scale_you is greatly encouraged.
  • Note that the author u/bit-architect appreciates any Bitcoin Cash donations on Reddit directly or on bitcoin addresses 178ZTiot2QVVKjru2f9MpzyeYawP81vaXi bitcoincash:qp7uqpv2tsftrdmu6e8qglwr2r38u4twlq3f7a48uq (Bitcoin Cash) and 1GqcFi4Cs1LVAxLxD3XMbJZbmjxD8SYY8S (Bitcoin Core).
  • EDIT: Donation addresses above updated.
submitted by bit-architect to btc [link] [comments]

Pools not mining low fees

It has been discussed here a bit, but I just did some simple math via www.blockchair.com: since block height 507000 there have been 59 blocks that are not full. Those blocks add up to ~27.6MB, which leaves around 31MB of unused block space. (I am using 1MB per block on the assumption that most of the 1 sat/byte transactions are not SegWit, to get the worst case.) If these pools were to drop their fee minimums, the mempool would currently be empty.
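For reference, a quick back-of-the-envelope check of those numbers, assuming a 1MB worst case per block as above:

```python
# Back-of-the-envelope check of the numbers above, assuming a 1 MB
# worst-case capacity per block (i.e. ignoring any SegWit weight gain).
blocks_not_full = 59
space_used_mb = 27.6

capacity_mb = blocks_not_full * 1.0          # 59 MB of potential block space
space_left_mb = capacity_mb - space_used_mb  # ~31.4 MB left unused

print(f"unused block space: ~{space_left_mb:.1f} MB")
```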
Blockchair link with filter
Johoe's mempool graph
the pools and count come as no surprise:
* AntPool 28
and since AntPool and bitcoin.com mine 1 sat/byte transactions on bcash, they are just being jerks. I found this annoying and informative, but if you are a miner in any of these pools, I suggest you either contact them and ask them to change, or switch to any other pool that mines low fees. Slush is one I know for certain does, but I am sure you guys can make your own recommendations.
submitted by opant1234 to Bitcoin [link] [comments]

During the last 3 months almost all periods of increased fees were due to a drop in hashrate. The normal 6 blocks/hour rate would have kept the mempool close to empty instead of resulting in 200+ sat/byte fees. Roger Ver et al. are literally responsible for what their propaganda machine complains about.

If you kept an eye on the block rate and the mempool size, the number of blocks missing relative to the normal 6 per hour was always close to the number of megabytes in the mempool (ignoring the large sub-5 sat/B transactions, which still total over 50MB across fewer than 10k transactions).
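A rough sketch of that rule of thumb, with made-up example numbers rather than measured data:

```python
# Illustrative sketch of the observation above: if blocks arrive slower than
# the expected 6 per hour, the shortfall in block capacity (at ~1 MB each)
# should roughly match the growth of the mempool. All inputs are made-up
# example values, not measurements.
def mempool_growth_estimate(hours, observed_blocks, mb_per_block=1.0,
                            expected_blocks_per_hour=6):
    expected_blocks = hours * expected_blocks_per_hour
    missing_blocks = max(expected_blocks - observed_blocks, 0)
    return missing_blocks * mb_per_block  # MB of backlog left unconfirmed

# Example: only 4 blocks/hour found over 5 hours -> ~10 MB of extra backlog.
print(mempool_growth_estimate(hours=5, observed_blocks=20))
```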
Simple botched overlay of the Hashrate and Mempool size:
https://i.imgur.com/0iUjScm.png
Hashrate https://bitinfocharts.com/comparison/bitcoin%20cash-hashrate.html
Mempool https://jochen-hoenicke.de/queue/
fork.lol reduced their graph to 30 days :(
submitted by Bitcoin_Bug to Bitcoin [link] [comments]

Is it a coincidence or a deliberate attack?

As you probably know, several days ago Bitcoin price shoot up to $420. While it's not a big move by Bitcoin standards, it's still highest in the last 4 weeks. A lot of people became optimistic...
And today several things happened:
More about the spam attack: as you can see here, for the last week there was no "fee market"; many transactions paying only 5000 satoshi per kilobyte were confirmed (red line on the upper graph). Then suddenly the mempool size shoots up...
So... is it just a coincidence, or is there a coordinated effort to keep Bitcoin price artificially low?
This idea was formulated before, but today I was able to see it having predictive power, so to speak: I checked the Bitcoin price today only after I saw the "blocks are full" article on reddit, and my prediction that a massive dump was coming was right.
Of course, this could be just a coincidence. Or there might be a different causal relationship, e.g. there's elevated traffic due to a dump on exchanges.
But these alternative explanations do not look plausible, as both spam and dump are intentional rather than random.
BTW there was another event recently: a day ago we had an NYT article praising Ethereum and trashing Bitcoin (well, kinda). I don't think that the NYT is in on it, but it might just be a good time for a dump after this article is read.
submitted by killerstorm to Bitcoin [link] [comments]

Graph: Mempool Transaction Count - The number of transactions waiting to be confirmed. Backlogs at an all-time high, users experiencing delays, unable to transact, miners losing fees. Bitcoin network congested and unreliable due to Core/Blockstream's never-ending obstructionism, censorship and lies.

Graph:
https://blockchain.info/charts/mempool-count?timespan=all
Core/Blockstream is sabotaging the network by forcing everyone to use their shitty tiny 1 MB "max blocksize" when everyone knows the network can already support 4 MB blocks.
It's time for the Bitcoin community to tell the owners of Blockstream and "the devs they rode in on" to go fuck themselves.
Bitcoin Unlimited is the real Bitcoin, in line with Satoshi's vision.
Meanwhile, BlockstreamCoin+RBF+SegWitAsASoftFork+LightningCentralizedHub-OfflineIOUCoin is some kind of weird unrecognizable double-spendable non-consensus-driven fiat-financed offline centralized settlement-only non-P2P "altcoin".
Smart miners like ViaBTC have already upgraded to Bitcoin Unlimited - and more and more users and miners are dumping Core.
The best way to ensure Bitcoin's continued success is to abandon the corrupt incompetent liars from Core/Blockstream - and move forward with simple, safe on-chain scaling now by upgrading to Bitcoin Unlimited.
submitted by ydtm to btc [link] [comments]

History Lesson for new VIA Viacoin Investors

Viacoin is an open source cryptocurrency project based on the Bitcoin blockchain. Publicly introduced on the crypto market in mid 2014, Viacoin integrates decentralized asset transactions on the blockchain, reaching speeds never seen before among cryptocurrencies. This Scrypt-based, Proof of Work coin was created to counter Bitcoin's structural problems, mainly the congested-blockchain delays that inhibit microtransactions as that currency transitions from digital money to a gold-like store of value. Bitcoin Core developers Peter Todd and Btc have been working on this currency and improving it until they were able to reach a lightning-fast speed of 24 seconds per block. These incredible speeds are just one of the features that come with the implementation of the Lightning Network, and they make Bitcoin's slow transactions a thing of the past. To achieve such a dramatic improvement in performance, the developers modified Viacoin so that its OP_RETURN has been extended to 80 bytes, reducing tx and bloat sizes and overcoming multi-signature hacks; the integration of an optimized ECDSA C library allowed this coin to reach a significant speedup for raw signature validation, making it perform up to 5 times better. This will mean easy adoption by merchants and vendors, who won't have to worry anymore about long waits between a payment and its approval.
Todd's role as Chief Scientist and Advisor has proven to be the right choice for this coin, thanks to his focus on Tree Chains, a ground-breaking feature meant to fix the main problems revolving around Bitcoin, such as scalability issues and the trouble Viacoin miners have keeping a reputation on the blockchain in a decentralized mining environment. Thanks to Todd's expertise in sidechains, the future of this cryptocurrency will see the implementation of an alternative blockchain that is not linear. According to the developer, the chains are too unregulated when it comes to trying to establish a strong connection between the operations happening on one chain and what happens elsewhere. Merged mining, scalability and safety are at risk, and tackling these problems is mandatory in order to create a new, disruptive crypto technology. Tree Chains are going to be the basis for broader use and for a series of protocols that will allow users and developers to use Viacoin's blockchain not just to mine and store coins but, just like other new cryptocurrencies, to create secure, decentralized consensus systems living on the blockchain.
The lead role on this BIP9-compatible coin's development team has now been taken by a programmer from the Netherlands called Romano, who has a great fan base in the cryptocurrency community thanks to his progressive views on the future of the world of cryptos. He is strongly in favor of SegWit and considers soft forks on the chain not a problem but an opportunity: according to him, they will provide an easy method to enable scripting upgrades and the implementation of other features that the market has been looking for, such as peer-to-peer layers for compact block relay. Segregated Witness allows increased capacity, ends transaction malleability, makes scripting upgradeable, and reduces the UTXO set. For these reasons, Viacoin Core 0.13 is already SegWit ready and is awaiting signaling.
Together with the implementation of SegWit, Romano has recently been working on finalizing the implementation of merged mining, something that has never been done with altcoins. Merged mining allows users to mine more than one blockchain at the same time; this means that every hash the miner does contributes to the total hash rate of all the currencies involved, and as a result they are all more secure. This release pre-announcement resulted in a market spike, showing how interested the market is in the inclusion of these features in the coin core and blockchain.
The developer has been introducing several of these features, ranging from Hierarchical Deterministic (HD) key generation that allows all Viacoin users to back up their wallets, to compact block relay, which decreases block propagation times on the peer-to-peer network; this creates a healthier network and a better baseline relay security margin. Viacoin's support for relative locktime allows users and miners to time-lock a transaction; this means that a new transaction will be prevented until a relative time change is achieved with a new opcode, OP_CHECKSEQUENCEVERIFY, which allows the execution of a script based on the age of the amount being spent. Support for Child-Pays-For-Parent procedures in Viacoin has been successfully enabled; CPFP will alleviate the problem of transactions that get stuck for a long period in unconfirmed limbo, either because of network bottlenecks or a lack of funds to pay the fee. Thanks to this method, an algorithm will select transactions based on the fee rate inclusive of unconfirmed ancestor transactions; this means that a low-fee transaction will be more likely to get picked up by miners if another transaction with a higher fee that spends its output gets relayed. Several optimizations have been implemented in the blockchain to allow its scaling to proceed freely, ranging from pruning of the chain itself to save disk space, to optimizing memory use thanks to mempool transaction filtering. The UTXO cache has also been optimized, further allowing for significantly faster transaction times.
Transaction anonymity has also been improved, thanks to increased TOR support by the development team. This will help keep this cryptocurrency secure and the identity of who works on it safe; this has proven essential, especially considering how Viacoin's future is right now focused on SegWit and the lightning network. Onion routing as used in TOR has also been included in the routing of transactions, enabling rapid payments and instant transactions on bidirectional payment channels in total anonymity. Viacoin's payment anonymity is one of the main items of this year's roadmap, and by the end of 2017 we'll be able to see Viacoin's latest secure payment technology, called Styx, implemented on its blockchain. This unlinkable anonymous atomic payment hub combines off-the-blockchain cryptographic computations, thanks to Viacoin's scripting functionalities, and makes use of RSA security assumptions, ROM and the Elliptic Curve Digital Signature Algorithm; this will allow participants to make fast, anonymous fund transfers with zero-knowledge contingent payment proofs. Wallets already offer strong privacy, thanks to transactions being broadcast once only; this increases anonymity, since it can't be used to link IPs and TXs. In the future of this coin we'll also see hardware wallet support reaching 100%, with Trezor and Ledger Nano support.
These small, key-chain devices connect to the user's computer to store their private keys and sign transactions in a safe environment. Including Viacoin in these wallets is a smart move, because they are targeted towards people outside the hardcore cryptocurrency user circle, and it guarantees exposure to this currency. The more casual users hear of this coin, the faster they're going to adopt it, being sure of its safety and reliability.
Last October, Viacoin's price saw a strong decline, probably linked to one big online retailer building a decentralized crypto stock exchange based on the Counterparty protocol. As usual with cryptocurrencies, it's easy to misunderstand the market fluctuations and assume that a temporarily underperforming coin is a sign of a lack of strength. The change in the development team certainly contributed to Viacoin losing value, but by watching the coin graphs it's easy to see how this momentary change in price is turning out to be just one of those gentle chart dips that precede a skyrocketing surge in price. Romano is working hard on features and focusing on their implementation, keeping his head low rather than pushing strong marketing like other altcoins are doing. All this investment in ground-breaking properties, most of which are unique to this coin, means that Viacoin is one of those well-kept secrets in the market. Minimal order books and a lack of large investors offering liquidity also help keep this coin in a low-key position, something that is changing as support for larger books is growing. As soon as the market notices this coin and investments go up, we are going to see a rapid surge in the market price, around the 10000 mark by the beginning of January 2018 or late February.
Instead of focusing on a public ICO like every other altcoin, which means a sudden spike in price followed by inclusion on new exchanges that dries up volume, this crypto coin is growing slowly under the radar while it's being well tested and boxes on the roadmap get checked off, one after the other. Romano is constantly working on it and the community around this coin knows it; such a strong pack of followers is a feature that no other alt currency has, and it's what will bring it back to the top of the coin market in the near future. His attitude towards miners that are opposed to SegWit is another strong feature to add to Viacoin, especially because of what he thinks of F2Pool's and Bitmain's politics towards soft forks. The Chinese mining groups seem scared that once alternative crypto coins switch to it they're going to lose leveraging power over Bitcoin's future and won't be able to speculate on the mining and trading market as much as they have in the past, especially for what concerns the marketing market.
It's refreshing to see such dedication and releases being pushed in a constant manner; structural changes in how cryptocurrencies work can only happen when the accent is put on development and not just on trying to convince the market. This strategy is less flashy, but it makes sure the road is ready for the inevitable increase in the userbase. It's always difficult to forecast the future, especially when it concerns alternative coins while Bitcoin is rising so fast. A long-term strategy suggestion would be to get around 1 BTC worth of this cryptocoin as soon as possible and just hold onto it: thanks to the features being rolled in, within 6 months there is going to be an easy gain to be made in the order of 5 to 10 times the initial investment. Using the recent market dip will make sure that the returns are maximized.
What makes Viacoin an excellent opportunity right now is that the price is low and positioned to rise fast as its Lightning Network features become more mainstream. The Lightning Network means secure, instant payments that aren't going to be held back by confirmation bottlenecks, a blockchain capable of scaling to the billions-of-transactions mark, extremely low fees that do not inhibit micropayments, and cross-chain atomic swaps that allow transactions across blockchains without the need for third-party custodians. These features mean that the future of this coin is going to be bright, and the dip in price that started just a while ago is going to end soon as the market prepares for the first of August, when the SegWit drama will affect all crypto markets. The overall trend of Viacoin is bullish with a constant uptrend; more media attention is expected when news about the soft fork spreads beyond the inner circle of crypto aficionados and leaks into the mainstream finance news networks. Solid coins like Viacoin, with a clear policy towards SegWit, will offer the guarantees that the market will be looking for in times of doubt.
INVESTMENT REVIEW
Investment Rating: A+
https://medium.com/@VerthagOG/viacoin-investment-review-ca0982e979bd
submitted by alex61688 to viacoin [link] [comments]

A look back at my BTC TX fees for the last 3+ months.

I've had a few stuck BTC transactions as I've always tried to minimize my fees. Before this week I had not paid for TX acceleration in any way (though one good Samaritan miner saved me once before I tried child pays for parent).
The tx below all were sent using the bitcoin core windows client and custom fee ratios used. Newest TX on top.
224 bytes, fee: 0.00010062 BTC (45 per byte), 17,002.07 exchange rate - fee ~$1.71 (1 hour, used accelerator, this child tx pays for parent)
257 bytes, fee: 0.00010836 BTC (42 per byte), 17,002.07 exchange rate - fee ~$1.84 (12 hours, used accelerator)
258 bytes, fee: 0.00001806 BTC (7 per byte), 11,656.51 exchange rate - fee ~$0.21 (7 days, used child pays for parent)
795 bytes, fee: 0.00002394 BTC (3 per byte), 8,650.00 exchange rate - fee ~$0.21 (9+ days, used accelerator)

258 bytes, fee: 0.00001806 BTC (7 per byte), 6,922.15 exchange rate - fee ~$0.13 (1 hour 48 minutes to confirm)
617 bytes, fee: 0.00004326 BTC (7 per byte), 5,222.83 exchange rate - fee ~$0.23 (2+ days to confirm, used accelerator)
257 bytes, fee: 0.00010836 BTC (42 per byte), 4,599.10 exchange rate - fee ~$0.50 (4 hours to confirm)
258 bytes, fee: 0.00018576 BTC (72 per byte), 4,331.68 exchange rate - fee ~$0.81 (20 minutes to confirm)
257 bytes, fee: 0.00010836 BTC (42 per byte), 4,104.02 exchange rate - fee ~$0.44 (2 days to confirm, sent Aug 22)
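For anyone wanting to double-check the figures above, the sat/byte rate and the dollar cost follow directly from the size, the BTC fee, and the exchange rate; here is a small Python sketch using the first entry:

```python
# Sanity-check of the fee figures listed above: the fee rate in sat/byte and
# the USD cost follow directly from the transaction size, the BTC fee, and
# the exchange rate at the time.
def fee_stats(size_bytes, fee_btc, usd_per_btc):
    fee_satoshi = fee_btc * 100_000_000
    sat_per_byte = fee_satoshi / size_bytes
    fee_usd = fee_btc * usd_per_btc
    return sat_per_byte, fee_usd

# First entry above: 224 bytes, 0.00010062 BTC fee, $17,002.07 per BTC.
rate, usd = fee_stats(224, 0.00010062, 17_002.07)
print(f"{rate:.0f} sat/byte, ${usd:.2f}")   # ~45 sat/byte, ~$1.71
```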
Now going oldest to newest to see how I thought of this whole thing:
First it's probably important to say I used these txs to fund a debit card so quick tx confirmation wasn't a concern for me.
as you can see in August 2017 I got a tx through at 42 per byte fee but it was sent on a Tuesday night / Wednesday morning (around 11:30pm eastern).
my reaction to that first slow tx was to up the fee, but my next tx was sent on a Friday and basically went in the first block (took less than 20 minutes but I didn't keep track of the exact time).
I then dropped the fee back to 42 per for another Tuesday tx and it went in just under 4 hours. By this point I thought I'd mastered fees. 4 hours sounded good for keeping the fee low and still having the tx complete while I slept or worked.
So I'm watching fees and I've found a free accelerator. I try 7 per on a Thursday and it sits for 2 days. I try the free accelerator and it works almost immediately. 2 days is too long but I figure I've got the free accelerator as a fall back so...
I try 7 per again on a Thursday when the pool was nearly empty (Nov 2nd). Email confirmation comes in 1 hour 48 minutes, first confirm was probably closer to an hour and half. I'm thinking I'd been happy to have it take longer so I think lets drop the fee some more.
I try 3 per on a Sat when the pool seems low, and this is where I screw up with overconfidence: a few hours after I submit, the pool starts overflowing, and by mid week doom and gloom stories about how many hundred thousand txs are pending are all over the internet. I eventually get bailed out by a good Samaritan when I ask for help. During that week I couldn't find an accelerator that would charge me less than $16. My free options had gone away (having a fee below 0.0001 kept me from using viaBTC's accelerator).
So you'd think I'd learn my lesson. No I go back to the fee sites and try to pick a fee that will just barely confirm. Another Tuesday tx, I just figured it'd sit until the weekend. I would have been happy with anything under 4 days. A week later and after trying several free accelerators it still hadn't confirmed. Today I finally did a child pays for parent tx essentially paying another $1.71 to get that tx unstuck.
I also did one today at 42 per and used the viaBTC accelerator to get it to go on the next hour.
My two most expensive tx both confirmed on Dec 12, 2017. One had fees I valued at $1.84, The other(s) was the child/parent pair that were $1.92 but could have been cheaper if I'd just paid a proper fee the first time.
I still wonder how high fees will go, but I also have to wonder how much of it is a self-fulfilling prophecy. How many people pay a higher fee to get into the first block they can, for something that could easily wait hours for confirmation?
Well, Child pays for parent and accelerator dances aren't my idea of fun in the long term. I can see why the fee spiral starts. Someone complains about a slow tx, they remember it and up the fee the next time, someone else has to compete with the new higher average fee and does the complain about a slow tx / raise the fee the next time response.
sites used to estimate fees in case you want to be a tightwad and skirt the uncompleted tx line:
end result I'll still use these sites but I'd be better off just paying a higher fee and not spending as much time trying to find the minimum I can get away with.
submitted by dhanson865 to btc [link] [comments]

A scientist or economist who sees Satoshi's experiment running for these 7 years, with price and volume gradually increasing in remarkably tight correlation, would say: "This looks interesting and successful. Let's keep it running longer, unchanged, as-is."

UPDATE: Here's a shorter TL;DR:
https://imgur.com/jLnrOuK
http://nakamotoinstitute.org/static/img/mempool/how-we-know-bitcoin-is-not-a-bubble/MetcalfeGraph.png
Only someone who is anti-science and anti-markets (and anti-investors!) would say:
"The existing Visa credit card network processes about 15 million Internet purchases per day worldwide. Bitcoin can already scale much larger than that with existing hardware for a fraction of the cost. It never really hits a scale ceiling." - Satoshi Nakomoto
https://np.reddit.com/btc/comments/49fzak/the_existing_visa_credit_card_network_processes/
Core / Blockstream are the ones proposing these radical changes in the main parameters of this remarkably successful experiment.
This is anti-scientific of them - and anti-markets, and anti-investors.
They have forgotten the saying:
"If it ain't broke, don't fix it."
They should be free to make their radical changes - but on a side fork.
In this sense, Classic, XT, and Bitcoin Unlimited are all on the "main fork".
Meanwhile Core / Blockstream propose radically veering off onto a "side fork".
Sidebar regarding the confusing terminology around "forks", and an unfortunate historical accident of mathematics allowing the "side fork" to unfairly exploit the apparent "status quo"
The fact that a "hard fork" is necessary to stay on the "main fork" is merely a curious (and in this case, unfortunate) accident of mathematics.
This is because, in this particular case, it happens that staying on the "main fork" involves "loosening" or "widening" or "expanding" or "liberalizing" the definition of valid blocks.
Due to the nature of p2p networks, any fork which "loosens" or "expands" or "liberalizes" the definitions or requirements actually gets the scary-sounding name of "hard fork" - because all of the p2p nodes have to upgrade in order for a definition to be loosened / widened / expanded.
In other words, because the "main fork" involved growth, which involves loosening or removing temporary a hard-coded limit, then staying on the "main fork" actually (counterintuitively!) requires a "hard fork" in this case.
And meanwhile, radically veering off onto a "side fork" can actually (paradoxically) be accomplished by using a "soft fork" - which the developers can quietly add to the network, rather than getting everyone to consciously and explicitly support it.
This is a very unfortunate historical accident of mathematics - which however Core / Blockstream are shamelessly and ruthlessly exploiting (since without this unfair accidental advantage, they would have a much harder time getting the community to agree to all their radical proposed changes above).
So remember:
  • The main fork assumes growth without artificial constraints.
  • Since the code contains a temporary anti-spam kludge which is now imposing an artificial constraint on growth, the only way we can stay on the main fork is by doing a hard fork. It sounds weird (paradoxical), but that's the way it is.
  • Core / Blockstream could never get support for their radical changes if they had to be introduced via a hard fork.
  • Conversely, there would be much more support for Satoshi's original plan, if it didn't unfortunately require a hard fork now in order to continue with it.
So this is the big paradox here:
  • Continuing with Satoshi's original plan requires a hard fork.
  • Radically changing Satoshi's plan can be done via soft forking.
And that's the tragic accident of history which we are up against (and which Core / Blockstream is shamelessly and desperately exploiting, since they know that nobody would support their radical changes if they had to be introduced via a hard fork).
A possible novel economic result, shown on an interesting graph
I know all the cynical kids will knee-jerk yell "correlation isn't causation" and "your statistics professor would be cringing" - but hold on a minute: the following graph is actually quite remarkable, and may be illustrating an important and novel emergent market phenomenon (which we simply never had a chance to test with legacy fiat currencies, due to their, ahem, "irregular" ie politically-gamed mining a/k/a emission schedule):
https://imgur.com/jLnrOuK
http://nakamotoinstitute.org/static/img/mempool/how-we-know-bitcoin-is-not-a-bubble/MetcalfeGraph.png
This graph shows Bitcoin price and volume (ie, blocksize of transactions on the blockchain) rising hand-in-hand in 2011-2014. In 2015, Core/Blockstream tried to artificially freeze the blocksize - and artificially froze the price. Bitcoin Classic will allow volume - and price - to freely rise again.
https://np.reddit.com/btc/comments/44xrw4/this_graph_shows_bitcoin_price_and_volume_ie/
Sometimes correlation does happen.
And the correlation in that graph is pretty fucking tight.
So perhaps we are about to discover some surprisingly simple and elegant new economic theories or even laws (if Core / Blockstream will let us continue with this experiment on the path intended by Satoshi) now that, for the first time in history, we have a currency where the money supply is pre-determined by an asymptotically declining algorithm - rather than a currency where the supply is established by a cartel via political and social processes which are often corrupt.
Maybe the relationship between volume (velocity) and price really is as simple as suggested by the above graph - and this is the first time in history that we could actually see it (because this is the first time where the politicians and the wealthy can't mess with the supply).
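If someone wanted to test that hypothesis rather than eyeball the chart, one simple way is a correlation on log-scaled series; the sketch below uses placeholder numbers only, not the actual chart data:

```python
# A minimal sketch of how one could test the claimed price/volume relationship,
# using Pearson correlation on log-scaled series. The series below are
# placeholder values, not the actual chart data.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Placeholder monthly series: average block size (MB) and price (USD).
volume = [0.1, 0.15, 0.22, 0.3, 0.45, 0.6]
price = [5, 12, 30, 90, 250, 600]

log_volume = [math.log(v) for v in volume]
log_price = [math.log(p) for p in price]
print(round(pearson(log_volume, log_price), 3))
```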
Now we are hitting the point where volume (also known as velocity, or blocksize) is being limited by a cartel - of centralized miners and centralized devs - and it is reasonable to formulate the hypothesis that the price is now, since around late 2014, being suppressed because the velocity / volume is now being suppressed (based on that graph, which shows price dipping away from its previous correlation with volume, starting around late 2014 - when Blockstream came on the scene, and told us we couldn't have nice things anymore).
The devs at Core / Blockstream say:
  • they want to limit volume for the next year, even if it leads to the network getting congested, and users moving to other networks, and
  • they want to increase volume much later by a different, complicated, centralized, slow and expensive approach: side-chains, eg the Lightning Network, which does not exist yet and might never exist.
But a true scientist or economist would say:
  • The possible correlation in the above graph is indeed interesting - and good for investors!
  • Since the original inventor of the experiment (Satoshi Nakamoto) has been right about everything so far, we should continue with his experiment as-is, unchanged.
  • This includes his recommendation that the 1 MB "artificial limit" should be only temporary.
  • So this limit should be increased (or completely removed) so that the experiment can continue un-impeded, and so that we can continue to observe whether the striking correlation between price and volume continues to apply.
This is why Classic, XT and Bitcoin Unlimited are all on the "main fork".
While Core / Blockstream are on a "side fork".
TL;DR
  • Bitcoin has been highly successful for 7 years, also showing a remarkable correlation between volume and price which may herald a new fundamental economic theory or law applicable to cryptocurrencies with algorithmic asymptotically-declining emission schedules (and undiscoverable in legacy fiat currencies due to their erratic and politically influenced emission schedules), namely: value and volume (velocity) are correlated.
  • A true scientist or economist (and a true friend of investors!) would simply allow this highly successful experiment (with its interesting correlation) to continue unchanged. Let's see if the correlation continues!
  • In this case "continuing unchanged" - ie, remaining on the status quo or "main fork", paradoxically requires a "hard fork" now - to remove an anti-spam kludge which introduced an artificial limit (1 MB max block size) which was always intended to be temporary.
  • Core / Blockstream is actually proposing several very radical changes, which constitute a "side fork". But unfortunately they are able to introduce these changes quietly via "soft forks" - which is giving them an unfair advantage, which they are shamelessly exploiting.
  • They are also able to make the temporary (and now unnecessary) anti-spam kludge last much longer than originally intended by doing nothing at all - so inertia / status quo is on their side.
  • Paradoxically, adhering to Satoshi's plan, ie staying on the "main fork" of increasing actual blocksizes (and increasing price!) - requires a change in the code now - a hard fork.
submitted by ydtm to btc [link] [comments]

The Mike Hearn Show: Season Finale (and Bitcoin Classic: Series Premiere)

This post debunks Mike Hearn's conspiracy theories RE Blockstream in his farewell post and points out issues with the behavior of the Bitcoin Classic hard fork and sketchy tactics of its advocates
I used to be torn on how to judge Mike Hearn. On the one hand he has done some good work with BitcoinJ, Lighthouse etc. Certainly his choice of bloom filter has had a net negative effect on the privacy of SPV users, but all in all it works as advertised.* On the other hand, he has single handedly advocated for some of the most alarming behavior changes in the Bitcoin network (e.g. redlists, coinbase reallocation, BIP101 etc...) to date. Not to mention his advocacy in the past year has degraded from any semblance of professionalism into an adversarial us-vs-them propaganda train. I do not believe his long history with the Bitcoin community justifies this adversarial attitude.
As a side note, this post should not be taken as unabated support for Bitcoin Core. Certainly the dev team is made of humans and like all humans mistakes can be made (e.g. March 2013 fork). Some have even engaged in arguably unprofessional behavior but I have not yet witnessed any explicitly malicious activity from their camp (q). If evidence to the contrary can be provided, please share it. Thankfully the development of Bitcoin Core happens more or less completely out in the open; anyone can audit and monitor the goings on. I personally check the repo at least once a day to see what work is being done. I believe that the regular committers are genuinely interested in the overall well being of the Bitcoin network and work towards the common goal of maintaining and improving Core and do their best to juggle the competing interests of the community that depends on them. That is not to say that they are The Only Ones; for the time being they have stepped up to the plate to do the heavy lifting. Until that changes in some way they have my support.
The hard line that some of the developers have drawn in regards to the block size has caused a serious rift and this write up is a direct response to oft-repeated accusations made by Mike Hearn and his supporters about members of the core development team. I have no affiliations or connection with Blockstream, however I have met a handful of the core developers, both affiliated and unaffiliated with Blockstream.
Mike opens his farewell address with his pedigree to prove his opinion's worth. He masterfully washes over the mountain of work put into improving Bitcoin Core over the years by the "small blockians" to paint the picture that Blockstream is stonewalling the development of Bitcoin. The folks who signed Greg's scalability road map have done some of the most important, unsung work in Bitcoin. Performance improvements, privacy enhancements, increased reliability, better sync times, mempool management, bandwidth reductions etc... all those things are thanks to the core devs and the research community (e.g. Christian Decker), many of which will lead to a smoother transition to larger blocks (e.g. libsecp256k1).(1) While ignoring previous work and harping on the block size exclusively, Mike accuses those same people who have spent countless hours working on the protocol of trying to turn Bitcoin into something useless because they remain conservative on a highly contentious issue that has tangible effects on network topology.
The nature of this accusation is characteristic of Mike's attitude over the past year which marked a shift in the block size debate from a technical argument to a personal one (in tandem with DDoS and censorship in /Bitcoin and general toxicity from both sides). For example, Mike claimed that sidechains constitutes a conflict of interest, as Blockstream employees are "strongly incentivized to ensure [bitcoin] works poorly and never improves" despite thousands of commits to the contrary. Many of these commits are top down rewrites of low level Bitcoin functionality, not chump change by any means. I am not just "counting commits" here. Anyways, Blockstream's current client base consists of Bitcoin exchanges whose future hinges on the widespread adoption of Bitcoin. The more people that use Bitcoin the more demand there will be for sidechains to service the Bitcoin economy. Additionally, one could argue that if there was some sidechain that gained significant popularity (hundreds of thousands of users), larger blocks would be necessary to handle users depositing and withdrawing funds into/from the sidechain. Perhaps if they were miners and core devs at the same time then a conflict of interest on small blocks would be a more substantive accusation (create artificial scarcity to increase tx fees). The rational behind pricing out the Bitcoin "base" via capacity constraint to increase their business prospects as a sidechain consultancy is contrived and illogical. If you believe otherwise I implore you to share a detailed scenario in your reply so I can see if I am missing something.
Okay, so back to it. Mike made the right move when Core would not change its position, he forked Core and gave the community XT. The choice was there, most miners took a pass. Clearly there was not consensus on Mike's proposed scaling road map or how big blocks should be rolled out. And even though XT was a failure (mainly because of massive untested capacity increases which were opposed by some of the larger pools whose support was required to activate the 75% fork), it has inspired a wave of implementation competition. It should be noted that the censorship and attacks by members of /Bitcoin is completely unacceptable, there is no excuse for such behavior. While theymos is entitled to run his subreddit as he sees fit, if he continues to alienate users there may be a point of mass exodus following some significant event in the community that he tries to censor. As for the DDoS attackers, they should be ashamed of themselves; it is recommended that alt. nodes mask their user agents.
Although Mike has left the building, his alarmist mindset on the block size debate lives on through Bitcoin Classic, an implementation which is using a more subtle approach to inspire adoption, as jtoomim cozies up with miners to get their support while appealing to the masses with a call for an adherence to Satoshi's "original vision for Bitcoin." That said, it is not clear that he is competent enough to lead the charge on the maintenance/improvement of the Bitcoin protocol. That leaves most of the heavy lifting up to Gavin, as Jeff has historically done very little actual work for Core. We are thus in a potentially more precarious situation then when we were with XT, as some Chinese miners are apparently "on board" for a hard fork block size increase. Jtoomim has expressed a willingness to accept an exceptionally low (60 or 66%) consensus threshold to activate the hard fork if necessary. Why? Because of the lost "opportunity cost" of the threshold not being reached.(c) With variance my guess is that a lucky 55% could activate that 60% threshold. That's basically two Chinese miners. I don't mean to attack him personally, he is just willing to go down a path that requires the support of only two major Chinese mining pools to activate his hard fork. As a side effect of the latency issues of GFW, a block size increase might increase orphan rate outside of GFW, profiting the Chinese pools. With a 60% threshold there is no way for miners outside of China to block that hard fork.
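To get a feel for how much work variance actually does here, a toy Monte Carlo like the one below can estimate how often a given share of hashpower clears a signaling threshold; the window lengths and trial count are arbitrary assumptions, since the actual voting window is not specified above, and the result depends heavily on how long that window is.

```python
# Toy Monte Carlo for the variance point above: with ~55% of hashpower
# signaling, how often does a voting window of a given length show 60% or
# more signaling blocks? Window sizes and trial count are arbitrary
# assumptions chosen purely for illustration.
import random

def prob_threshold_met(signaling_share=0.55, window=1000, threshold=0.60,
                       trials=5_000, seed=42):
    """Fraction of simulated windows where signaling blocks meet the threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        signaling_blocks = sum(rng.random() < signaling_share for _ in range(window))
        if signaling_blocks / window >= threshold:
            hits += 1
    return hits / trials

# Shorter windows make a "lucky" activation much more plausible than long ones.
for window in (100, 1000):
    print(window, prob_threshold_met(window=window))
```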
To compound the popularity of this implementation, the efforts of Mike, Gavin and Jeff have further blinded many within the community to the mountain of effort that core devs have put in. And it seems to be working, as they are beginning to successfully ostracize the core devs beyond the network of "true big block-believers." It appears that Chinese miners are getting tired of the debate (and with it Core) and may shift to another implementation over the issue.(d) Some are going around to mining pools and trying to undermine Core's position in the soft vs. hard fork debate. These private appeals to the miner community are a concern because there is no way to know if bad information is being passed on with the intent to disrupt Core's consensus based approach to development in favor of an alternative implementation controlled (i.e. benevolent dictator) by those appealing directly to miners. If the core team is reading this, you need to get out there and start pushing your agenda so the community has a better understanding of what you all do every day and how important the work is. Get some fancy videos up to show the effects of block size increase and work on reading materials that are easy for non technically minded folk to identify with and get behind.
The soft fork debate really highlights the disingenuity of some of these actors. Generally speaking, soft forks are easier on network participants who do not regularly keep up with the network's software updates or have forked the code for personal use and are unable to upgrade in time, while hard forks require timely software upgrades if the user hopes to maintain consensus after a hardfork. The merits of that argument come with heavy debate. However, more concerning is the fact that hard forks require central planning and arguably increase the power developers have over changes to the protocol.(2) In contrast, the 'signal of readiness' behavior of soft forks allows the network to update without any hardcoded flags and developer oversight. Issues with hard forks are further compounded by activation thresholds, as soft forks generally require 95% consensus while Bitcoin Classic only calls for 60-75% consensus, exposing network users to a greater risk of competing chains after the fork. Mike didn't want to give the Chinese any more power, but now the post XT fallout has pushed the Chinese miners right into the Bitcoin Classic drivers seat.
While a net split did happen briefly during the BIP66 soft fork, imagine that scenario amplified by miners who do not agree to hard fork changes while controlling 25-40% of the networks hashing power. Two actively mined chains with competing interests, the Doomsday Scenario. With a 5% miner hold out on a soft fork, the fork will constantly reorg and malicious transactions will rarely have more than one or two confirmations.(b) During a soft fork, nodes can protect themselves from double spends by waiting for extra confirmations when the node alerts the user that a ANYONECANSPEND transaction has been seen. Thus, soft forks give Bitcoin users more control over their software (they can choose to treat a softfork as a soft fork or a soft fork as a hardfork) which allows for greater flexibility on upgrade plans for those actively maintaining nodes and other network critical software. (2) Advocating for a low threshold hard forks is a step in the wrong direction if we are trying to limit the "central planning" of any particular implementation. However I do not believe that is the main concern of the Bitcoin Classic devs.
To switch gears a bit, Mike is ironically concerned China "controls" Bitcoin, but wanted to implement a block size increase that would only increase their relative control (via increased orphans). Until the p2p wire protocol is significantly improved (IBLT, etc...), there is very little room (if any at all) to raise the block size without significantly increasing orphan risk. This can be easily determined by looking at jtoomim's testnet network data that passed through normal p2p network, not the relay network.(3) In the mean time this will only get worse if no one picks up the slack on the relay network that Matt Corallo is no longer maintaining. (4)
Centralization is bad regardless of the block size, but Mike tries to conflate the centralization issues with the Blockstream block size side show for dramatic effect. In retrospect, it would appear that the initial lack of cooperation on a block size increase actually staved off increases in orphan risk. Unfortunately, this centralization metric will likely increase with the cooperation of Chinese miners and Bitcoin Classic if major strides to reduce orphan rates are not made.
Mike also manages to link to a post from the ProHashing guy RE forever-stuck transactions, which has been shown to generally be the result of poorly maintained/improperly implemented wallet software.(6) Ultimately Mike wants fees to be fixed despite the fact you can't enforce fixed fees in a system that is not centrally planned. Miners could decide to raise their minimum fees even when blocks are >1mb, especially when blocks become too big to reliably propagate across the network without being orphaned. What is the marginal cost for a tx that increases orphan risk by some %? That is a question being explored with flexcaps. Even with larger blocks, if miners outside the GFW fear orphans they will not create the bigger blocks without a decent incentive; in other words, even with a larger block size you might still end up with variable fees. Regardless, it is generally understood that variable fees are not preferred from a UX standpoint, but developers of Bitcoin software do not have the luxury of enforcing specific fees beyond basic defaults hardcoded to prevent cheap DoS attacks. We must expose the user to just enough information so they can make an informed decision without being overwhelmed. Hard? Yes. Impossible. No.
Shifting gears, Mike states that current development progress via segwit is an empty ploy, despite the fact that segwit comes with not only a marginal capacity increase, but it also plugs up major malleability vectors, allows pruning blocks for historical data and a bunch of other fun stuff. It's a huge win for unconfirmed transactions (which Mike should love). Even if segwit does require non-negligible changes to wallet software and Bitcoin Core (500 lines LoC), it allows us time to improve block relay (IBLT, weak blocks) so we can start raising the block size without fear of increased orphan rate. Certainly we can rush to increase the block size now and further exacerbate the China problem, or we can focus on the "long play" and limit negative externalities.
And does segwit help the Lightning Network? Yes. Is that something that indicates a Blockstream conspiracy? No. Comically, the big blockians used to criticize Blockstream for advocating for LN when there was no one working on it, but now that it is actively being developed, the tune has changed and everything Blockstream does is a conspiracy to push for Bitcoin's future as a dystopic LN powered settlement network. Is LN "the answer?" Obviously not, most don't actually think that. How it actually works in practice is yet to be seen and there could be unforseen emergent characteristics that make it less useful for the average user than originally thought. But it's a tool that should be developed in unison with other scaling measures if only for its usefulness for instant txs and micropayments.
Regardless, the fundamental divide rests on ideological differences that we all know well. Mike is fine with the miner-only validation model for nodes and is willing to accept some miner centralization so long as he gets the necessary capacity increases to satisfy his personal expectations for the immediate future of Bitcoin. Greg and co. believe that a distributed full node landscape helps maintain a balance of decentralization in the face of the miner centralization threat. For example, if you have 10 miners who are the only sources for blockchain data, then you run the risk of undetectable censorship, prolific Sybil attacks, and no mechanism for individuals to validate the network without trusting a third party. As an analogy, take the Tor network: you use it with an expectation of privacy while understanding that the multi-hop nature of the routing will increase latency. Certainly you could improve latency by removing a hop or two, but with it you lose some privacy. Does Tor's high latency make it useless? Maybe for watching Netflix, but not for submitting leaked documents to some newspaper. I believe this is the philosophy held by most of the core development team.
Mike does not believe that the Bitcoin network should cater to this philosophy, and sees any activity which stunts the growth of on-chain transactions as a direct attack on the protocol. Ultimately, however, I believe Greg and co. also want Bitcoin to scale on-chain transactions as much as possible. They believe that in order for Bitcoin to increase its capacity while adhering to acceptable levels of decentralization, much work needs to be done. It's not a matter of if the block size will be increased, but when. Mike has confused this adherence to strong principles of decentralization as disingenuous and a cover-up for a dystopic future of Bitcoin where sidechains run wild with financial institutions paying $40 per transaction. Again, this does not make any sense to me. If banks are spending millions to co-opt this network, what advantage does a decentralized node landscape offer them?
There are a few roads that the community can take now: one where we delay a block size increase while improvements to the protocol are made (with the understanding that some users may have to wait a few blocks to have their transaction included, fees will be dependent on transaction volume, and transactions <$1 may be temporarily cost-ineffective), so that when we do increase the block size, orphan rate and node drop-off are insignificant. Another is an immediate large block size increase, which possibly leads to a future Bitcoin which looks nothing like it does today: low numbers of validating nodes, heavy trust in centralized network explorers, and thus a network more vulnerable to government coercion or general attack. Certainly there are smaller steps for block size increases which might not be as immediately devastating, and perhaps that is the middle ground which needs to be trodden to appease those who are emotionally invested in a bigger block size. Combined with segwit, however, max block sizes could reach unacceptable levels. There are other scenarios which might play out with competing chains etc., but in that future Bitcoin has effectively failed.
Like any technology that requires maintenance and human interaction, Bitcoin will require politicking for decision making. Up until now that has occurred via the "vote" users cast by downloading software which implements some change to the protocol. I believe this will continue to be the most robust of the options available to us. Now that there is competition, the Bitcoin Core community can properly advocate for the changes to the protocol that it sees fit without being accused of co-opting the development of Bitcoin. An ironic outcome to the situation at hand. If users want their Bitcoins to remain valuable, they must actively determine which developers are most competent and have their best interests at heart. So far the core dev community has years of substantial and successful contributions under its belt, while the alt implementations have a smattering of developers who have not yet publicly proven (besides perhaps Gavin--although his early mistakes with block size estimates are concerning) that they have the skills and endurance necessary to maintain a full node implementation. Perhaps now it is time that we focus on the personalities to whom many want to entrust Bitcoin's future. Let us see if they can improve the speed at which signatures are validated by 7x. Or if they can devise privacy-preserving protocols like Confidential Transactions. Or if they can figure out ways to improve traversal times across a Merkle tree. Can they implement HD functionality into a wallet without any coin-crushing bugs? Can they successfully modularize their implementation without breaking everything? If so, let's welcome them with open arms.
But Mike is at R3 now, which seems like a better fit for him ideologically. He can govern the rules with relative impunity, and there is not a huge community of open source developers, researchers and enthusiasts to disagree with. I will admit, his posts are very convincing at first blush, but ultimately they are nothing more than a one-sided appeal to those in the community who have unrealistic or incomplete understandings of the technical challenges faced by developers maintaining a consensus-critical, validation-heavy, distributed system that operates within an adversarial environment. Mike always enjoyed attacking Blockstream, but when you survey his past behavior it becomes clear that his motives were not always pure. Why else would you leave with such a nasty, public farewell?
To all the XT'ers, btc'ers and so on, I only ask that you show some compassion when you critique the work of Bitcoin Core devs. We understand you have a competing vision for the scaling of Bitcoin over the next few years. The Core devs want Bitcoin to scale too; you just disagree on how and when it should be done. Vilifying and attacking the developers only further divides the community and scares away potential future talent who may want to further the Bitcoin cause. Unless you can replace the folks doing all this hard work on the protocol, or can pay someone equally competent, please think twice before you say something nasty.
As for Mike, I wish you the best at R3 and hope that you can one day return to the Bitcoin community with a more open mind. It must hurt having your software out there being used by so many while your voice is snuffed out. Hopefully one day you can return when many of the hard problems are solved (e.g. reduced propagation delays, better access to cheap bandwidth) and the road to safe block size increases has been paved.
(*) https://eprint.iacr.org/2014/763.pdf
(q) https://github.com/bitcoinclassic/bitcoinclassic/pull/6
(b) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012026.html
(c) https://github.com/bitcoinclassic/bitcoinclassic/pull/1#issuecomment-170299027
(d) http://toom.im/jameshilliard_classic_PR_1.html
(0) http://bitcoinstats.com/irc/bitcoin-dev/logs/2016/01/06
(1) https://github.com/bitcoin/bitcoin/graphs/contributors
(2) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012014.html
(3) https://toom.im/blocktime (beware of heavy website)
(4) https://bitcointalk.org/index.php?topic=766190.msg13510513#msg13510513
(5) https://news.ycombinator.com/item?id=10774773
(6) http://rusty.ozlabs.org/?p=573
edit, fixed some things.
edit 2, tried to clarify some more things and remove some personal bias thanks to astro
submitted by citboins to Bitcoin [link] [comments]

Is anyone else freaked out by this whole blocksize debate? Does anyone else find themself often agreeing with *both* sides - depending on whichever argument you happen to be reading at the moment? And do we need some better algorithms and data structures?

Why do both sides of the debate seem “right” to me?
I know, I know, a healthy debate is healthy and all - and maybe I'm just not used to the tumult and jostling which would be inevitable in a real live open major debate about something as vital as Bitcoin.
And I really do agree with the starry-eyed idealists who say Bitcoin is vital. Imperfect as it may be, it certainly does seem to represent the first real chance we've had in the past few hundred years to try to steer our civilization and our planet away from the dead-ends and disasters which our government-issued debt-based currencies keep dragging us into.
But this particular debate, about the blocksize, doesn't seem to be getting resolved at all.
Pretty much every time I read one of the long-form major arguments contributed by Bitcoin "thinkers" who I've come to respect over the past few years, this weird thing happens: I usually end up finding myself nodding my head and agreeing with whatever particular piece I'm reading!
But that should be impossible - because a lot of these people vehemently disagree!
So how can both sides sound so convincing to me, simply depending on whichever piece I currently happen to be reading?
Does anyone else feel this way? Or am I just a gullible idiot?
Just Do It?
When you first look at it or hear about it, increasing the size seems almost like a no-brainer: The "big-block" supporters say just increase the blocksize to 20 MB or 8 MB, or do some kind of scheduled or calculated regular increment which tries to take into account the capabilities of the infrastructure and the needs of the users. We do have the bandwidth and the memory to at least increase the blocksize now, they say - and we're probably gonna continue to have more bandwidth and memory in order to be able to keep increasing the blocksize for another couple decades - pretty much like everything else computer-based we've seen over the years (some of this stuff is called by names such as "Moore's Law").
On the other hand, whenever the "small-block" supporters warn about the utter catastrophe that a failed hard-fork would mean, I get totally freaked by their possible doomsday scenarios, which seem totally plausible and terrifying - so I end up feeling that the only way I'd want to go with a hard-fork would be if there was some pre-agreed "triggering" mechanism where the fork itself would only actually "switch on" and take effect provided that some "supermajority" of the network (of who? the miners? the full nodes?) had signaled (presumably via some kind of totally reliable p2p trustless software-based voting system?) that they do indeed "pre-agree" to actually adopt the pre-scheduled fork (and thereby avoid any possibility whatsoever of the precious blockchain somehow tragically splitting into two and pretty much killing this cryptocurrency off in its infancy).
So in this "conservative" scenario, I'm talking about wanting at least 95% pre-adoption agreement - not the mere 75% which I recall some proposals call for, which seems like it could easily lead to a 75/25 blockchain split.
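(For what it's worth, the kind of "triggering" mechanism I'm imagining isn't science fiction - soft forks have already activated by counting version signals over a trailing window of blocks, along the lines of the 750-of-1000 and 950-of-1000 thresholds used before. Here's a toy sketch of that counting logic; the window size, threshold and signal bit are just my assumptions, not any specific proposal.)

    def fork_locked_in(recent_block_versions, signal_bit, window=1000, threshold=0.95):
        """True if at least `threshold` of the last `window` blocks signal
        support for the fork by setting `signal_bit` in their version field."""
        if len(recent_block_versions) < window:
            return False                      # not enough history yet
        last = recent_block_versions[-window:]
        signalling = sum(1 for v in last if v & (1 << signal_bit))
        return signalling / window >= threshold

    # Example: 960 of the last 1000 blocks set bit 1 -> 96% >= 95%, so "lock in".
    versions = [0b10] * 960 + [0b00] * 40
    print(fork_locked_in(versions, signal_bit=1))   # True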
But this time, with this long drawn-out blocksize debate, the core devs, and several other important voices who have become prominent opinion shapers over the past few years, can't seem to come to any real agreement on this.
Weird split among the devs
As far as I can see, there's this weird split: Gavin and Mike seem to be the only people among the devs who really want a major blocksize increase - and all the other devs seem to be vehemently against them.
But then on the other hand, the users seem to be overwhelmingly in favor of a major increase.
And there are meta-questions about governance, about why this didn't come out as a BIP, and about what the availability of Bitcoin XT means.
And today or yesterday there was this really cool big-blockian exponential graph based on doubling the blocksize every two years for twenty years, reminding us of the pure mathematical fact that 2^10 is indeed about 1000 - but not really addressing any of the game-theoretic points raised by the small-blockians. So a lot of the users seem to like it, but when so few devs say anything positive about it, I worry: is this just yet more exponential chart porn?
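(To be fair, the arithmetic behind that chart is trivially checkable - the real question is whether the infrastructure and the game theory cooperate, not whether the exponent is right:)

    # Doubling the blocksize every two years for twenty years = 10 doublings.
    blocksize_mb = 1
    for year in range(2, 22, 2):
        blocksize_mb *= 2
        print(f"year {year:>2}: {blocksize_mb:>5} MB blocks")
    # After 10 doublings: 2**10 = 1024, i.e. roughly a thousand-fold increase.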
On the one hand, Gavin's and Mike's blocksize increase proposal initially seemed like a no-brainer to me.
And on the other hand, all the other devs seem to be against them. Which is weird - not what I'd initially expected at all (but maybe I'm just a fool who's seduced by exponential chart porn?).
Look, I don't mean to be rude to any of the core devs, and I don't want to come off like someone wearing a tinfoil hat - but it has to cross people's minds that the powers that be (the Fed and the other central banks and the governments that use their debt-issued money to run this world into a ditch) could very well be much more scared shitless than they're letting on. If we assume that the powers that be are using their usual playbook and tactics, then it could be worth looking at the book "Confessions of an Economic Hitman" by John Perkins, to get an idea of how they might try to attack Bitcoin. So, what I'm saying is, they do have a track record of sending in "experts" to try to derail projects and keep everyone enslaved to the Creature from Jekyll Island. I'm just saying. So, without getting ad hominem - let's just make sure that our ideas can really stand scrutiny on their own - as Nick Szabo says, we need to make sure there is "more computer science, less noise" in this debate.
When Gavin Andresen first came out with the 20 MB thing - I sat back and tried to imagine if I could download 20 MB in 10 minutes (which seems to be one of the basic mathematical and technological constraints here - right?)
I figured, "Yeah, I could download that" - even with my crappy internet connection.
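(Here's the back-of-the-envelope version of that gut check - sustained download only, ignoring the upload to multiple peers and the initial sync, which multiply the real requirement:)

    block_mb = 20       # proposed block size
    interval_s = 600    # one block every ten minutes

    sustained_mbps = block_mb * 8 / interval_s        # megabits per second
    monthly_gb = block_mb * 6 * 24 * 30 / 1000        # download only, GB per month

    print(f"~{sustained_mbps:.2f} Mbps sustained, ~{monthly_gb:.0f} GB per month")
    # ~0.27 Mbps sustained and ~86 GB/month just to receive 20 MB blocks.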
And I guess the telecoms might be nice enough to continue to double our bandwidth every two years for the next couple decades – if we ask them politely?
On the other hand - I think we should be careful about entrusting the financial freedom of the world to the greedy hands of the telecoms companies - given all their shady shenanigans over the past few years in many countries. After decades of the MPAA and the FBI trying to chip away at BitTorrent, lately PirateBay has been hard to access. I would say it's quite likely that certain persons at institutions like JPMorgan and Goldman Sachs and the Fed might be very, very motivated to see Bitcoin fail - so we shouldn't be too sure about scaling plans which depend on the willingness of companies like Verizon and AT&T to double our bandwidth every two years.
Maybe the real important hardware buildout challenge for a company like 21 (and its allies such as Qualcomm) to take on now would not be "a miner in every toaster" but rather "Google Fiber Download and Upload Speeds in every Country, including China".
I think I've read all the major stuff on the blocksize debate from Gavin Andresen, Mike Hearn, Greg Maxwell, Peter Todd, Adam Back, Jeff Garzik and several other major contributors - and, oddly enough, all their arguments seem reasonable - heck, even Luke-Jr seems reasonable to me on the blocksize debate, and I always thought he was a whackjob overly influenced by superstition and numerology - and now today I'm reading the article by Bram Cohen - the inventor of BitTorrent - and I find myself agreeing with him too!
I say to myself: What's going on with me? How can I possibly agree with all of these guys, if they all have such vehemently opposing viewpoints?
I mean, think back to the glory days of a couple of years ago, when all we were hearing was how this amazing unprecedented grassroots innovation called Bitcoin was going to benefit everyone from all walks of life, all around the world:
...basically the entire human race transacting everything into the blockchain.
(Although let me say that I think that people's focus on ideas like driverless cabs creating realtime fare markets based on supply and demand seems to be setting our sights a bit low as far as Bitcoin's abilities to correct the financial world's capital-misallocation problems which seem to have been made possible by infinite debt-based fiat. I would have hoped that a Bitcoin-based economy would solve much more noble, much more urgent capital-allocation problems than driverless taxicabs creating fare markets or refrigerators ordering milk on the internet of things. I was thinking more along the lines that Bitcoin would finally strangle dead-end debt-based deadly-toxic energy industries like fossil fuels and let profitable clean energy industries like Thorium LFTRs take over - but that's another topic. :=)
Paradoxes in the blocksize debate
Let me summarize the major paradoxes I see here:
(1) Regarding the people (the majority of the core devs) who are against a blocksize increase: Well, the small-blocks arguments do seem kinda weird, and certainly not very "populist", in the sense that: When on earth have end-users ever heard of a computer technology whose capacity didn't grow pretty much exponentially year-on-year? All the cool new technology we've had - from hard drives to RAM to bandwidth - started out pathetically tiny and grew to unimaginably huge over the past few decades - and all our software has in turn gotten massively powerful and big and complex (sometimes bloated) to take advantage of the enormous new capacity available.
But now suddenly, for the first time in the history of technology, we seem to have a majority of the devs, on a major p2p project - saying: "Let's not scale the system up. It could be dangerous. It might break the whole system (if the hard-fork fails)."
I don't know, maybe I'm missing something here, maybe someone else could enlighten me, but I don't think I've ever seen this sort of thing happen in the last few decades of the history of technology - devs arguing against scaling up p2p technology to take advantage of expected growth in infrastructure capacity.
(2) But... on the other hand... the dire warnings of the small-blockians about what could happen if a hard-fork were to fail - wow, they do seem really dire! And these guys are pretty much all heavyweight, experienced programmers and/or game theorists and/or p2p open-source project managers.
I must say, that nearly all of the long-form arguments I've read - as well as many, many of the shorter comments I've read from many users in the threads, whose names I at least have come to more-or-less recognize over the past few months and years on reddit and bitcointalk - have been amazingly impressive in their ability to analyze all aspects of the lifecycle and management of open-source software projects, bringing up lots of serious points which I could never have come up with, and which seem to come from long experience with programming and project management - as well as dealing with economics and human nature (eg, greed - the game-theory stuff).
So a lot of really smart and experienced people with major expertise in various areas ranging from programming to management to game theory to politics to economics have been making some serious, mature, compelling arguments.
But, as I've been saying, the only problem to me is: in many of these cases, these arguments are vehemently in opposition to each other! So I find myself agreeing with pretty much all of them, one by one - which means the end result is just a giant contradiction.
I mean, today we have Bram Cohen, the inventor of BitTorrent, arguing (quite cogently and convincingly to me), that it would be dangerous to increase the blocksize. And this seems to be a guy who would know a few things about scaling out a massive global p2p network - since the protocol which he invented, BitTorrent, is now apparently responsible for like a third of the traffic on the internet (and this despite the long-term concerted efforts of major evil players such as the MPAA and the FBI to shut the whole thing down).
Was the BitTorrent analogy too "glib"?
By the way - I would like to go on a slight tangent here and say that one of the main reasons why I felt so "comfortable" jumping on the Bitcoin train back a few years ago, when I first heard about it and got into it, was the whole rough analogy I saw with BitTorrent.
I remembered the perhaps paradoxical fact that when a torrent is more popular (eg, a major movie release that just came out last week), then it actually becomes faster to download. More people want it, so more people have a few pieces of it, so more people are able to get it from each other. A kind of self-correcting economic feedback loop, where more demand directly leads to more supply.
(BitTorrent manages to pull this off by essentially adding a certain structure to the file being shared, so that it's not simply like an append-only list of 1 MB blocks, but rather more like a random-access or indexed array of 1 MB chunks. Say you're downloading a film which is 700 MB. As soon as your "client" program has downloaded a single 1-MB chunk - say chunk #99 - your "client" program instantly turns into a "server" program as well - offering that chunk #99 to other clients. From my simplistic understanding, I believe the Bitcoin protocol does something similar, to provide a p2p architecture. Hence my - perhaps naïve - assumption that Bitcoin already had the right algorithms / architecture / data structure to scale.)
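(A toy model of that "client instantly becomes a server" behaviour - hugely simplified, of course; real BitTorrent clients also do rarest-first piece selection, tit-for-tat and so on:)

    class Peer:
        """Minimal sketch: a peer can serve any chunk it has already received."""
        def __init__(self, name, total_chunks):
            self.name = name
            self.total_chunks = total_chunks
            self.chunks = {}                 # chunk index -> bytes

        def receive(self, index, data):
            self.chunks[index] = data        # the moment we have chunk #99...

        def can_serve(self, index):
            return index in self.chunks      # ...we can serve chunk #99 to others

        def missing(self):
            return [i for i in range(self.total_chunks) if i not in self.chunks]

    # A 700 MB film split into 700 one-MB chunks; Alice has only chunk #99 so far,
    # yet she can already upload it to other peers while downloading the rest.
    alice = Peer("alice", total_chunks=700)
    alice.receive(99, b"...one MB of movie data...")
    print(alice.can_serve(99), len(alice.missing()))   # True 699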
The efficiency of the BitTorrent network seemed to jibe with that "network law" (Metcalfe's Law?) about fax machines. This law states that the more fax machines there are, the more valuable the network of fax machines becomes. In other words, the value of the network grows on the order of the square of the number of nodes.
This is in contrast with other technology like cars, where the more you have, the worse things get. The more cars there are, the more traffic jams you have, so things start going downhill. I guess this is because highway space is limited - after all, we can't pave over the entire countryside, and we never did get those flying cars we were promised, as David Graeber laments in a recent essay in The Baffler magazine :-)
And regarding the "stress test" supposedly happening right now in the middle of this ongoing blocksize debate, I don't know what worries me more: the fact that it apparently is taking only $5,000 to do a simple kind of DoS on the blockchain - or the fact that there are a few rumors swirling around saying that the unknown company doing the stress test shares the same physical mailing address with a "scam" company?
Or maybe we should just be worried that so much of this debate is happening on a handful of forums which are controlled by some guy named theymos who's already engaged in some pretty "contentious" or "controversial" behavior like blowing a million dollars on writing forum software (I guess he never heard that reddit.com software is open-source)?
So I worry that the great promise of "decentralization" might be more fragile than we originally thought.
Scaling
Anyways, back to Metcalfe's Law: with virtual stuff, like torrents and fax machines, the more the merrier. The more people downloading a given movie, the faster it arrives - and the more people own fax machines, the more valuable the overall fax network.
So I kind of (naïvely?) assumed that Bitcoin, being "virtual" and p2p, would somehow scale up the same magical way BitTorrent did. I just figured that more people using it would somehow automatically make it stronger and faster.
But now a lot of devs have started talking in terms of the old "scarcity" paradigm, talking about blockspace being a "scarce resource" and talking about "fee markets" - which seems kinda scary, and antithetical to much of the earlier rhetoric we heard about Bitcoin (the stuff about supporting our favorite creators with micropayments, and the stuff about Africans using SMS to send around payments).
Look, when some asshole is in line in front of you at the cash register and he's holding up the line so they can run his credit card to buy a bag of Cheetos, we tend to get pissed off at the guy - clogging up our expensive global electronic payment infrastructure to make a two-dollar purchase. And that's on a fairly efficient centralized system - and presumably after a year or so, VISA and the guy's bank can delete or compress the transaction in their SQL databases.
Now, correct me if I'm wrong, but if some guy buys a coffee on the blockchain, or if somebody pays an online artist $1.99 for their work - then that transaction, a few hundred bytes or so, has to live on the blockchain forever?
Or is there some "pruning" thing that gets rid of it after a while?
And this could lead to another question: Viewed from the perspective of double-entry bookkeeping, is the blockchain "world-wide ledger" more like the "balance sheet" part of accounting, i.e. a snapshot showing current assets and liabilities? Or is it more like the "cash flow" part of accounting, i.e. a journal showing historical revenues and expenses?
When I think of thousands of machines around the globe having to lug around multiple identical copies of a multi-gigabyte file containing some asshole's coffee purchase forever and ever... I feel like I'm ideologically drifting in one direction (where I'd end up also being against really cool stuff like online micropayments and Africans banking via SMS)... so I don't want to go there.
But on the other hand, when really experienced and battle-tested veterans with major experience in the world of open-source programming and project management (the "small-blockians") warn of the catastrophic consequences of a possible failed hard-fork, I get freaked out and I wonder if Bitcoin really was destined to be a settlement layer for big transactions.
Could the original programmer(s) possibly weigh in?
And I don't mean to appeal to authority - but heck, where the hell is Satoshi Nakamoto in all this? I do understand that he/she/they would want to maintain absolute anonymity - but on the other hand, I assume SN wants Bitcoin to succeed (both for the future of humanity - or at least for all the bitcoins SN allegedly holds :-) - and I understand there is a way that SN can cryptographically sign a message - and I understand that as the original developer of Bitcoin, SN had some very specific opinions about the blocksize... So I'm kinda wondering if Satoshi could weigh in from time to time. Just to help out a bit. I'm not saying "Show us a sign" like a deity or something - but damn it sure would be fascinating and possibly very helpful if Satoshi gave us his/her/their 2 satoshis' worth at this really confusing juncture.
Are we using our capacity wisely?
I'm not a programming or game-theory whiz, I'm just a casual user who has tried to keep up with technology over the years.
It just seems weird to me that here we have this massive supercomputer (500 times more powerful than all the supercomputers in the world combined) doing fairly straightforward "embarrassingly parallel" number-crunching operations to secure a p2p world-wide ledger called the blockchain to keep track of a measly 2.1 quadrillion tokens spread out among a few billion addresses - and a couple of years ago you had people like Rick Falkvinge saying the blockchain would someday be supporting multi-million-dollar letters of credit for international trade, and you had people like Andreas Antonopoulos saying the blockchain would someday allow billions of "unbanked" people to send remittances around the village or around the world dirt-cheap - and now suddenly in June 2015 we're talking about blockspace as a "scarce resource" and talking about "fee markets" and partially centralized, corporate-sponsored "Level 2" vaporware like Lightning Network, and some mysterious company is "stress testing" or "DoS-ing" the system by throwing away a measly $5,000, and suddenly it sounds like the whole system could eventually head right back into PayPal and Western Union territory again, in terms of expensive fees.
When I got into Bitcoin, I really was heavily influenced by vague analogies with BitTorrent: I figured everyone would just have a tiny little utorrent-type program running on their machine (ie, Bitcoin-QT or Armory or Mycelium etc.).
I figured that just like anyone can host their own blog or webserver, anyone would be able to host their own bank.
Yeah, Google and Mozilla and Twitter and Facebook and WhatsApp did come along and build stuff on top of TCP/IP, so I did expect a bunch of companies to build layers on top of the Bitcoin protocol as well. But I still figured the basic unit of bitcoin client software powering the overall system would be small and personal and affordable and p2p - like a bittorrent client - or at the most, like a cheap server hosting a blog or email server.
And I figured there would be a way at the software level, at the architecture level, at the algorithmic level, at the data structure level - to let the thing scale - if not infinitely, at least fairly massively and gracefully - the same way the BitTorrent network has.
Of course, I do also understand that with BitTorrent, you're sharing a read-only object (eg, a movie) - whereas with Bitcoin, you're achieving distributed trustless consensus and appending it to a write-only (or append-only) database.
So I do understand that the problem which BitTorrent solves is much simpler than the problem which Bitcoin sets out to solve.
But still, it seems that there's got to be a way to make this thing scale. It's p2p and it's got 500 times more computing power than all the supercomputers in the world combined - and so many brilliant and motivated and inspired people want this thing to succeed! And Bitcoin could be our civilization's last chance to steer away from the oncoming debt-based ditch of disaster we seem to be driving into!
It just seems that Bitcoin has got to be able to scale somehow - and all these smart people working together should be able to come up with a solution which pretty much everyone can agree - in advance - will work.
Right? Right?
A (probably irrelevant) tangent on algorithms and architecture and data structures
I'll finally weigh with my personal perspective - although I might be biased due to my background (which is more on the theoretical side of computer science).
My own modest - or perhaps radical - suggestion would be to ask whether we're really looking at all the best possible algorithms and architectures and data structures out there.
From this perspective, I sometimes worry that the overwhelming majority of the great minds working on the programming and game-theory stuff might come from a rather specific, shall we say "von Neumann" or "procedural" or "imperative" school of programming (ie, C and Python and Java programmers).
It seems strange to me that such a cutting-edge and important computer project would have so little participation from the great minds at the other end of the spectrum of programming paradigms - namely, the "functional" and "declarative" and "algebraic" (and co-algebraic!) worlds.
For example, I was struck in particular by statements I've seen here and there (which seemed rather hubristic or lackadaisical to me - for something as important as Bitcoin), that the specification of Bitcoin and the blockchain doesn't really exist in any form other than the reference implementation(s) (in procedural languages such as C or Python?).
Curry-Howard anyone?
I mean, many computer scientists are aware of the Curry-Howard isomorphism, which basically says that the relationship between a theorem and its proof is equivalent to the relationship between a specification and its implementation. In other words, there is a long tradition in mathematics (and in computer programming) of first stating what you want (the theorem, or the specification) and only then showing how you get it (the proof, or the implementation).
And it's not exactly "turtles all the way down" either: a specification is generally simple and compact enough that a good programmer can usually simply visually inspect it to determine if it is indeed "correct" - something which is very difficult, if not impossible, to do with a program written in a procedural, implementation-oriented language such as C or Python or Java.
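(A toy illustration of that split, in plain Python rather than anything as rigorous as Maude: the specification says what must hold, the implementation says how, and a property check ties the two together. A real toolchain would prove the property rather than sample it.)

    from collections import Counter
    import random

    # Specification (the "theorem"): the output is ordered and is a
    # permutation of the input. Short enough to verify by eye.
    def satisfies_sort_spec(xs, ys):
        return all(a <= b for a, b in zip(ys, ys[1:])) and Counter(xs) == Counter(ys)

    # Implementation (the "proof"): one particular way of producing such an output.
    def insertion_sort(xs):
        out = []
        for x in xs:
            i = 0
            while i < len(out) and out[i] <= x:
                i += 1
            out.insert(i, x)
        return out

    # Property-based spot check over random inputs.
    for _ in range(1000):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        assert satisfies_sort_spec(xs, insertion_sort(xs))
    print("spec held on 1000 random inputs")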
So I worry that we've got this tradition, from the open-source github C/Java programming tradition, of never actually writing our "specification", and only writing the "implementation". In mission-critical military-grade programming projects (which often use languages like Ada or Maude) this is simply not allowed. It would seem that a project as mission-critical as Bitcoin - which could literally be crucial for humanity's continued survival - should also use this kind of military-grade software development approach.
And I'm not saying rewrite the implementations in these kind of theoretical languages. But it might be helpful if the C/Python/Java programmers in the Bitcoin imperative programming world could build some bridges to the Maude/Haskell/ML programmers of the functional and algebraic programming worlds to see if any kind of useful cross-pollination might take place - between specifications and implementations.
For example, the JavaFAN formal analyzer for multi-threaded Java programs (developed using tools based on the Maude language) was applied to the Remote Agent AI program aboard NASA's Deep Space 1 spacecraft, written in Java - and it took only a few minutes using formal mathematical reasoning to detect a potential deadlock which would have occurred years later during the space mission when the damn spacecraft was already way out around Pluto.
And "the Maude-NRL (Naval Research Laboratory) Protocol Analyzer (Maude-NPA) is a tool used to provide security proofs of cryptographic protocols and to search for protocol flaws and cryptosystem attacks."
These are open-source formal reasoning tools developed by DARPA and used by NASA and the US Navy to ensure that program implementations satisfy their specifications. It would be great if some of the people involved in these kinds of projects could contribute to help ensure the security and scalability of Bitcoin.
But there is a wide abyss between the kinds of programmers who use languages like Maude and the kinds of programmers who use languages like C/Python/Java - and it can be really hard to get the two worlds to meet. There is a bit of rapprochement between these language communities in languages which might be considered as being somewhere in the middle, such as Haskell and ML. I just worry that Bitcoin might be turning into being an exclusively C/Python/Java project (with the algorithms and practitioners traditionally of that community), when it could be more advantageous if it also had some people from the functional and algebraic-specification and program-verification community involved as well. The thing is, though: the theoretical practitioners are big on "semantics" - I've heard them say stuff like "Yes but a C / C++ program has no easily identifiable semantics". So to get them involved, you really have to first be able to talk about what your program does (specification) - before proceeding to describe how it does it (implementation). And writing high-level specifications is typically very hard using the syntax and semantics of languages like C and Java and Python - whereas specs are fairly easy to write in Maude - and not only that, they're executable, and you state and verify properties about them - which provides for the kind of debate Nick Szabo was advocating ("more computer science, less noise").
Imagine if we had an executable algebraic specification of Bitcoin in Maude, where we could formally reason about and verify certain crucial game-theoretical properties - rather than merely hand-waving and arguing and deploying and praying.
And so in the theoretical programming community you've got major research on various logics such as Girard's Linear Logic (which is resource-conscious) and Bruni and Montanari's Tile Logic (which enables "pasting" bigger systems together from smaller ones in space and time), and executable algebraic specification languages such as Meseguer's Maude (which would be perfect for game theory modeling, with its functional modules for specifying the deterministic parts of systems and its system modules for specifying non-deterministic parts of systems, and its parameterized skeletons for sketching out the typical architectures of mobile systems, and its formal reasoning and verification tools and libraries which have been specifically applied to testing and breaking - and fixing - cryptographic protocols).
And somewhat closer to the practical hands-on world, you've got stuff like Google's MapReduce and lots of Big Data database languages developed by Google as well. And yet here we are with a mempool growing dangerously big for RAM on a single machine, and a 20-GB append-only list as our database - and not much debate on practical results from Google's Big Data databases.
(And by the way: maybe I'm totally ignorant for asking this, but I'll ask anyways: why the hell does the mempool have to stay in RAM? Couldn't it work just as well if it were stored temporarily on the hard drive?)
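(For what it's worth, here's a toy sketch of the kind of policy people seem to be converging on: treat the mempool as a feerate-ordered pool with a hard byte cap, and evict the cheapest transactions when the cap is hit - at which point whether the backing store is RAM or disk becomes an implementation detail. The cap and field names are illustrative; this is not Bitcoin Core's actual data structure.)

    class CappedMempool:
        """Toy mempool: evict the lowest-feerate transactions when over a byte cap."""
        def __init__(self, max_bytes=300_000_000):      # e.g. a 300 MB cap (assumption)
            self.max_bytes = max_bytes
            self.txs = {}                               # txid -> (size_bytes, fee_satoshi)
            self.used = 0

        def add(self, txid, size_bytes, fee_satoshi):
            self.txs[txid] = (size_bytes, fee_satoshi)
            self.used += size_bytes
            self._evict_if_needed()

        def _evict_if_needed(self):
            # Drop the cheapest tx (satoshi per byte) until we're back under the cap.
            while self.used > self.max_bytes and self.txs:
                victim = min(self.txs, key=lambda t: self.txs[t][1] / self.txs[t][0])
                size, _fee = self.txs.pop(victim)
                self.used -= size

    pool = CappedMempool(max_bytes=1_000)
    pool.add("tx-a", 600, fee_satoshi=600)     # 1 sat/byte
    pool.add("tx-b", 600, fee_satoshi=6_000)   # 10 sat/byte -> "tx-a" gets evicted
    print(sorted(pool.txs))                    # ['tx-b']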
And you've got CalvinDB out of Yale which apparently provides an ACID layer on top of a massively distributed database.
Look, I'm just an armchair follower cheering on these projects. I can barely manage to write a query in SQL, or read through a C or Python or Java program. But I would argue two points here: (1) these languages may be too low-level and "non-formal" for writing and modeling and formally reasoning about and proving properties of mission-critical specifications - and (2) there seem to be some Big Data tools already deployed by institutions such as Google and Yale which support global petabyte-size databases on commodity boxes with nice properties such as near-real-time and ACID - and I sometimes worry that the "core devs" might be failing to review the literature (and reach out to fellow programmers) out there to see if there might be some formal program-verification and practical Big Data tools out there which could be applied to coming up with rock-solid, 100% consensus proposals to handle an issue such as blocksize scaling, which seems to have become much more intractable than many people might have expected.
I mean, the protocol solved the hard stuff: the elliptic-curve stuff and the Byzantine Generals stuff. How the heck can we be falling down on the comparatively "easier" stuff - like scaling the blocksize?
It just seems like defeatism to say "Well, the blockchain is already 20-30 GB and it's gonna be 20-30 TB ten years from now - and we need 10 Mbps bandwidth now and 10,000 Mbps bandwidth 20 years from now - assuming the evil Verizon and AT&T actually give us that - so let's just become a settlement platform and give up on buying coffee or banking the unbanked or doing micropayments, and let's push all that stuff into some corporate-controlled vaporware without even a whitepaper yet."
So you've got Peter Todd doing some possibly brilliant theorizing and extrapolating on the idea of "treechains" - there is a Let's Talk Bitcoin podcast from about a year ago where he sketches the rough outlines of this idea out in a very inspiring, high-level way - although the specifics have yet to be hammered out. And we've got Blockstream also doing some hopeful hand-waving about the Lightning Network.
Things like Peter Todd's treechains - which may be similar to the spark in some devs' eyes called Lightning Network - are examples of the kind of algorithm or architecture which might manage to harness the massive computing power of miners and nodes in such a way that certain kinds of massive and graceful scaling become possible.
It just seems like a kind of tiny dev community working on this stuff.
Being a C or Python or Java programmer should not be a pre-req to being able to help contribute to the specification (and formal reasoning and program verification) for Bitcoin and the blockchain.
XML and UML are crap modeling and specification languages, and C and Java and Python are even worse (as specification languages - although as implementation languages, they are of course fine).
But there are serious modeling and specification languages out there, and they could be very helpful at times like this - where what we're dealing with is questions of modeling and specification (ie, "needs and requirements").
One just doesn't often see the practical, hands-on world of open-source github implementation-level programmers and the academic, theoretical world of specification-level programmers meeting very often. I wish there were some way to get these two worlds to collaborate on Bitcoin.
Maybe a good first step to reach out to the theoretical people would be to provide a modular executable algebraic specification of the Bitcoin protocol in a recognized, military/NASA-grade specification language such as Maude - because that's something the theoretical community can actually wrap their heads around, whereas it's very hard to get them to pay attention to something written only as a C / Python / Java implementation (without an accompanying specification in a formal language).
They can't check whether the program does what it's supposed to do - if you don't provide a formal mathematical definition of what the program is supposed to do.
Specification : Implementation :: Theorem : Proof
You have to remember: the theoretical community is very aware of the Curry-Howard isomorphism. Just like it would be hard to get a mathematician's attention by merely showing them a proof without also telling them what theorem the proof is proving - by the same token, it's hard to get the attention of a theoretical computer scientist by merely showing them an implementation without showing them the specification that it implements.
Bitcoin is currently confronted with a mathematical or "computer science" problem: how to secure the network while getting high enough transactional throughput, while staying within the limited RAM, bandwidth and hard drive space limitations of current and future infrastructure.
The problem only becomes a political and economic problem if we give up on trying to solve it as a mathematical and "theoretical computer science" problem.
There should be a plethora of whitepapers out now proposing algorithmic solutions to these scaling issues. Remember, all we have to do is apply the Byzantine Generals consensus-reaching procedure to a worldwide database which shuffles 2.1 quadrillion tokens among a few billion addresses. The 21 company has emphatically pointed out that racing to compute a hash to add a block is an "embarrassingly parallel" problem - very easy to decompose among cheap, fault-prone, commodity boxes, and recompose into an overall solution - along the lines of Google's highly successful MapReduce.
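("Embarrassingly parallel" really does mean embarrassingly simple to split: the nonce space can be carved into independent ranges with no coordination beyond "stop when someone wins". A single-machine sketch with an artificially easy difficulty target - real miners work on the actual 80-byte block header against a vastly harder target:)

    import hashlib
    from multiprocessing import Pool

    HEADER = b"toy block header"   # stand-in for a real 80-byte block header
    TARGET = 2 ** 240              # artificially easy target, for illustration only

    def search(nonce_range):
        """Scan one slice of the nonce space; return a winning nonce or None."""
        start, stop = nonce_range
        for nonce in range(start, stop):
            payload = HEADER + nonce.to_bytes(8, "little")
            digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
            if int.from_bytes(digest, "big") < TARGET:
                return nonce
        return None

    if __name__ == "__main__":
        # Decompose: four workers get four disjoint nonce ranges.
        slices = [(i * 250_000, (i + 1) * 250_000) for i in range(4)]
        with Pool(4) as workers:
            hits = [n for n in workers.map(search, slices) if n is not None]
        # Recompose: any single hit is a valid "block".
        print("winning nonces found:", hits)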
I guess what I'm really saying is (and I don't mean to be rude here), is that C and Python and Java programmers might not be the best qualified people to develop and formally prove the correctness of (note I do not say: "test", I say "formally prove the correctness of") these kinds of algorithms.
I really believe in the importance of getting the algorithms and architectures right - look at Google Search itself, it uses some pretty brilliant algorithms and architectures (eg, MapReduce, Paxos) which enable it to achieve amazing performance - on pretty crappy commodity hardware. And look at BitTorrent, which is truly p2p, where more demand leads to more supply.
So, in this vein, I will close this lengthy rant with an oddly specific link - which may or may not be able to make some interesting contributions to finding suitable algorithms, architectures and data structures which might help Bitcoin scale massively. I have no idea if this link could be helpful - but given the near-total lack of people from the Haskell and ML and functional worlds in these Bitcoin specification debates, I thought I'd be remiss if I didn't throw this out - just in case there might be something here which could help us channel the massive computing power of the Bitcoin network in such a way as to enable us to simply sidestep this kind of desperate debate where both sides seem right because the other side seems wrong.
https://personal.cis.strath.ac.uk/neil.ghani/papers/ghani-calco07
The above paper is about "higher dimensional trees". It uses a bit of category theory (not a whole lot) and a bit of Haskell (again not a lot - just a simple data structure called a Rose tree, which has a wikipedia page) to develop a very expressive and efficient data structure which generalizes from lists to trees to higher dimensions.
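(For anyone allergic to Haskell: a Rose tree is just "a value plus an arbitrary list of subtrees". Here's the shape of it, and of a bottom-up fold over it, as a plain Python sketch - purely illustrative, nothing Bitcoin-specific:)

    from dataclasses import dataclass, field
    from typing import Any, Callable, List

    @dataclass
    class Rose:
        """A Rose tree: one value, any number of ordered children."""
        value: Any
        children: List["Rose"] = field(default_factory=list)

    def fold(tree: Rose, combine: Callable[[Any, list], Any]):
        """Reduce the whole tree bottom-up; `combine` sees a node's value
        and the already-folded results of its children."""
        return combine(tree.value, [fold(c, combine) for c in tree.children])

    # A list is just a degenerate Rose tree (one child per node); real trees branch.
    t = Rose(1, [Rose(2), Rose(3, [Rose(4), Rose(5)])])
    total = fold(t, lambda v, kids: v + sum(kids))
    depth = fold(t, lambda _v, kids: 1 + max(kids, default=0))
    print(total, depth)   # 15 3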
I have no idea if this kind of data structure could be applicable to the current scaling mess we apparently are getting bogged down in - I don't have the game-theory skills to figure it out.
I just thought that since the blockchain is like a list, and since there are some tree-like structures which have been grafted on for efficiency (eg Merkle trees) and since many of the futuristic scaling proposals seem to also involve generalizing from list-like structures (eg, the blockchain) to tree-like structures (eg, side-chains and tree-chains)... well, who knows, there might be some nugget of algorithmic or architectural or data-structure inspiration there.
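(Since Merkle trees came up: the way a block's transactions get folded into a single 32-byte root is itself a neat example of grafting a tree onto a list. A simplified sketch - real Bitcoin hashes the serialized transactions and has byte-order conventions I'm glossing over here:)

    import hashlib

    def dhash(data: bytes) -> bytes:
        """Bitcoin-style double SHA-256."""
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def merkle_root(leaves):
        """Fold a list of leaves up to a single root, hashing neighbours in pairs
        and duplicating the last element of any odd-length level."""
        level = [dhash(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2 == 1:
                level.append(level[-1])       # the odd-node duplication rule
            level = [dhash(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    txs = [b"tx-a", b"tx-b", b"tx-c"]         # stand-ins for serialized transactions
    print(merkle_root(txs).hex())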
So... TL;DR:
(1) I'm freaked out that this blocksize debate has splintered the community so badly and dragged on so long, with no resolution in sight, and both sides seeming so right (because the other side seems so wrong).
(2) I think Bitcoin could gain immensely by using high-level formal, algebraic and co-algebraic program specification and verification languages (such as Maude including Maude-NPA, Mobile Maude parameterized skeletons, etc.) to specify (and possibly also, to some degree, verify) what Bitcoin does - before translating to low-level implementation languages such as C and Python and Java saying how Bitcoin does it. This would help to communicate and reason about programs with much more mathematical certitude - and possibly obviate the need for many political and economic tradeoffs which currently seem dismally inevitable - and possibly widen the collaboration on this project.
(3) I wonder if there are some Big Data approaches out there (eg, along the lines of Google's MapReduce and BigTable, or Yale's CalvinDB), which could be implemented to allow Bitcoin to scale massively and painlessly - and to satisfy all stakeholders, ranging from millionaires to micropayments, coffee drinkers to the great "unbanked".
submitted by BeYourOwnBank to Bitcoin [link] [comments]

Bitcoin Mempool Summary. The mempool is a "waiting area" for Bitcoin transactions that each full node maintains for itself. After a transaction is verified by a node, it waits inside the mempool until it's picked up by a Bitcoin miner and inserted into a block. That's the Bitcoin mempool in a nutshell.

The mempool is where all the valid transactions wait to be confirmed by the Bitcoin network. A high mempool size indicates heavier network traffic, which results in a longer average confirmation time and higher priority fees. The mempool size is a good metric to estimate how long the congestion will last, whereas the mempool transaction count tracks only the number of waiting transactions, not their total size.

Each Bitcoin node builds its own version of the mempool by connecting to the Bitcoin network. The mempool content shown on Blockchain.com is aggregated from several up-to-date Bitcoin node instances maintained by the Blockchain.com team, so that as much information as possible is collected to provide accurate mempool metrics.

The unconfirmed-transactions page displays the number and size of the unconfirmed bitcoin transactions, also known as the transactions in the mempool. It gives a real-time view and shows how the mempool evolves over time. The transactions are colored by the amount of fee they pay per byte; the data is generated from the operator's full node and is updated every minute.
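If you run your own node you don't have to trust any chart site for these numbers: Bitcoin Core exposes its mempool over JSON-RPC. A minimal sketch - the credentials and the default mainnet port 8332 are placeholders; set them to whatever is in your bitcoin.conf:

    import requests

    RPC_URL = "http://127.0.0.1:8332"          # default mainnet RPC port
    AUTH = ("yourrpcuser", "yourrpcpassword")  # placeholders -- use your own

    def rpc(method, params=None):
        payload = {"jsonrpc": "1.0", "id": "mempool-check", "method": method, "params": params or []}
        return requests.post(RPC_URL, json=payload, auth=AUTH).json()["result"]

    info = rpc("getmempoolinfo")
    print(f"{info['size']} unconfirmed txs, {info['bytes'] / 1e6:.1f} MB of transaction data")

The same pattern works for getrawmempool if you want the individual transactions rather than the totals.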
