Bitcoin Core Developer Lays Out Scalability Roadmap Based ...

Full overview of Eth 2.0 & 1.x roadmaps from Messari

Full section on Messari's Ethereum trends for 2020 here

ETH 2.0 Research/Governance/Roadmap at a glance

If history is any guide, we’re not going to see ETH 2.0 until 2022 at the earliest, even if the earliest phases of “Serenity” start shipping in mid-2020. ETH 2.0’s rollout breaks down into seven (7!!!) phases and brings with it the promise of staking, sharding, a new virtual machine, and more dancing badgers.
(One of our analysts, Wilson Withiam, put together an excellent overview of both the ETH 2.0 and ETH 1.x roadmaps for this report. They are critical to track and understand at a high-level given how much Ethereum’s performance will affect other competitive projects and most of the DeFi and Web 3 infrastructure. So these next two sections are longer and more technical.)
Here’s what you need to know about the current game plan for crypto’s largest platform.
Phase 0 marks the launch of the “beacon chain”, which will serve as the backbone for a new blockchain. The beacon chain will manage network validators (large early stakers like ConsenSys) and ultimately assign validators to individual shards (slicing the new blockchain into smaller chunks is a key, difficult, controversial scaling decision that’s been made). The new chain will support Ethereum’s new proof-of-stake consensus mechanism, and offer inflation rewards with new ETH2 for those who pony up and lock 32 ETH1 tokens into an irreversible contract. That one-way bridge into the new system is also contentious, but it means ETH1 supply will start getting “effectively burned” once token holders begin claiming beacon chain validator slots. Initial reports claimed Jan. 3 as a realistic launch date (lol). It will be amazing to see this launched by end of June.
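To make those one-way mechanics concrete, here’s a toy Python sketch of the deposit flow described above. All names and the supply figure are illustrative assumptions, not the real deposit-contract interface; only the 32 ETH deposit size comes from the text.

```python
DEPOSIT_SIZE = 32  # ETH1 locked per beacon-chain validator slot

class ToyBeaconBridge:
    """Toy model of the one-way ETH1 -> ETH2 bridge (hypothetical names)."""

    def __init__(self, eth1_circulating: float):
        self.eth1_circulating = eth1_circulating  # ETH1 still spendable on the old chain
        self.validators = []                      # claimed validator slots

    def claim_validator_slot(self, staker: str, amount: float) -> None:
        if amount != DEPOSIT_SIZE:
            raise ValueError("deposits must be exactly 32 ETH")
        # One-way: the locked ETH1 can never return to the old chain,
        # which is why the text calls it "effectively burned".
        self.eth1_circulating -= amount
        self.validators.append(staker)

bridge = ToyBeaconBridge(eth1_circulating=109_000_000)  # rough 2019 supply, illustrative
bridge.claim_validator_slot("staker-1", 32)
print(bridge.eth1_circulating, len(bridge.validators))  # 108999968 1
```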
Phase 1 will introduce 64 individual shard chains (reduced from 1,024!!!) to the network, with the option to increase the total down the road as the design gets tested. The Ethereum elite see sharding as the “key to future scalability” as shards can parallelize transaction processing, something that could improve network performance and reduce individual validator’s costs (good for decentralization). It comes with big risk: this is still theoretical. No network the size of Ethereum has successfully sharded its blockchain. In Phase 1, shard chains will only contain simple data sets (no smart contracts or transaction executions) to test the system’s structure. As with Phase 0, the beacon chain will continue to run in parallel with ETH 1.x throughout the phase. Don’t expect Phase 1 anytime before 2021.
Phase 2 marks the full launch of the ETH2 chain, allowing for on-chain contract execution and introducing the new eWASM virtual machine (dubbed EVM 2.0). At this point, existing dApps can start migrating their contracts from ETH 1.x to a specific shard (one shard per contract) in the new network. Storage rent, charging contract owners for storing data on the network (more on this below), is in the cards as well, which would require mass contract rewrites. Even though Phase 2 intends to replace the original Ethereum blockchain entirely, ETH 1.x may still live on as a shard within ETH2. (How confused are you by now? See why bitcoin will still dominate the macro narrative for a while?) A late 2021 release for Phase 2 is optimistic. Before the end of 2022 would be a win.
The final four phases are less defined, and without an attached timeline:
Phase 3 implements state-minimized clients (because stateless clients are just too much). Phase 4 allows for cross-shard transactions. Phase 5 improves network security and the availability of data proofs. Phase 6 introduces meta-shards, as in “shards within shards within shards,” for near-infinite scaling. If you’re scratching your head and are sadistic enough to read more, the Sharding Wiki page does note, “this may be difficult.”
Scaling and compilation efficiencies aside, the most notable change in Ethereum’s metamorphosis is the transition from proof-of-work to proof-of-stake. PoW is the more battle tested security model for blockchain networks, while PoS may prove to be more efficient but with new and less obvious attack vectors. For the more technical, we recommend reading Bison Trails’ Viktor Bunin on the subject of PoS security threats.
Past research has also shown PoS requires an extra layer of “trust” vs. PoW, to help nodes sync to the network. Most models share specific characteristics to address this trust issue, such as allowing for a dynamic set of validators (rotate your security), promoting token holder participation in consensus, and assessing steep penalties (slashing) for any network participant that violates the protocol guidelines. ETH 2.0 will function similarly, but may be able to learn from other PoS networks (and their R&D) as those networks go live and hit real-world issues. As Vitalik points out, recent research in PoS has resulted in “great theoretical progress,” but...
Listen, we're talking about practice. Not a game. Not a game. Not a game. We're talking about practice. Not a game….Practice? We're talking about practice, man? We're talking about practice. We're talking about practice. We ain't talking about the game. We're talking about practice, man.
Vitalik was eight when this happened, so the clip might help, and it doubles as a metaphor.

ETH 1.x Research/Governance/Roadmap at a glance

Ok, one more. Bear with us. Let’s reiterate, ETH 2.0 is a brand new blockchain. It’s going to be a chaotic and high-risk transition. In the meantime, the existing network needs to run existing applications (particularly financial settlements for DeFi transactions). More critical upgrades are needed in the current system.
To that end, ETH 1.x devs have three goals to boost performance and reduce blockchain bloat: (1) introduce client optimizations that increase transaction capacity; (2) cap disk space requirements and prune old, memory-sucking data (so running a node is less expensive and more decentralized); and (3) upgrade the EVM to eWASM, a newer open standard for code compilers that simplifies debugging, and is also used by all the newer smart contract platforms. ETH 1.x developers have decided to split the major tasks amongst four working groups:

Core developers intend to introduce most of these implementations through a series of hard forks, the latest of which activated just over a week ago (Istanbul, Dec. 7). However, Istanbul’s second phase, tentatively scheduled for Q2 next year, has Ethereans at each other’s throats. The controversy boils down to the fork’s inclusion of ProgPoW, an ASIC-resistant hashing algorithm designed to replace Ethereum’s current algo. ProgPoW aims to even the playing field for GPU miners and ward off the entrance of potential ASIC competitors. GPU miners like that, but ASIC miners and many investors see ProgPoW as a threat to their investments. For ASIC miners, the change would shift the power dynamic away from mining farms and render expensive, specialized mining hardware useless. Ethereum (and ERC-20) investors intent on securing their assets might balk because ASIC miners typically prop up hash rates (overall chain security) and their costs “naturally create a price-floor for ask prices of miners’ sell-orders.”
This saga is far from over. The infighting will likely continue leading up to ProgPoW’s activation date mid-next year, and presents the strongest potential for a network split since “The DAO” fork that spawned Ethereum Classic. The looming transition to ETH 2.0 (and proof-of-stake) will likely deter investor pushback, because it’s a short-term battle in a war the miners are ultimately going to lose, anyway.
Unless the roadmap changes back to supporting a hybrid PoW/PoS system, of course, but... Oh my god, I’m just kidding. This section is mercifully over.
submitted by CryptigoVespucci to ethereum [link] [comments]

Vitalik's response to Tuur

I interlaced everything between Vitalik and Tuur to make it easier to read.
1/ People often ask me why I’m so “against” Ethereum. Why do I go out of my way to point out flaws or make analogies that put it in a bad light?
Intro
2/ First, ETH’s architecture & culture is opposite that of Bitcoin, and yet claims to offer same solutions: decentralization, immutability, SoV, asset issuance, smart contracts, …
Second, ETH is considered a crypto ‘blue chip’, thus colors perception of uninformed newcomers.
Agree! I personally find Ethereum culture far saner, though I am a bit biased :)
3/ I've followed Ethereum since 2014 & feel a responsibility to share my concerns. IMO contrary to its marketing, ETH is at best a science experiment. It’s now valued at $13B, which I think is still too high.
Not an argument
4/ I agree with Ethereum developer Vlad Zamfir that it’s not money, not safe, and not scalable. https://twitter.com/VladZamfir/status/838006311598030848
@VladZamfir Eth isn't money, so there is no monetary policy. There is currently fixed block issuance with an exponential difficulty increase (the bomb).
I'm pretty sure Vlad would say the exact same thing about Bitcoin
5/ To me the first red flag came up when in our weekly hangout we asked the ETH founders about how they were going to scale the network. (We’re now 4.5 years later, and sharding is still a pipe dream.)
Ethereum's Joe Lubin in June 2014: "anticipate blockchain bloat—working on various sharding ideas". https://www.youtube.com/watch?v=oJG9g0lCPU8&feature=youtu.be&t=36m41s
The core principles have been known for years, the core design for nearly a year, and details for months, with implementations on the way. So sharding is definitely not at the pipe dream stage at this point.
6/ Despite strong optimism that on-chain scaling of Ethereum was around the corner (just another engineering job), this promise hasn’t been delivered on to date.
Sure, sharding is not yet finished. Though more incremental stuff has been going well, eg. uncle rates are at near record lows despite very high chain usage.
7/ Recently, a team of reputable developers decided to peer review a widely anticipated Casper / sharding white paper, concluding that it does not live up to its own claims.
Unmerciful peer review of Vlad Zamfir & co's white paper to scale Ethereum: "the authors do NOT prove that the CBC Casper family of protocols is Byzantine fault tolerant in either practice or theory".
That review was off the mark in many ways, eg. see https://twitter.com/technocrypto/status/1071111404340604929, and by the way CBC is not even a prerequisite for Serenity
8/ On the 2nd layer front, devs are now trying to scale Ethereum via state channels (ETH’s version of Lightning), but it is unclear whether main-chain issued ERC20 type tokens will be portable to this environment.
Umm... you can definitely use Raiden with arbitrary ERC20s. That's why the interface currently uses WETH (the ERC20-fied version of ether) and not ETH
9/ Compare this to how the Bitcoin Lightning Network project evolved:
elizabeth stark @starkness: For lnd: First public code released: January 2016; Alpha: January 2017; Beta: March 2018…
Ok
10/ Bitcoin’s Lightning Network is now live, and is growing at a rapid clip.
Jameson Lopp @lopp: Lightning Network: January 2018 vs December 2018
Sure, though as far as I understand there's still a low probability of finding routes for nontrivial amounts, and there's capital lockup griefing vectors, and privacy issues.... FWIW I personally never thought lightning is unworkable, it's just a design that inherently runs into ten thousand small issues that will likely take a very long time to get past.
11/ In 2017, more Ethereum scaling buzz was created, this time the panacea was “Plasma”.
@TuurDemeester Buterin & Poon just published a new scaling proposal for Ethereum, "strongly complementary to base-layer PoS and sharding": plasma.io https://twitter.com/VitalikButerin/status/895467347502182401
Yay, Plasma!
12/ However, upon closer examination it was the recycling of some stale ideas, and the project went nowhere:
Peter Todd @peterktodd These ideas were all considered in the Treechains design process, and ultimately rejected as insecure.
Just because Peter Todd rejected something as "insecure" doesn't mean that it is. In general, the ethereum research community is quite convinced that the fundamental Plasma design is fine, and as far as I understand there are formal proofs on the way. The only insecurity that can't be avoided is mass exit vulns, and channel-based systems have those too.
13/ The elephant in the room is the transition to proof-of-stake, an “environmentally friendly” way to secure the chain. (If this was the plan all along, why create a proof-of-work chain first?)
@TuurDemeester "Changing from proof of work to proof of stake changes the economics of the system, all the rules change and it will impact everything."
Umm... we created a proof of work chain first because we did not have a satisfactory proof of stake algo initially?
14/ For the uninitiated, here’s a good write-up that highlights some of the fundamental design problems of proof-of-stake. Like I said, this is science experiment territory.
And here's a set of long arguments from me on why proof of stake is just fine: https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ. For a more philosophical piece, see https://medium.com/@VitalikButerin/a-proof-of-stake-design-philosophy-506585978d51
15/ Also check out this thread about how Proof of Stake blockchains require subjectivity (i.e. a trusted third party) to achieve consensus: https://forum.blockstack.org/t/pos-blockchains-require-subjectivity-to-reach-consensus/762?u=muneeb … and this thread on Bitcoin: https://www.reddit.com/Bitcoin/comments/59t48m/proofofstake_question/
Yes, we know about weak subjectivity, see https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/. It's really not that bad, especially given that users need to update their clients once in a while anyway, oh and by the way even if the weak subjectivity assumption is broken an attacker still needs to gather up that pile of old keys making up 51% of the stake. And also to defend against that there's Universal Hash Time.
16/ Keep in mind that Proof of Stake (PoS) is not a new concept at all. Proof-of-Work actually was one of the big innovations that made Bitcoin possible, after PoS was deemed impractical because of censorship vulnerability.
@TuurDemeester TIL Proof-of-stake based private currency designs date at least back to 1998. https://medium.com/swlh/the-untold-history-of-bitcoin-enter-the-cypherpunks-f764dee962a1
Oh I definitely agree that proof of work was superior for bootstrap, and I liked it back then especially because it actually managed to be reasonably egalitarian around 2009-2012 before ASICs fully took over. But at the present time it doesn't really have that nice attribute.
17/ Over the years, this has become a pattern in Ethereum’s culture: recycling old ideas while not properly referring to past research and having poor peer review standards. This is not how science progresses. Tuur Demeester added:
@VitalikButerin has been repeatedly accused of / criticised for not crediting prior art. Once again with plasma: https://twitter.com/DamelonBCWS/status/895643582278782976
I try to credit people whenever I can; half my blog and ethresear.ch posts have a "special thanks" section right at the top. Sometimes we end up re-inventing stuff, and sometimes we end up hearing about stuff, forgetting it, and later re-inventing it; that's life as an autodidact. And if you feel you've been unfairly not credited for something, always feel free to comment, people have done this and I've edited.
18/ One of my big concerns is that sophistry and marketing hype is a serious part of Ethereum’s success so far, and that overly inflated expectations have led to an inflated market cap.
Ok, go on.
19/ Let’s illustrate with an example.
...
20/ A few days ago, I shared a critical tweet that made the argument that Ethereum’s value proposition is in essence utopian.
@TuurDemeester Ethereum-ism sounds a bit like Marxism to me:
  • What works today (PoW) is 'just a phase', the ideal & unproven future is to come: Proof-of-Stake.…
...
21/ I was very serious about my criticism. In fact, each one of the three points addressed what Vitalik Buterin has described as “unique value propositions of Ethereum proper”. https://www.reddit.com/ethereum/comments/5jk3he/how_to_prevent_the_cannibalism_of_ethereum_into/dbgujr8/
...
22/ My first point, about Ethereum developers rejecting Proof-of-Work, has been illustrated many times over by Vitalik and others. (See earlier in this tweetstorm for more about how PoS is unproven.)
Vitalik Non-giver of Ether @VitalikButerin: I don't believe in proof of work!
See above for links as to why I think proof of stake is great.
23/ My second point addresses Ethereum’s romance with the vague and dangerous notion of ‘social consensus’, where disruptive hard-forks are used to ‘upgrade’ or ‘optimize’ the system, which inevitably leads to increased centralization. More here:
See my rebuttal to Tuur's rebuttal :)
24/ My third point addresses PoS’ promise of perpetual income to ETHizens. Vitalik is no stranger to embracing free lunch ideas, e.g. during his 2014 ETH announcement speech, where he described a coin with a 20% inflation tax as having “no cost” to users.
Yeah, I haven't really emphasized perpetual income to stakers as a selling point in years. I actually favor rewards being as low as possible while still being high enough for security.
25/ In his response to my tweet, Vitalik adopted my format to “play the same game” in criticizing Bitcoin. My criticisms weren't addressed, and his response was riddled with errors. Yet his followers gave it +1,000 upvotes!
Vitalik Non-giver of Ether @VitalikButerin: - What works today (L1) is just a phase, ideal and unproven future (usable L2) is to come - Utopian concept of progress: we're already so confident we're finished we ain't needin no hard forks…
Ok, let's hear about what the errors are...
26/ Rebuttal: - BTC layer 1 is not “just a phase”, it will always be the definitive bedrock for transaction settlement. - Soft forking digital protocols has been the norm for over 3 decades—hard-forks are the deviation! - Satoshi never suggested hyperbitcoinization as a goal.
Sure, but (i) the use of layer 1 for consumer payments is definitely, in bitcoin ideology, "just a phase", (ii) I don't think you can make analogies between consensus protocols and other kinds of protocols, and between soft forking consensus protocols and protocol changes in other protocols, that easily, (iii) plenty of people do believe in hyperbitcoinization as a goal. Oh by the way: https://twitter.com/tuurdemeester/status/545993119599460353
27/ This kind of sophistry is exhausting and completely counter-productive, but it can be very convincing for an uninformed retail public.
Ok, go on.
28/ Let me share a few more inconvenient truths.
...
29/ In order to “guarantee” the transition to PoS’ utopia of perpetual income (staking coins earns interest), a “difficulty bomb” was embedded in the protocol, which supposedly would force miners to accept the transition.
The intended goal of the difficulty bomb was to prevent the protocol from ossifying, by ensuring that it has to hard fork eventually to reset the difficulty bomb, at which point the status quo bias in favor of not changing other protocol rules at the same time would be weaker. Though forcing a switch to PoS was definitely a key goal.
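For the curious, the bomb’s exponential shape is simple enough to sketch. This is a simplified version of the Homestead-era term (later forks repeatedly delayed it with offsets), shown only to illustrate why block times eventually explode:

```python
def bomb_term(block_number: int) -> int:
    # Exponential difficulty component: doubles every 100,000 blocks.
    # Simplified sketch of the Homestead-era rule; later hard forks
    # added "delay" offsets that pushed the bomb further out.
    return 2 ** (block_number // 100_000 - 2)

for height in (3_000_000, 4_000_000, 5_000_000):
    print(height, bomb_term(height))  # grows until PoW blocks crawl (the "Ice Age")
```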
30/ Of course, nothing came of this, because anything in the ETH protocol can be hard-forked away. Another broken promise.
Tuur Demeester @TuurDemeester: Looks like another Ethereum hard-fork is going to remove the "Ice Age" (difficulty increase meant to incentivize transition to PoS). https://www.cryptocompare.com/coins/guides/what-is-the-ethereum-ice-age/
How is that a broken promise? There was no social contract to only replace the difficulty-bombed protocol with a PoS chain.
31/ Another idea that was marketed heavily early on, was that with ETH you could program smart contracts as easily as javascript applications.
Tuur Demeester @TuurDemeester: I forgot, but in 2014 Ethereum was quite literally described as "Javascript-on-the-blockchain"
Agree that was over-optimistic, though the part of the metaphor that's problematic is the "be done with complex apps in a couple hours" part, NOT the "general-purpose languages are great" part.
32/ This was criticized by P2P & OS developers as a reckless notion, given that every smart contract is actually a “de novo cryptographic protocol”. In other words, it’s playing with fire. https://bitcointalk.org/index.php?topic=1427885.msg14601127#msg14601127
See above
33/ The modular approach to Bitcoin seems to be much better at compartmentalizing risk, and thus reducing attack surfaces. I’ve written about modular scaling here...
To be fair, risk is reduced because Bitcoin does less.
34/ Another huge issue that Ethereum has is with scaling. By putting “everything on the blockchain” (which stores everything forever) and dubbing it “the world computer”, you are going to end up with a very slow and clogged up system.
Christopher Allen @ChristopherA: AWS cost: $0.000000066 for calc, Ethereum: $26.55. This is about 400 million times as expensive. World computer? https://hackernoon.com/ether-purchase-power-df40a38c5a2f
We never advocated "putting everything on the blockchain". The phrase "world computer" was never meant to be interpreted as "everyone's personal desktop", but rather as a common platform specifically for the parts of applications that require consensus on shared state. As evidence of this, notice how Whisper and Swarm were part of the vision as complements to Ethereum right from the start.
35/ By now the Ethereum bloat is so bad that cheaply running an individual node is practically impossible for a lay person. ETH developers are also imploring people to not deploy more smart contract apps on its blockchain.
Tuur Demeester @TuurDemeester: But... deploying d-apps on the "Ethereum Virtual Machine" is exactly what everyone was encouraged to do for the past 4 years. Looks like on-chain scaling wasn't such a great idea after all.
Umm.... I just spun up a node from scratch last week. On a consumer laptop.
36/ As a result, and despite the claims that running a node in “warp” mode is easy and as good as a full node, Ethereum is becoming increasingly centralized.
@TuurDemeester Finally a media article touching on the elephant in the room: Ethereum has become highly centralized. #infura https://www.coindesk.com/the-race-is-on-to-replace-ethereums-most-centralized-layer
See above
37/ Another hollow claim: in 2016, Ethereum was promoted as being censorship resistant…
Tuur Demeester @TuurDemeester: Pre TheDAO #Ethereum presentation: "uncensorable, code is law, bottom up". http://ow.ly/qW49302Pp92
Yes, the DAO fork did violate the notion of absolute immutability. However, the "forking the DAO will lead to doom and gloom" crowd was very wrong in one key way: it did NOT work as a precedent justifying all sorts of further state interventions. The community clearly drew a line in the sand by firmly rejecting EIP 867, and EIP 999 seems to now also be going nowhere. So it seems like there's some evidence that the social contract of "moderately but not infinitely strong immutability" actually can be stable.
38/ Yet later that year, after only 6% of ETH holders had cast a vote, ETH core devs decided to endorse a hard-fork that clawed back the funds from a smart contract that held 4.5% of all ETH in circulation. More here: ...
See above
39/ Other potential signs of centralization: Vitalik Buterin signing a deal with a Russian government institution, and ETH core developers experimenting with semi-closed meetings: https://twitter.com/coindesk/status/902892844955860993
Hudson Jameson @hudsonjameson: The "semi-closed" Ethereum 1.x meeting from last Friday was an experiment. The All Core Dev meeting this Friday will be recorded as usual.
Suppose I were to tomorrow sign up to work directly for Kim Jong Un. What concretely would happen to the Ethereum protocol? I suspect very little; I am mostly involved in the Serenity work, and the other researchers have proven very capable of both pushing the spec forward even without me and catching any mistakes with my work. So I don't think any argument involving me applies. And we ended up deciding not to do more semi-closed meetings.
40/ Another red flag to me is the apparent lack of relevant expertise in the ETH development community. (Check the responses…)
Tuur Demeester @TuurDemeester: Often heard: "but Ethereum also has world class engineers working on the protocol". Please name names and relevant pedigree so I can follow and learn. https://twitter.com/TuurDemeester/status/963029019447955461
I personally am confident in the talents of our core researchers, and our community of academic partners. Most recently the latter group includes people from Starkware, Stanford CBR, IC3, and other groups.
41/ For a while, Microsoft veteran Lucius Meredith was mentioned as playing an important role in ETH scaling, but now he is likely distracted by the failure of his ETH scaling company RChain. https://blog.ethereum.org/2015/12/24/understanding-serenity-part-i-abstraction/
I have no idea who described Lucius Meredith's work as being important for the Serenity roadmap.... oh and by the way, RChain is NOT an "Ethereum scaling company"
42/ Perhaps the recently added Gandalf of Ethereum, with his “Fellowship of Ethereum Magicians” [sic] can save the day, but imo that seems unlikely...
Honestly, I don't see why Ethereum Gandalf needs to save the day, because I don't see what is in danger and needs to be saved...
43/ This is becoming a long tweetstorm, so let’s wrap up with a few closing comments.
Yay!
44/ Do I have a conflict of interest? ETH is a publicly available asset with no real barriers to entry, so I could easily get a stake. Also, having met Vitalik & other ETH founders several times in 2013-’14, it would have been doable for me to become part of the in-crowd.
Agree there. And BTW I generally think financial conflicts of interest are somewhat overrated; social conflicts/tribal biases are the bigger problem much of the time. Though those two kinds of misalignments do frequently overlap and reinforce each other so they're difficult to fully disentangle.
45/ Actually, I was initially excited about Ethereum’s smart contract work - this was before one of its many pivots.
Tuur Demeester @TuurDemeester: Ethereum is probably the first programming language I will teach myself - who wouldn't want the ability to program smart BTC contracts?
Ethereum was never about "smart BTC contracts"..... even "Ethereum as a Mastercoin-style meta-protocol" was intended to be built on top of Primecoin.
46/ Also, I have done my share of soul searching about whether I could be suffering from survivor’s bias.
@TuurDemeester I just published “I’m not worried about Bitcoin Unlimited, but I am losing sleep over Ethereum” https://medium.com/p/im-not-worried-about-bitcoin-unlimited-but-i-am-losing-sleep-over-ethereum-b5251c54e66d
Ok, good.
47/ Here’s why Ethereum is dubious to me: rather than creating an open source project & testnet to work on these interesting computer science problems, its founders instead did a securities offering, involving many thousands of clueless retail investors.
What do you mean "instead of"? We did create an open source project and testnet! Whether or not ETH is a security is a legal question; seems like SEC people agree it's not: https://www.cnbc.com/2018/06/14/bitcoin-and-ethereum-are-not-securities-but-some-cryptocurrencies-may-be-sec-official-says.html
48/ Investing in the Ethereum ICO was akin to buying shares in a startup that had “invent time travel” as part of its business plan. Imo it was a reckless security offering, and it set the tone for the terrible capital misallocation of the 2017 ICO boom.
Nothing in the ethereum roadmap requires time-travel-like technical advancements or anything remotely close to that. Proof: we basically have all the fundamental technical advancements we need at this point.
49/ In my view, Ethereum is the Yahoo of our day - an unscalable “blue chip” cryptocurrency:
Tuur Demeester @TuurDemeester: 1/ The DotCom bubble shows that the market isn't very good at valuing early stage technology. I'll use Google vs. Yahoo to illustrate.
Got it.
50/ I’ll close with a few words from Gregory Maxwell, from 2016: https://bitcointalk.org/index.php?topic=1427885.msg14601127#msg14601127
See my rebuttal to Greg from 2 years ago: https://www.reddit.com/ethereum/comments/4g1bh6/greg_maxwells_critique_of_ethereum_blockchains/
submitted by shouldbdan to ethtrader [link] [comments]

Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, Folding@home, SETI@home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.

TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level.
The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited).
No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either.
Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successful tried-and-true "recipes" for solving "embarrassingly parallel" search problems in massive search spaces, based on standard DISTRIBUTED COMPUTING approaches like Google Search (based on Google's MapReduce algorithm), or [email protected], [email protected], or PrimeGrid (based on Berkeley's BOINC grid computing architecture). The fact that Blockstream / Core devs refuse to consider any standard DISTRIBUTED COMPUTING approaches just proves that they're "embarrassingly stupid" - and the only way Bitcoin will succeed is by routing around their damage.
Proven, mature sharding architectures like the ones powering Google Search, Folding@home, SETI@home, or PrimeGrid will allow Bitcoin to achieve virtually unlimited on-chain scaling, with minimal disruption to the existing Bitcoin network topology and mining and wallet software.
Longer Summary:
People who argue that "Bitcoin can't scale" - because it involves major physical / hardware requirements (lots of processing power, upload bandwidth, storage space) - are at best simply misinformed or incompetent - or at worst outright lying to you.
Bitcoin mainly involves searching the blockchain to prevent double-spends - and so it is similar to many other projects involving "embarrassingly parallel" searching in massive search spaces - like Google Search, Folding@home, SETI@home, or PrimeGrid.
But there's a big difference between those long-running wildly successful massively distributed infinitely scalable parallel computing projects, and Bitcoin.
Those other projects do their data storage and processing across a distributed network. But Bitcoin (under the misguided "leadership" of Core / Blockstream devs) insists on a fatally flawed design philosophy where every individual node must be able to download, store and verify the system's entire data structure. And it's even worse than that - they want to let the least powerful nodes in the system dictate the resource requirements for everyone else.
Meanwhile, those other projects are all based on some kind of "distributed computing" involving "sharding". They achieve massive scaling by adding a virtually unlimited (and fault-tolerant) logical / software layer on top of the underlying resource-constrained / limited physical / hardware layer - using approaches like Google's MapReduce algorithm or Berkeley's Open Infrastructure for Network Computing (BOINC) grid computing architecture.
This shows that it is a fundamental error to continue insisting on viewing an individual Bitcoin "node" as the fundamental "unit" of the Bitcoin network. Coordinated distributed pools already exist for mining the blockchain - and eventually coordinated distributed trustless architectures will also exist for verifying and querying it. Any architecture or design philosophy where a single "node" is expected to be forever responsible for storing or verifying the entire blockchain is the wrong approach, and is doomed to failure.
The most well-known example of this doomed approach is Blockstream / Core's "roadmap" - which is based on two disastrously erroneous design requirements:
  • Core / Blockstream erroneously insist that the entire blockchain must always be downloadable, storable and verifiable on a single node, as dictated by the least powerful nodes in the system (eg, u/bitusher in Costa Rica, or u/Luke-Jr in the underserved backwoods of Florida); and
  • Core / Blockstream support convoluted, incomplete off-chain scaling approaches such as the so-called "Lightning Network" - which lacks a mathematical foundation, and also has some serious gaps (eg, no solution for decentralized routing).
Instead, the future of Bitcoin will inevitably be based on unlimited on-chain scaling, where all of Bitcoin's existing algorithms and data structures and networking are essentially preserved unchanged / as-is - but they are distributed at the logical / software level using sharding approaches such as u/thezerg1's BUIP024 or distributed trustless Merkle trees.
These kinds of sharding architectures will allow individual nodes to use a minimum of physical resources to access a maximum of logical storage and processing resources across a distributed network with virtually unlimited on-chain scaling - where every node will be able to use and verify the entire blockchain without having to download and store the whole thing - just like Google Search, Folding@home, SETI@home, or PrimeGrid and other successful distributed sharding-based projects have already been successfully doing for years.
Details:
Sharding, which has been so successful in many other areas, is a topic that keeps resurfacing in various shapes and forms among independent Bitcoin developers.
The highly successful track record of sharding architectures on other projects involving "embarrassingly parallel" massive search problems (harnessing resource-constrained machines at the physical level into a distributed network at the logical level, in order to provide fault tolerance and virtually unlimited scaling searching for web pages, interstellar radio signals, protein sequences, or prime numbers in massive search spaces up to hundreds of terabytes in size) provides convincing evidence that sharding architectures will also work for Bitcoin (which also requires virtually unlimited on-chain scaling, searching the ever-expanding blockchain for previous "spends" from an existing address, before appending a new transaction from this address to the blockchain).
Below are some links involving proposals for sharding Bitcoin, plus more discussion and related examples.
BUIP024: Extension Blocks with Address Sharding
https://np.reddit.com/btc/comments/54afm7/buip024_extension_blocks_with_address_sharding/
Why aren't we as a community talking about Sharding as a scaling solution?
https://np.reddit.com/Bitcoin/comments/3u1m36/why_arent_we_as_a_community_talking_about/
(There are some detailed, partially encouraging comments from u/petertodd in that thread.)
[Brainstorming] Could Bitcoin ever scale like BitTorrent, using something like "mempool sharding"?
https://np.reddit.com/btc/comments/3v070a/brainstorming_could_bitcoin_ever_scale_like/
[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?
https://np.reddit.com/btc/comments/3wtwa7/brainstorming_lets_fork_smarter_not_harder_can_we/
"Braiding the Blockchain" (32 min + Q&A): We can't remove all sources of latency. We can redesign the "chain" to tolerate multiple simultaneous writers. Let miners mine and validate at the same time. Ideal block time / size / difficulty can become emergent per-node properties of the network topology
https://np.reddit.com/btc/comments/4su1gf/braiding_the_blockchain_32_min_qa_we_cant_remove/
Some kind of sharding - perhaps based on address sharding as in BUIP024, or based on distributed trustless Merkle trees as proposed earlier by u/thezerg1 - is very likely to turn out to be the simplest, and safest approach towards massive on-chain scaling.
A thought experiment showing that we already have most of the ingredients for a kind of simplistic "instant sharding"
A simplistic thought experiment can be used to illustrate how easy it could be to do sharding - with almost no changes to the existing Bitcoin system.
Recall that Bitcoin addresses and keys are composed from an alphabet of 58 characters. So, in this simplified thought experiment, we will outline a way to add a kind of "instant sharding" within the existing system - by using the last character of each address in order to assign that address to one of 58 shards.
(Maybe you can already see where this is going...)
Similar to vanity address generation, a user who wants to receive Bitcoins would be required to generate 58 different receiving addresses (each ending with a different character) - and, similarly, miners could be required to pick one of the 58 shards to mine on.
Then, when a user wanted to send money, they would have to look at the last character of their "send from" address - and also select a "send to" address ending in the same character - and presto! we already have a kind of simplistic "instant sharding". (And note that this part of the thought experiment would require only the "softest" kind of soft fork: indeed, we haven't changed any of the code at all, but instead we simply adopted a new convention by agreement, while using the existing code.)
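Here's a minimal Python sketch of that convention, assuming the standard 58-character Base58 alphabet. It's an illustration of the thought experiment, not a real consensus rule; the example addresses are made up.

```python
# Base58 alphabet used by Bitcoin addresses (no 0, O, I, or l).
BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def shard_of(address: str) -> int:
    """Assign an address to one of 58 shards by its final character."""
    return BASE58.index(address[-1])

def same_shard(send_addr: str, recv_addr: str) -> bool:
    """The convention: a transaction must stay inside a single shard."""
    return shard_of(send_addr) == shard_of(recv_addr)

# Both made-up addresses end in 'N', so a payment between them stays
# entirely within shard BASE58.index('N') == 21.
assert same_shard("1SenderAddrN", "1ReceiverAddrN")
```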
Of course, this simplistic "instant sharding" example would still need a few more features in order to be complete - but they'd all be fairly straightforward to provide:
  • A transaction can actually send from multiple addresses, to multiple addresses - so the approach of simply looking at the final character of a single (receive) address would not be enough to instantly assign a transaction to a particular shard. But a slightly more sophisticated decision criterion could easily be developed - and computed using code - to assign every transaction to a particular shard, based on the "from" and "to" addresses in the transaction. The basic concept from the "simplistic" example would remain the same, sharding the network based on some characteristic of transactions.
  • If we had 58 shards, then the mining reward would have to be decreased to 1/58 of what it currently is - and the mining hash power on each of the shards would also end up being roughly 1/58 of what it is now (a quick back-of-the-envelope sketch follows this list). In general, many people might agree that decreased mining rewards would actually be a good thing (spreading out mining rewards among more people, instead of the current situation where mining is done by about 8 entities). Also, network hashing power has been growing insanely for years, so we probably have way more than enough to secure the network - after all, Bitcoin was secure back when network hash power was 1/58 of what it is now.
  • This simplistic example does not handle cases where you need to do "cross-shard" transactions. But it should be feasible to implement such a thing. The various proposals from u/thezerg1 such as BUIP024 do deal with "cross-shard" transactions.
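A quick back-of-the-envelope for the reward arithmetic in the second bullet above, using the 12.5 BTC subsidy in effect when this was written:

```python
BLOCK_REWARD_BTC = 12.5   # per-block subsidy at the time of writing
N_SHARDS = 58

per_shard_reward = BLOCK_REWARD_BTC / N_SHARDS
print(round(per_shard_reward, 4))  # ~0.2155 BTC per shard block
# Hash power per shard scales the same way: roughly 1/58 of the total.
```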
(Also, the fact that simplified address-based sharding mechanics can be outlined in just a few paragraphs as shown here suggests that this might be "simple and understandable enough to actually work" - unlike something such as the so-called "Lightning Network", which is actually just a catchy-sounding name with no clearly defined mechanics or mathematics behind it.)
Addresses are plentiful, and can be generated locally, and you can generate addresses satisfying a certain pattern (eg ending in a certain character) the same way people can already generate vanity addresses. So imposing a "convention" where the "send" and "receive" address would have to end in the same character (and where the miner has to only mine transactions in that shard) - would be easy to understand and do.
Similarly, the earlier solution proposed by u/thezerg1, involving distributed trustless Merkle trees, is easy to understand: you'd just be distributing the Merkle tree across multiple nodes, while still preserving its immutability guarantees.
Such approaches don't really change much about the actual system itself. They preserve the existing system, and just split its data structures into multiple pieces, distributed across the network. As long as we have the appropriate operators for decomposing and recomposing the pieces, then everything should work the same - but more efficiently, with unlimited on-chain scaling, and much lower resource requirements.
The examples below show how these kinds of "sharding" approaches have already been implemented successfully in many other systems.
Massive search is already efficiently performed with virtually unlimited scaling using divide-and-conquer / decompose-and-recompose approaches such as MapReduce and BOINC.
Every time you do a Google search, you're using Google's MapReduce algorithm to solve an embarrassingly parallel problem.
And distributed computing grids using the Berkeley Open Infrastructure for Network Computing (BOINC) are constantly setting new records searching for protein combinations, prime numbers, or radio signals from possible intelligent life in the universe.
We all use Google to search hundreds of terabytes of data on the web and get results in a fraction of a second - using cheap "commodity boxes" on the server side, and possibly using limited bandwidth on the client side - with fault tolerance to handle crashing servers and dropped connections.
Other examples are Folding@home, SETI@home and PrimeGrid - involving searching massive search spaces for protein sequences, interstellar radio signals, or prime numbers hundreds of thousands of digits long. Each of these examples uses sharding to decompose a giant search space into smaller sub-spaces which are searched separately in parallel and then the resulting (sub-)solutions are recomposed to provide the overall search results.
It seems obvious to apply this tactic to Bitcoin - searching the blockchain for existing transactions involving a "send" from an address, before appending a new "send" transaction from that address to the blockchain.
Some people might object that those systems are different from Bitcoin.
But we should remember that preventing double-spends (the main thing that the Bitcoin does) is, after all, an embarrassingly parallel massive search problem - and all of these other systems also involve embarrassingly parallel massive search problems.
The mathematics of Google's MapReduce and Berkeley's BOINC is simple, elegant, powerful - and provably correct.
Google's MapReduce and Berkeley's BOINC have demonstrated that in order to provide massive scaling for efficient searching of massive search spaces, all you need is...
  • an appropriate "decompose" operation,
  • an appropriate "recompose" operation,
  • the necessary coordination mechanisms
...in order to distribute a single problem across multiple, cheap, fault-tolerant processors.
This allows you to decompose the problem into tiny sub-problems, solving each sub-problem to provide a sub-solution, and then recompose the sub-solutions into the overall solution - gaining virtually unlimited scaling and massive efficiency.
The only "hard" part involves analyzing the search space in order to select the appropriate DECOMPOSE and RECOMPOSE operations which guarantee that recomposing the "sub-solutions" obtained by decomposing the original problem is equivalent to the solving the original problem. This essential property could be expressed in "pseudo-code" as follows:
  • (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE)
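As a hedged illustration of that property, the toy sketch below treats double-spend detection as "find outpoints spent more than once", decomposes the search space into shards, solves each shard independently, recomposes the sub-solutions, and checks that the result equals solving the whole problem directly. The data and shard count are made up.

```python
from collections import Counter, defaultdict

def solve(spends):
    """Direct solution: outpoints spent more than once (double-spends)."""
    return {s for s, n in Counter(spends).items() if n > 1}

def decompose(spends, n_shards):
    """Partition the search space; every copy of an outpoint lands in one shard."""
    shards = defaultdict(list)
    for s in spends:
        shards[hash(s) % n_shards].append(s)
    return shards.values()

def recompose(sub_solutions):
    """Union of sub-solutions; valid because shards don't overlap."""
    return set().union(*sub_solutions)

spends = ["utxo-a", "utxo-b", "utxo-a", "utxo-c", "utxo-b"]
sharded_answer = recompose(solve(shard) for shard in decompose(spends, 4))
assert sharded_answer == solve(spends)  # (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE)
print(sharded_answer)  # {'utxo-a', 'utxo-b'}
```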
Selecting the appropriate DECOMPOSE and RECOMPOSE operations (and implementing the inter-machine communication coordination) can be somewhat challenging, but it's certainly doable.
In fact, as mentioned already, these things have already been done in many distributed computing systems. So there's hardly any "original work" to be done in this case. All we need to focus on now is translating the existing single-processor architecture of Bitcoin to a distributed architecture, adopting the mature, proven, efficient "recipes" provided by the many examples of successful distributed systems already up and running, such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture).
That's what any "competent" company with $76 million to spend would have done already - simply work with some devs who know how to implement open-source distributed systems, and focus on adapting Bitcoin's particular data structures (merkle trees, hashed chains) to a distributed environment. That's a realistic roadmap that any team of decent programmers with distributed computing experience could easily implement in a few months, and any decent managers could easily manage and roll out on a pre-determined schedule - instead of all these broken promises and missed deadlines and non-existent vaporware and pathetic excuses we've been getting from the incompetent losers and frauds involved with Core / Blockstream.
ASIDE: MapReduce and BOINC are based on math - but the so-called "Lightning Network" is based on wishful thinking involving kludges on top of workarounds on top of hacks - which is how you can tell that LN will never work.
Once you have succeeded in selecting the appropriate mathematical DECOMPOSE and RECOMPOSE operations, you get simple massive scaling - and it's also simple for anyone to verify that these operations are correct - often in about a half-page of math and code.
An example of this kind of elegance and brevity (and provable correctness) involving compositionality can be seen in this YouTube clip by the accomplished mathematician Lucius Greg Meredith presenting some operators for scaling Ethereum - in just a half page of code:
https://youtu.be/uzahKc_ukfM?t=1101
Conversely, if you fail to select the appropriate mathematical DECOMPOSE and RECOMPOSE operations, then you end up with a convoluted mess of wishful thinking - like the "whitepaper" for the so-called "Lightning Network", which is just a cool-sounding name with no actual mathematics behind it.
The LN "whitepaper" is an amateurish, non-mathematical meandering mishmash of 60 pages of "Alice sends Bob" examples involving hacks on top of workarounds on top of kludges - also containing a fatal flaw (a lack of any proposed solution for doing decentralized routing).
The disaster of the so-called "Lightning Network" - involving adding never-ending kludges on top of hacks on top of workarounds (plus all kinds of "timing" dependencies) - is reminiscent of the "epicycles" which were desperately added in a last-ditch attempt to make Ptolemy's "geocentric" system work - based on the incorrect assumption that the Sun revolved around the Earth.
This is how you can tell that the approach of the so-called "Lightning Network" is simply wrong, and it would never work - because it fails to provide appropriate (and simple, and provably correct) mathematical DECOMPOSE and RECOMPOSE operations in less than a single page of math and code.
Meanwhile, sharding approaches based on a DECOMPOSE and RECOMPOSE operation are simple and elegant - and "functional" (ie, they don't involve "procedural" timing dependencies like keeping your node running all the time, or closing out your channel before a certain deadline).
Bitcoin only has 6,000 nodes - but the leading sharding-based projects have over 100,000 nodes, with no financial incentives.
Many of these sharding-based projects have many more nodes than the Bitcoin network.
The Bitcoin network currently has about 6,000 nodes - even though there are financial incentives for running a node (ie, verifying your own Bitcoin balance).
Folding@home and SETI@home each have over 100,000 active users - even though these projects don't provide any financial incentives. This higher number of users might be due in part to the low resource demands required in these BOINC-based projects, which are all based on sharding the data set.
Folding@home
As part of the client-server network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation.
In 2007, Guinness World Records recognized Folding@home as the most powerful distributed computing network. As of September 30, 2014, the project has 107,708 active CPU cores and 63,977 active GPUs for a total of 40.190 x86 petaFLOPS (19.282 native petaFLOPS). At the same time, the combined efforts of all distributed computing projects under BOINC totals 7.924 petaFLOPS.
SETI@home
Using distributed computing, SETI@home sends the millions of chunks of data to be analyzed off-site by home computers, and then has those computers report the results. Thus what appears an onerous problem in data analysis is reduced to a reasonable one by aid from a large, Internet-based community of borrowed computer resources.
Observational data are recorded on 2-terabyte SATA hard disk drives at the Arecibo Observatory in Puerto Rico, each holding about 2.5 days of observations, which are then sent to Berkeley. Arecibo does not have a broadband Internet connection, so data must go by postal mail to Berkeley. Once there, it is divided in both time and frequency domains into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units are then sent from the SETI@home server over the Internet to personal computers around the world to analyze.
Data is merged into a database using SETI@home computers in Berkeley.
The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused.
Active users: 121,780 (January 2015)
PrimeGrid
PrimeGrid is a distributed computing project for searching for prime numbers of world-record size. It makes use of the Berkeley Open Infrastructure for Network Computing (BOINC) platform.
Active users 8,382 (March 2016)
MapReduce
A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
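Here's a toy map/reduce in Python mirroring that students example: the map phase emits (name, 1) pairs, a shuffle groups them into the per-name "queues", and the reduce phase counts each queue. Real MapReduce adds distribution, shuffling across machines, and fault tolerance on top of this skeleton.

```python
from itertools import groupby

def map_phase(students):
    # Emit one (key, value) pair per record, keyed by first name.
    return [(name, 1) for name in students]

def shuffle(pairs):
    # Group pairs by key: the "queues", one per distinct name.
    pairs = sorted(pairs)  # groupby needs equal keys to be adjacent
    return {name: [v for _, v in grp]
            for name, grp in groupby(pairs, key=lambda p: p[0])}

def reduce_phase(queues):
    # Summary operation per queue: count its members.
    return {name: sum(ones) for name, ones in queues.items()}

students = ["Ada", "Bob", "Ada", "Cyn", "Bob", "Ada"]
print(reduce_phase(shuffle(map_phase(students))))  # {'Ada': 3, 'Bob': 2, 'Cyn': 1}
```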
How can we go about developing sharding approaches for Bitcoin?
We have to identify a part of the problem which is in some sense "invariant" or "unchanged" under the operations of DECOMPOSE and RECOMPOSE - and we also have to develop a coordination mechanism which orchestrates the DECOMPOSE and RECOMPOSE operations among the machines.
The simplistic thought experiment above outlined an "instant sharding" approach where we would agree upon a convention where the "send" and "receive" address would have to end in the same character - instantly providing a starting point illustrating some of the mechanics of an actual sharding solution.
BUIP024 involves address sharding and deals with the additional features needed for a complete solution - such as cross-shard transactions.
And distributed trustless Merkle trees would involve storing Merkle trees across a distributed network - which would provide the same guarantees of immutability, while drastically reducing storage requirements.
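To see why distributing a Merkle tree can stay trustless, here's a small sketch: a node that stores only the 32-byte root can verify any leaf using a logarithmic-size proof supplied by whichever remote machine stores that branch. The helper names are hypothetical and the hashing scheme is simplified (no inner-node domain separation), so treat it as an illustration, not Bitcoin's exact tree format.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify(root: bytes, leaf: bytes, proof: list, index: int) -> bool:
    """Check a Merkle membership proof: sibling hashes from leaf to root."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# Build a tiny 4-leaf tree locally, just to demo verification.
leaves = [h(x) for x in (b"tx-a", b"tx-b", b"tx-c", b"tx-d")]
n01, n23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(n01 + n23)

# A remote shard holding leaf 2 (tx-c) serves this proof; we verify it
# against nothing but the root we already hold.
assert verify(root, b"tx-c", proof=[leaves[3], n01], index=2)
```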
So how can we apply ideas like MapReduce and BOINC to providing massive on-chain scaling for Bitcoin?
First we have to examine the structure of the problem that we're trying to solve - and we have to try to identify how the problem involves a massive search space which can be decomposed and recomposed.
In the case of Bitcoin, the problem involves:
  • sequentializing (serializing) APPEND operations to a blockchain data structure
  • in such a way as to avoid double-spends
Can we view "preventing Bitcoin double-spends" as a "massive search space problem"?
Yes we can!
Just like Google efficiently searches hundreds of terabytes of web pages for a particular phrase (and Folding@home, SETI@home, PrimeGrid etc. efficiently search massive search spaces for other patterns), in the case of "preventing Bitcoin double-spends", all we're actually doing is searching a massive search space (the blockchain) in order to detect a previous "spend" of the same coin(s).
So, let's imagine how a possible future sharding-based architecture of Bitcoin might look.
We can observe that, in all cases of successful sharding solutions involving searching massive search spaces, the entire data structure is never stored / searched on a single machine.
Instead, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) create a "virtual" layer or grid across multiple machines - allowing the data structure to be distributed across all of them, and allowing users to search across all of them.
This suggests that requiring everyone to store 80 Gigabytes (and growing) of blockchain on their own individual machine should no longer be a long-term design goal for Bitcoin.
Instead, in a sharding environment, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) should allow everyone to only store a portion of the blockchain on their machine - while also allowing anyone to search the entire blockchain across everyone's machines.
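As a hedged sketch of that division of labor: a validating node would route each double-spend check to the single shard that owns the outpoint, instead of scanning (or storing) the whole chain. The structure and names below are hypothetical, not an actual Bitcoin interface.

```python
N_SHARDS = 58

def owning_shard(outpoint: str) -> int:
    return hash(outpoint) % N_SHARDS

# Each shard stores only its slice of the spent-outpoint set.
shard_spent_sets = [set() for _ in range(N_SHARDS)]

def try_spend(outpoint: str) -> bool:
    """Accept the spend iff no previous spend of this outpoint is found."""
    shard = shard_spent_sets[owning_shard(outpoint)]
    if outpoint in shard:
        return False  # previous spend detected: reject the double-spend
    shard.add(outpoint)
    return True

assert try_spend("txid123:0") is True   # first spend accepted
assert try_spend("txid123:0") is False  # replay rejected
```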
This might involve something like BUIP024's "address sharding" - or it could involve something like distributed trustless Merkle trees.
In either case, it's easy to see that the basic data structures of the system would remain conceptually unaltered - but in the sharding approaches, these structures would be logically distributed across multiple physical devices, in order to provide virtually unlimited scaling while dramatically reducing resource requirements.
This would be the most "conservative" approach to scaling Bitcoin: leaving the data structures of the system conceptually the same - and just spreading them out more, by adding the appropriately defined mathematical DECOMPOSE and RECOMPOSE operators (used in successful sharding approaches), which can be easily proven to preserve the same properties as the original system.
Conclusion
Bitcoin isn't the only project in the world which is permissionless and distributed.
Other projects (the BOINC-based, permissionless, decentralized Folding@home, SETI@home, and PrimeGrid - as well as Google's permissioned, centralized MapReduce-based search engine) have already achieved unlimited scaling by providing simple mathematical DECOMPOSE and RECOMPOSE operations (and coordination mechanisms) to break big problems into smaller pieces - without changing the properties of the problems or solutions. This provides massive scaling while dramatically reducing resource requirements - with several projects attracting over 100,000 nodes, much more than Bitcoin's mere 6,000 nodes - without even offering any of Bitcoin's financial incentives.
Although certain "legacy" Bitcoin development teams such as Blockstream / Core have been neglecting sharding-based scaling approaches to massive on-chain scaling (perhaps because their business models are based on misguided off-chain scaling approaches involving radical changes to Bitcoin's current successful network architecture, or even perhaps because their owners such as AXA and PwC don't want a counterparty-free new asset class to succeed and destroy their debt-based fiat wealth), emerging proposals from independent developers suggest that on-chain scaling for Bitcoin will be based on proven sharding architectures such as MapReduce and BOINC - and so we should pay more attention to these innovative, independent developers who are pursuing this important and promising line of research into providing sharding solutions for virtually unlimited on-chain Bitcoin scaling.
submitted by ydtm to btc [link] [comments]
