A New Test Network Just Activated and Ethereum 2.0 Will Be Using It
A highly anticipated upgrade to ethereum is gearing up for its first dry run, and the test network of choice will be an unfamiliar name to many.
Launched today, the Görli blockchain will be the proving ground for Prysm, software created by Prysmatic Labs that will feature the first of three phases of deployment of the upgrade, often referred to as ethereum 2.0 or Serenity. As such, Görli represents a small step on the long road to solving ethereum’s well-known scaling challenges.
Görli is described on the official website as “the first proof-of-authority cross-client testnet, synching Parity Ethereum, Geth, Nethermind, Pantheon, and EthereumJS.” The website adds that the testnet “is a community-based project, completely open-source” that has been in the works since September.
As developer Afri Schoedon, the release manager of the Parity Ethereum client, told CoinDesk:
“The vision of the Görli testnet is to enable developers for building applications on a reliable, high available testing network without restricting them in the tools they can use.”
And though the community-driven ethos surrounding the creation of Görli is important, Preston Van Loon, co-founder and technical lead of Prysmatic Labs, admitted that there is a different reason why it decided to launch Prysm on this particular test network.
Creating a formal Github request about two weeks prior to Görli’s launch, developers at Prysmatic Labs jumped at the opportunity to secure a fixed amount of GöETH – the native currency of the Görli network.
“Using Görli is beneficial because they haven’t started yet so we can ask for a really large amount of tokens in their genesis file. I’ve asked them for 10 million which is representative of what we might actually need to start ethereum 2.0,” said Van Loon. “It’s like the initial value for launching the test network … We’ll start at block zero with this amount and move on from there.”
Adding that it would be exceedingly difficult to secure this amount of tokens on a test network mid-chain, Van Loon elaborated:
“I wanted to get enough that we can get a realistic simulation …. and it will be really hard to do this later – to ask for that amount – [after] they launch.”
Still, even in the blockchain space, nothing is achieved without a little give-and-take.
In talks with members of the Görli team about this initial deposit of GöETH, Van Loon said that in exchange, Prysmatic Labs has agreed to run a node that will connect to and support validation of the Görli testnet.
“We’ve volunteered to run a node for them in exchange for this and so we’re helping each other out in that sense and we’re totally happy to do that … The talks have been so far [about] can we work together, can we support you and you’ll support us kind of thing,” said Van Loon.
The road map
Stepping back, Prysm itself will not launch on the new testnet until February or March, Van Loon said.
As explained by ethereum core developer Justin Drake during an “Ask Me Anything” Reddit forum last week, the full Serenity upgrade is envisioned to be rolled out piecemeal onto the main ethereum blockchain.
Phase zero – the one that will be tested by Prysmatic Labs – will include, among other features, a beacon chain, or a proof-of-stake (PoS) blockchain functioning as the “heartbeat” of the new ethereum network, as ethereum founder Vitalik Buterin put it on a Reddit thread.
As a caveat, Van Loon told CoinDesk the release would not be “a 100 percent complete phase zero implementation,” explaining that validators do not stand to lose their staked ether if they misbehave.
“When our test net launches, it’s going to be making assumptions that everybody is honest. So the penalty system I don’t think will be there on the day we launch but that’s not the point yet. That’s going to come later. The point is do these have connectivity and does the chain advance over longer periods of time.”
The other test network
In fact, Prysm will be operating on two test networks.
By staking a set amount of GöETH, users of the Görli network will be able to participate on a separate Prysm test network as “validators.”
As Van Loon explained, these validators are basically the new miners of ethereum 2.0. In phase zero of Serenity, they will either “produce a block as a block producer or attest to newly produced blocks [as] valid and it exists.”
As such, Görli will actually function as the test network to initiate new validators, while a separate test network engineered by Prysmatic Labs and featuring beacon chain technology will monitor the activity and “work” of these transaction validators.
On the latter system, developers will make sure the on-boarding process for new validators, in which they must stake a certain amount of GöETH before being assigned roles to perform, goes smoothly.
The big picture
While this is a partial “phase zero implementation” excluding key elements of ethereum 2.0, such as sharding, which comes in later phases, Van Loon adds that the testnet launch will be useful for a number of other reasons.
For example, he told CoinDesk:
“The purpose of the testnet is to explore connectivity between other [software] clients, so we’re hoping that we’ll launch ours and then another team will launch their test network and we can have ours talking to each other and start finding problems sooner rather than later.”
But perhaps the most important reason Van Loon highlighted is that up until this point ethereum 2.0 had only materialized into proofs-of-concept and demos. A prototype, he said, while “it looks cool and it’s exciting, doesn’t really mean much.”
“We want to give someone who’s interested in this something to play with because when you download a demo you’re talking to yourself … Having this actual representation of a blockchain network is going to be important,” Van Loon said.
Speaking to this, though, during a developer call about Serenity, ethereum core developer Danny Ryan warned client teams like Prysmatic Labs to first target “your client to speak to other versions of your client.”
“I think demonstrating internal networks within a singular client is more sane,” said Ryan in the call. “And just going to keep you from, one, having to keep the [specifications] from having to harden entirely and, two, there could be a lot of time wasted until we iron out some of these bugs … I would imagine the release today has critical bugs.”
Looking ahead, Van Loon added that in terms of continued research and development on ethereum 2.0, ethereum researchers are actually ahead of implementors like Prysmatic Labs and other client teams. The design is almost fully fleshed out, with a first version of the technical guidelines, also called specifications, for phase zero of Serenity released today.
Now, ethereum 2.0 client builders are racing to catch up, the reverse of an earlier situation where implementors were waiting on the researchers to come up with proposals.
Van Loon concluded:
“In terms of the research I think it’s pretty much there. We haven’t been blocked by research for quite some time. We haven’t been able to keep up with them.”
Görli launch watch party image courtesy María Paula Fernandez
The leader in blockchain news, CoinDesk is a media outlet that strives for the highest journalistic standards and abides by a strict set of editorial policies. CoinDesk is an independent operating subsidiary of Digital Currency Group, which invests in cryptocurrencies and blockchain startups.
Toward a 12-second Block Time
One of the annoyances of the blockchain as a decentralized platform is the sheer length of delay before a transaction gets finalized. One confirmation in the Bitcoin network takes ten minutes on average, but in reality due to statistical effects when one sends a transaction one can only expect a confirmation within ten minutes 63.2% of the time; 36.8% of the time it will take longer than ten minutes, 13.5% of the time longer than twenty minutes and 0.25% of the time longer than an hour. Because of fine technical points involving Finney attacks and sub-50% double spends, for many use cases even one confirmation is not enough; gambling sites and exchanges often need to wait for three to six blocks to appear, often taking over an hour, before a deposit is confirmed. In the time before a transaction gets into a block, security is close to zero; although many miners refuse to forward along transactions that conflict with transactions that had already been sent earlier, there is no economic necessity for them to do so (in fact quite the contrary), and some don’t, so reversing an unconfirmed transaction is possible with about a 10-20% success rate.
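The percentages quoted above follow from modeling block discovery as a Poisson process: the wait from any moment until the next confirmation is exponentially distributed with mean 600 seconds, so P(wait > t) = exp(-t/600). A quick check:

```python
import math

MEAN = 600  # average seconds per Bitcoin block

def p_wait_longer_than(t: float) -> float:
    # exponential tail: probability the next confirmation takes longer than t
    return math.exp(-t / MEAN)

print(f"within 10 min:   {1 - p_wait_longer_than(600):.1%}")   # 63.2%
print(f"longer than 10m: {p_wait_longer_than(600):.1%}")       # 36.8%
print(f"longer than 20m: {p_wait_longer_than(1200):.1%}")      # 13.5%
print(f"longer than 1h:  {p_wait_longer_than(3600):.2%}")      # 0.25%
```

The four figures cited in the paragraph drop straight out of the same formula.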
In many cases, this is fine; if you pay for a laptop online, and then manage to yank back the funds five minutes later, the merchant can simply cancel the shipping; online subscription services work the same way. However, in the context of some in-person purchases and digital goods purchases, it is highly inconvenient. In the case of Ethereum, the inconvenience is greater; we are trying to be not just a currency, but rather a generalized platform for decentralized applications, and especially in the context of non-financial apps people tend to expect a much more rapid response time. Thus, for our purposes, having a blockchain that is faster than 10 minutes is critical. However, the question is, how low can we go, and if we go too low does that destabilize anything?
Overview of Mining
First off, let us have a quick overview of how mining works. The Bitcoin blockchain is a series of blocks, with each one pointing to (ie. containing the hash of) the previous. Each miner in the network attempts to produce blocks by first grabbing up the necessary data (previous block, transactions, time, etc), building up the block header, and then continually changing a value called the nonce until the nonce satisfies a function called a “proof of work condition” (or “mining algorithm”). This algorithm is random and usually fails; on average, the Bitcoin network needs to collectively make about 10^20 attempts before a valid block is found. Once some random miner finds a block that is valid (ie. it points to a valid previous block, its transactions and metadata are valid, and its nonce satisfies the PoW condition), that block is broadcast to the network and the cycle begins again. The miner of that block receives some quantity of coins (25 BTC in Bitcoin) as a reward.
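The nonce search described above can be sketched in a few lines. The header contents and the target here are toy illustrations (real Bitcoin uses an 80-byte header format and a vastly harder, dynamically adjusted target), but the loop structure is the same:

```python
import hashlib

# Toy proof-of-work: find a nonce such that sha256(header + nonce) falls
# below a target. The target below succeeds about once per 65,536 hashes.
TARGET = 2**240

def mine(header: bytes) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < TARGET:  # proof-of-work condition met
            return nonce
        nonce += 1

header = b"prev-hash|tx-root|timestamp"
nonce = mine(header)
digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
assert int.from_bytes(digest, "big") < TARGET  # anyone can re-check in one hash
```

Finding the nonce takes many attempts; verifying it takes one, which is what makes the scheme work.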
The “score” of a block is defined in a simplified model as the number of blocks in the chain going back from it all the way to the genesis (formally, it’s the total mining difficulty, so if the difficulty of the proof of work condition increases blocks created under this new more stringent condition count for more). The block that has the highest score is taken to be “truth”. A subtle, but important, point is that in this model the incentive for miners is always to mine on the block with the highest score, because the block with the highest score is what users ultimately care about, and there are never any factors that make a lower-score block better. If we fool around with the scoring model, then if we are not careful this might change; but more on this later.
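In this simplified model, the score is just the sum of difficulties back to genesis; a sketch, with an illustrative block structure:

```python
def chain_score(chain):
    """Score = total mining difficulty from genesis (simplified model)."""
    return sum(block["difficulty"] for block in chain)

# Fewer blocks can still win if they were mined under a stricter condition:
chain_a = [{"difficulty": 100}] * 5                    # five easy blocks
chain_b = [{"difficulty": 300}, {"difficulty": 300}]   # two harder blocks
assert chain_score(chain_b) > chain_score(chain_a)
```

This is why "longest chain" is shorthand: the chain with the most total work, not the most blocks, is taken to be truth.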
We can model this kind of network thus:
However, the problems arise when we take into account the fact that network propagation is not instant. According to a 2013 paper from Decker and Wattenhofer in Zurich, once a miner produces a block it takes on average 6.5 seconds for the block to reach 50% of nodes, 40 seconds for it to reach 95% of nodes, and the mean delay is 12.6 seconds. Thus, a more accurate model might be:
This gives rise to the following problem: if, at time T = 500, miner M mines a block B’ on top of B (where “on top of” is understood to mean “pointing to as the previous block in the chain”), then miner N might not hear about the block until time T = 510, so until T = 510 miner N will still be mining on B. If miner N finds a block in that interval, then the rest of the network will reject miner N’s block because they already saw miner M’s block, which has an equal score:
Stales, Efficiency and Centralization
So what’s wrong with this? Actually, two things. First, it weakens the absolute strength of the network against attacks. At a block time of 600 seconds, as in Bitcoin, this is not an issue; 12 seconds is a very small amount of time, and Decker and Wattenhofer estimate the total stale rate as being around 1.7%. Hence, an attacker does not actually need 50.001% of the network in order to launch a 51% attack; if the attacker is a single node, they would only need 0.983 / (1 + 0.983) ≈ 49.57%. We can estimate this via a mathematical formula: if transit time is 12 seconds, then after a block is produced the network will be producing stales for 12 seconds before the block propagates, so we can assume an average of 12 / 600 = 0.02 stales per valid block, or a stale rate of 1.96%. At 60 seconds per block, however, we get 12 / 60 = 0.2 stales per valid block, or a stale rate of 16.67%. At 12 seconds per block, we get 12 / 12 = 1 stale per valid block, or a stale rate of 50%. Thus, we can see the network get substantially weaker against attacks.
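The stale-rate arithmetic above, as a function: with transit time t and block interval T, the network mines on stale data t out of every T + t seconds.

```python
def stale_rate(transit: float, block_time: float) -> float:
    # t stales per valid block gives a stale fraction of t / (T + t)
    return transit / (block_time + transit)

assert round(stale_rate(12, 600), 4) == 0.0196   # ~2% at Bitcoin's 10 minutes
assert round(stale_rate(12, 60), 4) == 0.1667    # 16.67% at 60 seconds
assert stale_rate(12, 12) == 0.5                 # 50% at 12 seconds
```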
However, there is also another negative consequence of stale rates. One of the more pressing issues in the mining ecosystem is the problem of mining centralization. Currently, most of the Bitcoin network is split up into a small number of “mining pools”, centralized constructions where miners share resources in order to receive a more even reward, and the largest of these pools has for months been bouncing between 33% and 51% of network hashpower. In the future, even individual miners may prove threatening; right now 25% of all new bitcoin mining devices are coming out of a single factory in Shenzhen, and if the pessimistic version of my economic analysis proves correct that may eventually morph into 25% of all Bitcoin miners being in a single factory in Shenzhen.
So how do stale rates affect centralization? The answer is a clever one. Suppose that you have a network with 7000 pools with 0.01% hashpower, and one pool with 30% hashpower. 70% of the time, the last block is produced by one of these miners, and the network hears about it in 12 seconds, and things are somewhat inefficient but nevertheless fair. 30% of the time, however, it is the 30% hashpower mining pool that produced the last block; thus, it “hears” about the block instantly and has a 0% stale rate, whereas everyone else still has their full stale rate.
Because our model is still pretty simple, we can still do some math on an approximation in closed form. Assuming a 12 second transit time and a 60-second block time, we have a stale rate of 16.67% as described above. The 30% mining pool will have a 0% stale rate 30% of the time, so its efficiency multiplier will be 0.833 * 0.7 + 1 * 0.3 = 0.8831 , whereas everyone else will have an efficiency multiplier of 0.833; that’s a 5.7% efficiency gain which is pretty economically significant especially for mining pools where the difference in fees is only a few percent either way. Thus, if we want a 60 second block time, we need a better strategy.
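The multiplier arithmetic can be checked directly. (Using the unrounded base efficiency, the pool's multiplier lands at 0.8833 rather than the 0.8831 you get from the rounded 0.833 above; the advantage comes out the same.)

```python
transit, block_time = 12, 60
base_eff = 1 - transit / (block_time + transit)  # everyone's efficiency: 0.8333
pool_eff = base_eff * 0.7 + 1.0 * 0.3            # 30% pool sees its own blocks instantly

advantage = (pool_eff - base_eff) / pool_eff     # everyone else's shortfall vs the pool
assert round(base_eff, 3) == 0.833
assert round(pool_eff, 4) == 0.8833
assert round(advantage, 3) == 0.057              # the ~5.7% edge cited above
```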
The beginnings of a better approach come from a paper entitled “Fast Money Grows on Trees, not Chains”, published by Aviv Zohar and Yonatan Sompolinsky in December 2013. The idea is that even though stale blocks are not currently counted as part of the total weight of the chain, they could be; hence they propose a blockchain scoring system which takes stale blocks into account even if they are not part of the main chain. As a result, even if the main chain is only 50% efficient or even 5% efficient, an attacker attempting to pull off a 51% attack would still need to overcome the weight of the entire network. This, theoretically, solves the efficiency issue all the way down to 1-second block times. However, there is a problem: the protocol, as described, only includes stales in the scoring of a blockchain; it does not assign the stales a block reward. Hence, it does nothing to solve the centralization problem; in fact, with a 1-second block time the most likely scenario involves the 30% mining pool simply producing every block. Of course, the 30% mining pool producing every block on the main chain is fine, but only if the blocks off chain are also fairly rewarded, so the 30% mining pool still collects not much more than 30% of the revenue. But for that rewarding stales will be required.
Now, we can’t reward all stales always and forever; that would be a bookkeeping nightmare (the algorithm would need to check very diligently that a newly included uncle had never been included before, so we would need an “uncle tree” in each block alongside the transaction tree and state tree) and more importantly it would make double-spends cost-free. Thus, let us construct our first protocol, single-level GHOST, which does the minimal thing and takes uncles only up to one level (this is the algorithm used in Ethereum up to now):
- Every block must point to a parent (ie. previous block), and can also include zero or more uncles. An “uncle” is defined as a block with a valid header (the block itself need not be valid, since we only care about its proof-of-work) which is the child of the parent of the parent of the block but not the parent (ie. the standard definition of “uncle” from genealogy that you learned at age 4).
- A block on the main chain gets a reward of 1. When a block includes an uncle, the uncle gets a reward of 7/8 and the block including the uncle gets a reward of 1/16.
- The score of a block is zero for the genesis block, otherwise the score of the parent plus the difficulty of the block multiplied by one plus the number of included uncles.
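The three rules above can be sketched as functions; the block fields here are illustrative stand-ins, not the actual Ethereum structures:

```python
UNCLE_REWARD = 7 / 8    # to the uncle's miner
NEPHEW_REWARD = 1 / 16  # to the miner who includes the uncle

def block_rewards(block):
    """Reward 1 per main-chain block, plus 1/16 per included uncle to the
    includer and 7/8 to each uncle's miner."""
    rewards = {block["miner"]: 1 + NEPHEW_REWARD * len(block["uncles"])}
    for uncle in block["uncles"]:
        rewards[uncle["miner"]] = rewards.get(uncle["miner"], 0) + UNCLE_REWARD
    return rewards

def score(block, parent_score):
    """Parent's score plus this block's difficulty times (1 + #uncles)."""
    return parent_score + block["difficulty"] * (1 + len(block["uncles"]))

b = {"miner": "A", "difficulty": 10, "uncles": [{"miner": "B"}, {"miner": "C"}]}
assert block_rewards(b) == {"A": 1 + 2 / 16, "B": 7 / 8, "C": 7 / 8}
assert score(b, 500) == 500 + 10 * 3
```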
Thus, in the graphical blockchain example given above, we’ll instead have something like this:
Here, the math gets more complex, so we’ll make some intuitive arguments and then take the lazy approach and simulate the whole thing. The basic intuitive argument is this: in the basic mining protocol, for the reasons we described above, the stale rate is roughly t/(T+t) where t is the transit time and T is the block interval, because t/T of the time miners are mining on old data. With single-level GHOST, the failure condition changes from mining one stale to mining two stales in a row (since uncles can get included but relatives with a divergence of 2 or higher cannot), so the stale rate should be roughly (t/(T+t))^2, ie. about 2.7% instead of 16.7%. Now, let’s use a Python script to test that theory:
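The original sim script is not reproduced here, so the following is a much-simplified stand-in: one aggregate network, one-second ticks, Poisson block production, and a fixed 12-second propagation delay. It only checks the stale-rate intuition, not the reward or centralization numbers discussed below.

```python
import random

random.seed(1)
BLOCK_TIME, TRANSIT, TICKS = 60.0, 12.0, 1_000_000

blocks = stales = double_stales = 0
last_found = -10**9   # time the most recent block was found
prev_gap = 10**9      # gap between the two most recent blocks

for t in range(TICKS):
    if random.random() < 1 / BLOCK_TIME:   # a PoW solution is found this second
        gap = t - last_found
        if gap < TRANSIT:                  # found while the last block was still in transit
            stales += 1
            if prev_gap < TRANSIT:         # ...and so was its predecessor: GHOST fails
                double_stales += 1
        blocks += 1
        prev_gap, last_found = gap, t

print(f"stale rate:          {stales / blocks:.3f}")         # theory: ~0.17
print(f"two stales in a row: {double_stales / blocks:.3f}")  # theory: ~0.03
```

Over a million ticks the two rates land within about a percentage point of the t/(T+t) and (t/(T+t))^2 predictions above.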
The results can be parsed as follows. The top five numbers are a centralization indicator; here, we see that a miner with 25% hashpower gets 25.39x as much reward as a miner with 1% hashpower. The efficiency is 0.9798, meaning that 2.02% of all blocks are not included at all, and there are 0.158 uncles per block; hence, our intuitions about a 16% stale rate without uncle inclusion and 2.7% with uncle inclusion are confirmed almost exactly. Note that the actual block time is 70.85s because even though there is a valid proof of work solution every 60s, 2% of them are lost and 14% of them make it into only the next block as an uncle, not into the main chain.
Now, there is a problem here. The original authors of the GHOST paper did not include uncle/stale rewards, and although I believe it is a good idea to deviate from their prescription for the reasons I described above, they did not do so for a reason: it makes the economic analysis more uncomfortable. Specifically, when only the main chain gets rewarded there is an unambiguous argument why it’s always worth it to mine on the head and not some previous block, namely the fact that the only thing that conceivably differentiates any two blocks is their score and higher score is obviously better than lower score, but once uncle rewards are introduced there are other factors that make things somewhat tricky.
Specifically, suppose that the main chain has its last block M (score 502) with parent L (score 501) with parent K (score 500). Also suppose that K has two stale children, both of which were produced after M so there was no chance for them to be included in M as uncles. If you mine on M , you would produce a block with score 502 + 1 = 503 and reward 1, but if you mine on L you would be able to include K ’s children and get a block with score 501 + 1 + 2 = 504 and reward 1 + 0.0625 * 2 = 1.125 .
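The comparison works out as follows, using the scoring and nephew-reward rules defined earlier:

```python
NEPHEW_REWARD = 1 / 16

def mined_on(parent_score, n_uncles, difficulty=1):
    # score rule: parent's score plus difficulty times (1 + #uncles)
    score = parent_score + difficulty * (1 + n_uncles)
    reward = 1 + NEPHEW_REWARD * n_uncles
    return score, reward

assert mined_on(502, 0) == (503, 1.0)    # on M: higher parent score, no uncles
assert mined_on(501, 2) == (504, 1.125)  # on L: includes K's two stale children
```

Mining on the non-head block L wins on both score and reward, which is exactly the incentive problem being described.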
Additionally, there is a selfish-mining-esque attack against single-level GHOST. The argument is as follows: if a mining pool with 25% hashpower were not to include any other blocks, then in the short term it would hurt itself because it would no longer receive the 1/16x nephew reward but it would hurt others more. Because in the long-term mining is a zero-sum game since the block time rebalances to keep issuance constant, this means that not including uncles might actually be a dominant strategy, so centralization concerns are not entirely gone (specifically, they still remain 30% of the time). Additionally, if we decide to crank up the speed further, say to a 12 second target block time, single-level is just not good enough. Here’s a result with those statistics:
18% centralization gain. Thus, we need a new strategy.
A New Strategy
The first idea I tried about one week ago was requiring every block to have five uncles; this would in a sense decentralize the production of each block further, ensuring that no miner had a clear advantage in making the next block. Since the math for that is pretty hopelessly intractable (well, if you try hard at it for months maybe you could come up with something involving nested Poisson processes and combinatorial generating functions, but I’d rather not), here’s the sim script. Note that there are actually two ways you can do the algorithm: require the parent to be the lowest-hash child of the grandparent, or require the parent to be the highest-score child of the grandparent. With the first way (to do this yourself, modify line 56 to if newblock[“id”] > self.blocks[self.head][“id”]:), we get this:
Ooooops! Well, let’s try the highest-score model:
So here we have a very counterintuitive result: the 25% hashpower mining pool gets only 24x as much as a 1% hashpower pool. Economic sublinearity is a cryptoeconomic holy grail, but unfortunately it is also somewhat of a perpetual motion machine; unless you rely on some specific thing that people have a certain amount of (eg. home heating demand, unused CPU power), there is no way to get around the fact that even if you come up with some clever sublinear concoction, an entity with 25x as much power going in will at the very least be able to pretend to be 25 separate entities and thus claim 25 separate 1x rewards. Thus, we have an unambiguous (okay, fine, 99 point something percent confidence) empirical proof that the 25x miners are acting suboptimally, meaning that the optimal strategy in this environment is not to always mine the block with the highest score.
The reasoning here is this: if you mine on a block that has the highest score, then there is some chance that someone else will discover a new uncle one level back, and then mine a block on top of that, creating a new block at the same level as your block but with a slightly higher score and leaving you in the dust. However, if you try to be one of those uncles, then the highest-score block at the next level will certainly want to include you, so you will get the uncle reward. The presence of one non-standard strategy strongly suggests the existence of other, and more exploitative, non-standard strategies, so we’re not going this route. However, I chose to include it in the blog post to show an example of what the dangers are.
So what is the best way forward? As it turns out, it’s pretty simple. Go back to single level GHOST, but allow uncles to come from up to 5 blocks back. Hence, the child of a parent of a parent (hereinafter, -2,+1-ancestor) is a valid uncle, a -3,+1-ancestor is a valid uncle, as is a -4,+1-ancestor and a -5,+1-ancestor, but a -6,+1-ancestor or a -4,+2-ancestor (ie. c(c(P(P(P(P(head)))))) where no simplification is possible) is not. Additionally, we increase the uncle reward to 15/16, and cut the nephew reward to 1/32. First, let’s make sure that it works under standard strategies. In the GHOST sim script, set UNCLE_DEPTH to 4, POW_SOLUTION_TIME to 12, TRANSIT_TIME to 12, UNCLE_REWARD_COEFF to 15/16 and NEPHEW_REWARD_COEFF to 1/32 and see what happens:
Completely reasonable all around, although note that the actual block time is 21s due to inefficiency and uncles rather than the 12s we targeted. Now, let’s try a few more trials for enlightenment and fun:
- UNCLE_REWARD_COEFF = 0.998, NEPHEW_REWARD_COEFF = 0.001 leads to the 25% mining pool getting a roughly 25.3x return, and setting UNCLE_REWARD_COEFF = 7/8 and NEPHEW_REWARD_COEFF = 1/16 leads to the 25% mining pool getting a 26.26x return. Obviously setting the UNCLE_REWARD_COEFF all the way to zero would negate the benefit completely, so it’s good to have it be as close to one as possible, but if it’s too close to one then there’s no incentive to include uncles. UNCLE_REWARD_COEFF = 15/16 seems to be a fair middle ground, giving the 25% miner a 2.5% centralization advantage.
- Allowing uncles going back 50 blocks, surprisingly, has fairly little substantial efficiency gain. The reason is that the dominant weakness of -5,+1 GHOST is the +1, not the -5, ie. stale c(c(P(P(..P(head)..)))) blocks are the problem. As far as centralization goes, with 0.998/0.001 rewards it knocks the 25% mining pool’s reward down to essentially 25.0x. With 15/16 and 1/32 rewards there is no substantial gain over the -4,+1 approach.
- Allowing -4,+3 children increases efficiency to effectively 100%, and cuts centralization to near-zero assuming 0.998/0.001 rewards and has negligible benefit assuming 15/16 and 1/32 rewards.
- If we reduce the target block time to 3 seconds, efficiency goes down to 66% and the 25% miner gets a 31.5x return (ie. 26% centralization gain). If we couple this with a -50,+1 rule, the effect is negligible (31.5x -> 31.3x), but if we use a -4,+3 rule efficiency goes up to 83% and the 25% miner only gets a 27.5x return (the way to add this to the sim script is to add, after line 65, for c2 in self.children.get(c, []): u[c2] = True for a -n,+2 rule and then similarly nest down one level further for -n,+3). Additionally, the actual block time in all three of these scenarios is around 10 seconds.
- If we reduce the target block time to 6 seconds, then we get an actual block time of 15 seconds and the efficiency is 82% and the 25% miner gets 26.8x even without improvements.
Now, let’s look at the other two risks of limited GHOST that we discussed above: the non-head dominant strategy and the selfish-mining attack. Note that there are actually two non-head strategies: try to take more uncles, and try to be an uncle. Trying to take more uncles was useful in the -2,+1 case, and trying to be an uncle was useful in the case of my abortive mandatory-5-uncles idea. Trying to be an uncle is not really useful when multiple uncles are not required, since the reason why that alternative strategy worked in the mandatory-5-uncle case is that a new block is useless for further mining without siblings. Thus, the only potentially problematic strategy is trying to include uncles. In the one-block case, it was a problem, but here it is not, because most uncles that can be included after n blocks can also be included after n+1 blocks, so the practical extent to which it will matter is limited.
The selfish-mining attack also no longer works for a similar reason. If you fail to include uncles, then the guy after you will. There are four chances for an uncle to get in, so not including uncles is a 4-party prisoner’s dilemma between anonymous players – a game that is doomed to end badly for everyone involved (except of course the uncles themselves). There is also one last concern with this strategy: we saw that rewarding all uncles makes 51% attacks cost-free, so are they cost-free here? Beyond one block, the answer is no; although the first block of an attempted fork will get in as an uncle and receive its 15/16x reward, the second and third and all subsequent ones will not, so starting from two confirmations attacks still cost miners almost as much as they did before.
Twelve seconds, really?
The most surprising aspect of Decker and Wattenhofer’s findings is the sheer length of time that blocks take to propagate – an amazingly slow 12 seconds. In Decker and Wattenhofer’s analysis, the 12-second delay is actually mostly because of the need to download and verify the blocks themselves; ie. the algorithm that Bitcoin clients follow is serial: download the full block, verify it, and only then forward it on to other nodes.
However, Decker and Wattenhofer did propose a superior strategy, which looks something like this: forward a block’s header as soon as its proof of work checks out, and download and verify the block body in parallel.
This allows all of the steps to happen in parallel; headers can get broadcasted first, then blocks, and the verifications do not need to all be done in series. Although Decker and Wattenhofer do not provide their own estimate, intuitively this seems like it may speed up propagation by 25-50%. The algorithm is still non-exploitable because in order to produce an invalid block that passes the first check a miner would still need to produce a valid proof of work, so there is nothing that the miner could gain. Another point that the paper makes is that the transit time is, beyond a certain point, proportional to block size; hence, cutting block size by 50% will also cut transit time to something like 25-40%; the nonscaling portion of the transit time is something like 2s. Hence, a 3-second target block time (and 5s actual block time) may be quite viable. As usual, we’ll be more conservative at first and not take things that far, but a block time of 12s does nevertheless seem to be very much achievable.
How does Ethereum regulate the time between blocks?
I was reading “Explanation of genesis file” and I found the description of the timestamp:
A scalar value equal to the reasonable output of Unix’ time() function at this block inception.
This mechanism enforces a homeostasis in terms of the time between blocks. A smaller period between the last two blocks results in an increase in the difficulty level and thus additional computation required to find the next valid block. If the period is too large, the difficulty, and expected time to the next block, is reduced.
The timestamp also makes it possible to verify the order of blocks within the chain (Yellowpaper, 4.3.4. (43)).
Note: Homeostasis is the property of a system in which variables are regulated so that internal conditions remain stable and relatively constant.
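The adjustment the quoted passage describes looked roughly like this in Homestead-era Ethereum (EIP-2); this is a simplified sketch that ignores the difficulty bomb and the minimum-difficulty floor, not the exact client code:

```python
def next_difficulty(parent_diff: int, parent_ts: int, block_ts: int) -> int:
    """Simplified Homestead-era difficulty adjustment (EIP-2 sketch)."""
    adjustment = parent_diff // 2048
    # blocks arriving in under 10 s push difficulty up; slower ones pull it
    # down, with the downward step capped at -99 units of adjustment
    sign = max(1 - (block_ts - parent_ts) // 10, -99)
    return parent_diff + adjustment * sign

assert next_difficulty(1_000_000, 0, 5) > 1_000_000    # fast block -> harder
assert next_difficulty(1_000_000, 0, 25) < 1_000_000   # slow block -> easier
```

The 10-second bucket in the rule is what anchors the equilibrium block time; it is consensus logic compiled into the client, not a runtime configuration parameter.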
How does, generally speaking, Ethereum maintain its homeostasis and regulate the time between blocks? Now it’s 15 sec, but is it possible for the Ethereum developers to increase or decrease that time without touching the client, or is this a parameter hardcoded in the client? How can I set, for example, a 30 sec time between blocks on my testnet?