Hacker News

I'm not sure I understand Q1 - that's exactly the point: If you withdraw _from your account_ and customer B withdraws from _their_ account, then the two events are unrelated and can be executed in either order (and, in fact, replicas would still have the same state even if some executed AB and some BA).
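To make the commutativity point concrete, here's a minimal sketch (my own illustration, not code from the paper): two withdrawals against distinct accounts produce the same final state regardless of application order.

```python
# Hypothetical sketch: withdrawals on distinct accounts commute,
# so replicas can apply them in either order (AB or BA).

def withdraw(state, account, amount):
    """Return a new state with `amount` deducted from `account`."""
    new_state = dict(state)
    new_state[account] -= amount
    return new_state

initial = {"A": 100, "B": 100}

# Apply the two unrelated withdrawals in both orders.
ab = withdraw(withdraw(initial, "A", 30), "B", 40)
ba = withdraw(withdraw(initial, "B", 40), "A", 30)

assert ab == ba == {"A": 70, "B": 60}
```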

The replay is part of what the authors fixed in the original protocol. I believe (though I need to read their protocol in more detail on Monday) that the intuition is this: when there's an outage and you bring a new node online, the system commits a Nop operation that conflicts with everything. This effectively creates a synchronization barrier that forces re-reading all of the previous commits.
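A sketch of that intuition (my own reading, with invented names, not the authors' code): if the conflict predicate treats a Nop as conflicting with every operation, then ordering the Nop transitively orders it against all prior commits, which is what makes it a barrier.

```python
# Illustrative conflict predicate: a Nop conflicts with everything,
# so committing one forces an ordering relative to all prior operations.

def conflicts(op1, op2):
    """Two ops conflict if either is a Nop or they touch a common account."""
    if op1["kind"] == "nop" or op2["kind"] == "nop":
        return True
    return bool(set(op1["accounts"]) & set(op2["accounts"]))

w_a = {"kind": "withdraw", "accounts": ["A"]}
w_b = {"kind": "withdraw", "accounts": ["B"]}
nop = {"kind": "nop", "accounts": []}

assert not conflicts(w_a, w_b)  # unrelated withdrawals commute
assert conflicts(nop, w_a)      # the barrier orders against everything
assert conflicts(nop, w_b)
```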

But I'm confused about the phrasing of your question because the actor isn't clear here when you say "I re-read events 1-100" -- which actor is "I"? Remember that a client of the system doesn't read "events", it performs operations, such as "read the value of variable X". In other words, clients perform operations that observe _state_, and the goal of the algorithm is to ensure that the state at the nodes is consistent according to a specific definition of consistency.

So if a client performs operations that involve a replacement node, the client contacts the node to read the state. The node is then responsible for synchronizing with the state as defined by the graph of operations conflicting with the part of the state requested by the client, which will include _all_ operations prior to the replacement of the node due to the no-op.



I forget the term; it might be Dependency Graph.

Hypothetically, let's say there's a synchronized quantum every 60 seconds. Order of operations might not matter if transactions within that window do not touch any account referenced by other transactions.

However, every withdrawal is also a deposit. If Z withdraws from Y, and Y withdraws from X, and X also withdraws from Z, there's a related path.

Order also matters if any account along the chain would reach an 'overdraft' state. The profitable thing for banks to do would be to synchronously deduct the withdrawals first, then apply the deposits, to maximize overdraft fees. A kinder thing would be the inverse: assume all payments succeed and then go after the sources. Specifying the order of applied operations, including aborts, in the case of failures is important.
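A toy illustration of why overdrafts break commutativity (my own example, not from the system under discussion): debiting before crediting can dip the balance negative and trigger a fee, while the reverse order never does, so the two orders produce different outcomes.

```python
# Toy model: apply signed amounts in order, charging a fee whenever
# the balance goes negative. Order changes the fees charged.

def apply_ops(balance, ops, fee=35):
    fees = 0
    for amount in ops:
        balance += amount
        if balance < 0:
            fees += fee
    return balance, fees

# Y starts at 10, owes -40 to Z, and is owed +50 from X.
debit_first = apply_ops(10, [-40, +50])   # dips to -30, fee charged
credit_first = apply_ops(10, [+50, -40])  # never negative, no fee

assert debit_first == (20, 35)
assert credit_first == (20, 0)
```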


Those transfers would be represented as having dependencies on both accounts they touch, and so would be forced to be ordered.

Transfer(a, b, $50)

and

Transfer(b, c, $50)

are conflicting operations. They don't commute because of the possibility that b could overdraft. So the programmer would need to list (a, b) as the dependencies of the first transaction and (b, c) as those of the second. Doing so would prevent concurrent submissions of these transactions from being executed on the fast path.
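The dependency check described above can be sketched as a set intersection (names here are mine, not the protocol's): two transfers must be ordered when their declared dependency sets share an account.

```python
# Minimal sketch of dependency-set conflict detection for transfers.

def deps(transfer):
    """The accounts a Transfer(src, dst, amount) depends on."""
    src, dst, _amount = transfer
    return {src, dst}

def must_order(x, y):
    """Transfers conflict (must be ordered) if their deps overlap."""
    return bool(deps(x) & deps(y))

t1 = ("a", "b", 50)  # Transfer(a, b, $50)
t2 = ("b", "c", 50)  # Transfer(b, c, $50)

assert must_order(t1, t2)                  # share account b, so ordered
assert not must_order(t1, ("c", "d", 50))  # disjoint, eligible for fast path
```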


As I was implying with the chains after the toy example, ordering matters when a long sequence of operations touches many accounts. How easy is it to track all of the buckets touched, when every source could itself have a source upstream of it in some other transaction?

A temporary table could hold that sort of data in a format that makes sense for rolling back the related transactions and then replaying them 'on the slow path' (if order matters).


> I'm not sure I understand Q1 - that's exactly the point: If you withdraw _from your account_ and customer B withdraws from _their_ account

Same account.

> the actor isn't clear here when you say "I re-read events 1-100" -- which actor is "I"?

The fundamental purpose of Paxos is that different actors will come to a consensus. If different actors see different facts, no consensus was reached, and Paxos wasn't necessary.


If it's the same account, the two operations will have the same dependencies, and thus the system will be forced to order them the same at all replicas.


Between their two questions, I'm guessing what they're more directly getting at is: if events 100 and 101 can be reordered, what's the guarantee that reconnecting doesn't end up giving you event 100 twice and skipping 101?

[Edit, rereading] Shortened down, just this part is probably it:

> which will include _all_ operations prior to the replacement of the node due to the no-op.

Sounds like a graph merge, not actually a replay.



