I think this is ultimately true, in a sense, but the challenge is correctly handling all of the edge cases. It's a problem on par with self-driving cars in difficulty.
It's handled by humans over VHF because a lot of unpredictable things happen in busy airspace, and it would require a massive investment for machines to automate all of it.
I'm also not sure that people would accept the safety risk of airplanes' autopilots being given automated instructions by ATC over the air. There's a large potential vulnerability and safety risk there. I think there's some potential for automation to replace the role of ATC currently, but I suspect it would still be by transmitting instructions to human pilots, not directly to the autopilot.
Lastly, for such a system to ever be bootstrapped, it would still need to handle all of the planes that don't have this automation yet; it would still need to support communicating with pilots verbally over VHF. An entirely AI ATC system that autonomously listens and responds by voice over VHF seems like a plausible first step, though.
Doesn't the budget to pay these people already exist? They are current employees, after all. Their terminations will still shrink the budget, just not as soon as a termination without a severance package.
Maybe there are some legal differences in offering what would otherwise be wages as a lump-sum payment, but the budget for those wages already exists (else they could not be employed).
Looking at the source, I don't think there's an actual need for the constructor to take a `Class<T>` parameter. It's used internally to initialize an array [1].
However, I think this alternative would work equally well, and would not require a `Class<T>` parameter:
    this.entries = (T[]) new Object[capacity];
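To sketch what I mean (this is hypothetical — the class and field names here are mine, not the library's), the constructor can create the backing array with an unchecked cast. The cast is safe as long as the array never escapes the class with its erased element type:

```java
import java.util.Objects;

// Hypothetical sketch: a generic buffer whose backing array is created
// without a Class<T> parameter. The unchecked cast is safe because the
// Object[] never leaves this class typed as T[].
final class Buffer<T> {
    private final T[] entries;

    @SuppressWarnings("unchecked")
    Buffer(final int capacity) {
        this.entries = (T[]) new Object[capacity];
    }

    void set(final int index, final T value) {
        entries[index] = Objects.requireNonNull(value);
    }

    T get(final int index) {
        return entries[index];
    }
}
```

This is the same trick `java.util.ArrayList` uses internally, so it's a well-trodden pattern.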
There are some other choices in the library that I don't understand, such as the choice to have a static constructor with this signature:
    public static RingBuffer<Object> create(final int capacity) {
        return create(capacity, false);
    }
This returns the type `RingBuffer<Object>`, which isn't as useful as it could be; with appropriate idiomatic use of generics it could return a `RingBuffer<T>`:
    public static <T> RingBuffer<T> create(final int capacity, final boolean orderedReads) {
        return new RingBuffer<>(capacity, orderedReads);
    }
It's possible that this code was written by someone who is still learning idiomatic Java style, or effective use of generics.
I'm also curious about the choice to have `get()` return `null`. I think I'd rather have seen this modeled with `Optional`. My preferred style when writing Java code is to employ non-nullable references wherever possible (though the return from `get()` is marked `@Nullable` at least).
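As a sketch of what I'd prefer (hypothetical code, not the library's actual API — `Slot` and `put` are names I made up for illustration), the lookup would wrap the possibly-absent value in `Optional` so callers must handle the empty case explicitly:

```java
import java.util.Optional;

// Hypothetical sketch: returning Optional instead of null forces callers
// to acknowledge the empty case rather than risk a NullPointerException.
final class Slot<T> {
    private T value; // may be null until something is stored

    void put(final T v) {
        this.value = v;
    }

    Optional<T> get() {
        return Optional.ofNullable(value);
    }
}
```

Callers then write `slot.get().ifPresent(...)` or `slot.get().orElse(fallback)`, and the possibility of absence is visible in the signature.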
True, but accusing him of a violation privately, to him, is not defamation. Accusing him publicly is still probably not defamation. Whether he violated certain terms of service is something that would ultimately need to be decided by a court, so them believing he did so isn't defamation even if they're wrong.
In normal programming, functions "return" their values. In Continuation Passing Style (CPS), functions never return. Instead, they take another function as input; and instead of returning, they call that function (the "continuation"). Instead of returning their output, they pass their output as input to the continuation.
(Tail-call optimization is applied so that this style of call, the "tail call", does not cause the stack to grow without bound.)
Why would you write code in this style? Generally, you wouldn't. It's typically used as an internal transformation in some types of interpreters or compilers. But conceptualizing control flow in this way has certain advantages.
Then there are terms like the "continuation" of a program at a certain point in the code, which just means "whatever the program is going to do next, after it returns (or would return) from the code it's about to execute". That's what "call with current continuation" (call/cc) is about. It captures (or reifies) "what will the program do next after this?" as a function that can be called to, well, do that thing. If your code is about to call `f();`, then the continuation at that point is whatever the code will do next after `f()` returns with its return value.
Thus if you had some code `g(f())`, then the continuation just as you call `f()` is to call `g()`. CPS restructures this so that `f()` takes the "thing to do next" as input, which is `g()` in this case. The CPS transformation of this code would be `f(g)`, where `g` is the continuation that `f` will invoke when it's done. Instead of returning a value, `f` invokes `g` passing that value as input.
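Here's a tiny illustration of that `g(f())` transformation in Java (the names and values are mine, purely for illustration), using a lambda as the continuation:

```java
import java.util.function.IntConsumer;

// Hypothetical sketch of the g(f()) example in both styles.
final class CpsDemo {
    // Direct style: f returns a value, g consumes it.
    static int fDirect() {
        return 41;
    }

    static int gDirect(final int x) {
        return x + 1;
    }

    // CPS: f never returns a value; it passes its result to the continuation k.
    static void fCps(final IntConsumer k) {
        k.accept(41);
    }

    public static void main(final String[] args) {
        // Direct style: g(f())
        System.out.println(gDirect(fDirect())); // prints 42

        // CPS: f(g) -- "what happens next" is passed in as a function.
        fCps(x -> System.out.println(x + 1)); // prints 42
    }
}
```

Both do the same work; the CPS version just makes the "and then..." explicit as an argument.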
You can use continuations to implement concepts like coroutines. With continuations, functions never need to "return". It's possible to create structures like two functions where the control flow jumps directly between them, back and forth (almost like "goto", but much more structured than that). Neither one is "calling" the other, per se, because neither one is returning. The control flow jumps between them as appropriate, when one function invokes a continuation that resumes the other. The functions are peers, where each can essentially call into the other's code using continuations.
That's probably a little muddy as a first exposure to continuations, but I'm curious what you think. I generally think of continuations as a niche thing that will likely only be used by language or library implementors. Most languages don't support them.
Also, I'd probably argue that regular asynchronous code is a better way to structure similar program logic in modern programming languages. Or at least, it's likely just as good in most ways that matter, and may be easier to reason about than code that uses continuations.
For example, one use-case for coroutines is a reader paired with a writer. It can be elegant because the reader can wait until it has input, and then invoke the continuation for the writer to do something with it (in a direct, blocking fashion, with no "context switch"). But you can model this with asynchronous tasks pretty easily and clearly too. It might have a little more overhead to handle context switching between the asynchronous tasks, but probably not enough to matter.
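As a rough sketch of that alternative (hypothetical code, using threads and a `SynchronousQueue` rather than continuations): `SynchronousQueue` has no capacity, so each `put()` blocks until the other side calls `take()`, and control effectively bounces between the two tasks at each handoff, loosely mimicking the coroutine pairing described above.

```java
import java.util.concurrent.SynchronousQueue;

// Hypothetical sketch: a writer and reader handing values off directly.
// Each put() blocks until the reader take()s, so the pair alternates in
// lockstep, like a coroutine handoff (but with real context switches).
final class HandoffDemo {
    public static void main(final String[] args) throws InterruptedException {
        final SynchronousQueue<String> channel = new SynchronousQueue<>();

        final Thread writer = new Thread(() -> {
            try {
                channel.put("hello");
                channel.put("world");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.start();

        System.out.println(channel.take()); // prints hello
        System.out.println(channel.take()); // prints world
        writer.join();
    }
}
```

The thread version pays for OS-level context switches where continuations would just jump, but the structure of the program is essentially the same.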
Could that also be because he reviewed the papers first and made sure they were in a suitable state to publish? Or you think it really was just the name alone, and if you had published without him they would not have been accepted?
He only skimmed them; scientists at his level are more like a CEO than the stereotype of a scientist, with multiple large labs, startups, and speaking engagements every few days. He trusted me to make sure the papers were good, and they were, but his name made the difference between getting into a good journal in the field and a top “high impact” journal that usually doesn't consider the topic area popular enough to accept papers on, regardless of the quality or content of the paper. At some level, high impact journals are a popularity contest: to maintain the high citation rate, they only publish from people in large popular fields, since having more peers means more citations.
I think they're saying that you should provide some example use-cases for how someone would use your service. High-level use-cases that involve solving problems for a business.
For what it's worth, I am already familiar with this design space well enough that I don't need this kind of example in order to understand it. I've worked with Kinesis and other streaming systems before. But for people who haven't, an example might help.
What kind of business problem would someone have that causes them to turn to your service? What are the alternative solutions they might consider and how do those compare to yours? That's the kind of info they're asking for. You might benefit from pitching this such that people will understand it who have never considered streaming solutions before and don't understand the benefits. Pitch it to people who don't even realize they need this.
(Founder) Yes, I understand, and this could definitely do with work. I struggle with it personally because it is so obvious to me. I don't even know where to start. How do you pitch use cases for object storage? Stream storage feels just as universal to me.
From context, I would infer that this means they are not changing the Java language itself. It’s a feature expressed through the existing language, as a library feature. I could be wrong though.
Yes you're correct, the idea is that they're not adding any keywords or bytecodes, just some new standard lib APIs that will get some special treatment by the JVM.
The single best metric I've found for scaling things like this is the percent of concurrent capacity that's in use. I wrote about this in a previous HN comment: https://news.ycombinator.com/item?id=41277046
Scaling on things like the length of the queue doesn't work very well at all in practice. A queue length of 100 might be horribly long in some workloads and insignificant in others, so scaling on queue length requires a lot of tuning that must be adjusted over time as the workload changes. Scaling based on percent of concurrent capacity can work for most workloads, and tends to remain stable over time even as workloads change.
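For a concrete sketch of what scaling on utilization looks like (the numbers, names, and the 70% target here are hypothetical, just to illustrate the idea):

```java
// Hypothetical sketch: compute a fleet size from concurrency utilization.
// If each worker can run `perWorker` tasks at once, and we want utilization
// near `target` (e.g. 0.7), the needed fleet size falls out directly --
// no queue-length threshold to tune per workload.
final class Scaler {
    static int desiredWorkers(final int tasksInFlight, final int perWorker, final double target) {
        final double neededSlots = tasksInFlight / target;   // slots needed to sit at target utilization
        return (int) Math.ceil(neededSlots / perWorker);     // workers providing that many slots
    }

    public static void main(final String[] args) {
        // 140 tasks in flight, 10 concurrent slots per worker, 70% target -> 20 workers.
        System.out.println(desiredWorkers(140, 10, 0.7));
    }
}
```

The nice property is that the target (say, 70%) is a ratio, so it means the same thing whether the workload is 10 tasks or 10,000 — which is why it tends to stay valid as the workload changes.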
Yeah this is why I hate AWS - I did a similar task runner thing and what I ended up doing is just firing up a small controller instance which manually creates and destroys instances based on demand, and schedules work on them by ssh-ing into the running instances, and piping the logs to a db.
I did read up on the 'proper' solution and it made my head spin.
You're supposed to use AWS Batch, creating instances with autoscaling groups, pipe the logs to CloudWatch, and serve them on the frontend, etc.
The number of new concepts I'd have to master is staggering, and I'd have no control over them if they went wrong, except to chase after internet erudites and spend weeks talking to AWS support.
And there are the little things, like CloudWatch logs costing around $0.50/GB, while an EBS block volume costs around $0.08/GB, with S3 being even cheaper than that.
If I go full AWS word salad, I'm pretty sure even the most wizened AWS sages would have no idea what my bills would look like.
Yeah, my solution is shit and I'm a filthy subhuman, but at least I know how every part of my code works, the amount of code I had to write is no more than double what the AWS route would have needed, and I probably saved a lot of time debugging proprietary infra.