Is the non-trivial amount of time significantly less than what it would take you to ramp up yourself?
I am still hesitant to use AI to solve problems for me. Either it hallucinates and misleads me, or it does a great job and I worry that my ability to reason through complex problems with rigor will degenerate. Once my ability to solve complex problems has degenerated, my patience diminished, and my attention span been destroyed, I will be reliant on a service that other entities own just to perform in my daily life. Genuine question - are people comfortable with this?
The ramp-up time with AI is absolutely lower than trying to ramp up without AI.
My comment is specifically in contrast to working in a codebase where I'm at "max AI productivity". In a new codebase, it just takes a bit of time to work out kinks and figure out tendencies of the LLMs in those codebases. It's not that I'm slower than I'd be without AI, I'm just not at my "usual" AI-driven productivity levels.
You use it when you know how to do something and know exactly what the solution looks like, but can't be arsed to do it. Like most UI work, where you just want something in there with the basic framework to update content, etc. There's nothing challenging about doing it, you know what has to be done, but figuring out the weird-ass React footguns takes time. Most LLMs can one-shot it with enough information.
You can also use it as a rubber duck, ask it to analyse some code, read and see if you agree. Ask for improvements or modifications, read and see if you agree.
>Genuine question - are people comfortable with this?
It's a question of degree, but in general, yeah. I'm totally comfortable being reliant on other entities to solve complex problems for me.
That's how economies work [1]. I neither have nor want to acquire the lifetime of experience I would need to learn how to produce the tea leaves in my tea, or the clean potable water in it, or the mug they are contained within, or the concrete walls 50 meters up from ground level I am surrounded by, or so on and so forth. I can live a better life by outsourcing the need for this specialized knowledge to other people, and trade with them in exchange for my own increasingly-specialized knowledge. Even if I had 100 lifetimes to spend, and not the 1 I actually have, I would probably want to put most of them to things that, you know, aren't already solved-enough problems.
Everyone doing anything interesting works like this, with vanishingly few exceptions. My dad doesn't need to know how to do algebra to get his taxes done, he just has an accountant. And his accountant doesn't need to know how to rewire his turn-of-the-century New England home. And if you look at the exceptions, like that really cute 'self-sufficient' family who uploads weekly YouTube videos called "Our Homestead Life"... it often turns out that the revenue from that YouTube stream is nontrivial to keeping the whole operation running. In other words, even if they genuinely no longer go to Costco, it's kind of a gyp.
> My dad doesn't need to know how to do algebra to get his taxes done, he just has an accountant.
This is not quite the same thing. The AI is not perfect; it frequently makes mistakes or writes suboptimal code. As a software engineer, you are responsible for finding and fixing those. This means you have to review and fully understand everything that the AI has written.
Quite a different situation than your dad and his accountant.
I see your point. I don't think it's different in kind, just degree. My thought process: First, is my dad's accountant infallible?
If not, then they must themselves make mistakes or do things suboptimally sometimes. Whose responsibility is that - my dad, or my dad's accountant?
If it is my dad, does that then mean my dad has an obligation to review and fully understand everything the accountant has written?
And do we have to generalize that responsibility to everything and everyone my dad has to hand off work to in order to get something done? Clearly not, that's absurd. So where do we draw the line? You draw it in the same place I do for right now, but I don't see why we expect that line to be static.
But there's no way one is giving as thorough a review as if one had written the code to solve the problem oneself. Writing is understanding. You're trading thoroughness and integrity for chance.
Writing code should never have been a bottleneck. And since it wasn't, any massive gains are due to being OK with trusting the AI.
I would honestly say it's more like autocomplete on steroids: you know what you want, you just don't wanna type it out (e.g. scripts and such).
And so if you don't use it, then someone else will... But as for the models, we already have some pretty good open-source ones like Qwen, and they'll only get better from here, so I'm not sure why the last part would be a dealbreaker.
> I’m not totally sure if this is a GOOD idea to add to the c++ standard
What are the downsides? Naively, it seems like a good idea to both provide a coroutine spec (for power users) and a default task type & default executor.
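For reference, here's roughly what every codebase has to hand-roll today in the absence of a standard task type - a minimal sketch, not production code (error propagation elided, unhandled exceptions just terminate):

    #include <coroutine>
    #include <exception>
    #include <utility>

    // Minimal awaitable task: C++20 provides the coroutine machinery
    // but no vocabulary type, so something like this gets reinvented
    // in every codebase.
    struct task {
        struct promise_type {
            std::coroutine_handle<> continuation;  // resumed when this task finishes

            task get_return_object() {
                return task{std::coroutine_handle<promise_type>::from_promise(*this)};
            }
            std::suspend_always initial_suspend() noexcept { return {}; }  // lazy start
            struct final_awaiter {
                bool await_ready() noexcept { return false; }
                std::coroutine_handle<> await_suspend(
                        std::coroutine_handle<promise_type> h) noexcept {
                    // Symmetric transfer back to whoever awaited us (if anyone).
                    auto cont = h.promise().continuation;
                    return cont ? cont : std::noop_coroutine();
                }
                void await_resume() noexcept {}
            };
            final_awaiter final_suspend() noexcept { return {}; }
            void return_void() {}
            void unhandled_exception() { std::terminate(); }  // sketch: no error channel
        };

        explicit task(std::coroutine_handle<promise_type> h) : handle(h) {}
        task(task&& other) noexcept : handle(std::exchange(other.handle, {})) {}
        ~task() { if (handle) handle.destroy(); }

        // Awaiting a task records the awaiter as the continuation, then starts it.
        bool await_ready() const noexcept { return false; }
        std::coroutine_handle<> await_suspend(std::coroutine_handle<> awaiting) {
            handle.promise().continuation = awaiting;
            return handle;
        }
        void await_resume() {}

        std::coroutine_handle<promise_type> handle;
    };

Getting the lifetime and symmetric-transfer details right is exactly the part power users argue about, which is presumably why the committee has been slow to bless one default.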
well, Rust didn't do the same thing for a reason. Rust lets you pick and choose which async runtime to use (even though everyone has decided to use Tokio anyway). This is good because it allows for alternative async runtimes like Embassy (https://embassy.dev/), and it also doesn't freeze the API into something that can't change. It could well turn out that people find a new style of async that works better than std::execution.
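For what it's worth, std::execution does keep the scheduler pluggable. A minimal sketch against the stdexec reference implementation (the spellings below assume its current API):

    #include <stdexec/execution.hpp>
    #include <exec/static_thread_pool.hpp>

    int main() {
        exec::static_thread_pool pool(4);   // one possible scheduler
        auto sched = pool.get_scheduler();

        // The pipeline describes *what* to run; the scheduler decides *where*.
        auto work = stdexec::schedule(sched)
                  | stdexec::then([] { return 6 * 7; });

        auto [answer] = stdexec::sync_wait(std::move(work)).value();
        // Swapping in a different scheduler (inline, io_uring, GPU) changes
        // where the work runs without touching the pipeline itself.
        (void)answer;
    }

So the lock-in worry is less about one blessed executor and more about whether the sender/receiver model itself ages well.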
I don't know how it works for C++, but you're not locked into a single implementation with how C# does it. You can have it use different executors/schedulers, different task types, etc.
I am not a native speaker, and I joke that my typos and grammar mistakes are evidence that none of my code or posts are AI-generated.
Sorry about the typos. I just fixed all the ones I could find. Hope it's better now.
Neat. It certainly makes on-call and maintenance easier! It is likely more resource-efficient too, e.g. by minimizing idle compute, maximizing cache hit rate, etc.
Love it! I wonder if the team knew this explicitly or intuitively when they deployed the strategy.
> We created a rule in our central monitoring and alerting system to randomly kill a few instances every 15 minutes. Every killed instance would be replaced with a healthy, fresh one.
It doesn't look like they worked out the numbers ahead of time.
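They're easy enough to work out, though: if the rule kills k random instances out of N every 15 minutes (k and N being whatever they configured), each instance survives a tick with probability 1 - k/N, so its lifetime is geometric with mean

    E[\text{lifetime}] = \frac{15\,\text{min}}{k/N} = \frac{15N}{k}\ \text{minutes}

e.g. N = 100 instances and k = 2 kills per tick gives a mean lifetime of 750 minutes, i.e. 12.5 hours.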
I should say, though, that if you're on Windows then I have yet to find a real workload where SRWLock isn't the fastest (provided you're fine with no recursion and with a lock that is word-sized). That lock has made some kind of deal with the devil AFAICT.
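For reference, the whole API surface is tiny (a sketch; the shared state is whatever your code protects):

    #include <windows.h>

    SRWLOCK lock = SRWLOCK_INIT;  // word-sized, statically initializable

    void writer() {
        AcquireSRWLockExclusive(&lock);
        // ... mutate shared state ...
        ReleaseSRWLockExclusive(&lock);
    }

    void reader() {
        AcquireSRWLockShared(&lock);   // many readers may hold it at once
        // ... read shared state ...
        ReleaseSRWLockShared(&lock);
    }

No recursion, no timed acquire, nothing to destroy - which is presumably part of why it's so fast.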
Math, sure - but doesn't our understanding of physics also go through changes? Do we really understand the reality of the world, and how do we know our current understanding of it won't change?
Asimov wrote an essay called "The Relativity of Wrong" that I think does a good job of capturing the changes our understanding of the world goes through.
Yes, Einstein's theory of relativity was a change from Newtonian physics, but it's a fairly minor correction for most practical purposes, and Newtonian physics is still important to know and understand.
So yeah, our understanding of physics will likely change, but it'll only matter in more and more extreme edge cases and will likely build on our current understanding. Maybe it'll result in us finally having fusion reactors, room-temperature superconductors, or quantum computers, but you're still going to get a roughly parabolic arc when you throw a ball through the air.
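Back-of-the-envelope: for a thrown ball at v ≈ 10 m/s,

    \frac{v^2}{c^2} = \left(\frac{10}{3\times 10^8}\right)^2 \approx 1.1\times 10^{-15}

so the relativistic correction to the Newtonian arc is about a part in 10^15 - far below anything you could measure with a thrown ball.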
I think 20 years for physics won't really make much of an impact. Maybe you build an even bigger particle accelerator and confirm another well-accepted idea. But there aren't really going to be groundbreaking changes that affect people day to day.
The same happens in the workplace as well, when multiple people have similar ideas and it is unclear or impossible to credit whoever came up with the idea first. People with shared context often arrive at similar ideas independently when facing the same set of problems.
I have the impression that it's also controversial who invented the hybrid logical clock first, although most people cite the Kulkarni paper from 2015, I think?
Does anyone know who else claims to have invented HLC first?
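For anyone who hasn't read it: the algorithm itself is compact. A sketch of the update rules from the Kulkarni et al. paper (physical_now() is a stand-in for reading the local wall clock):

    #include <algorithm>
    #include <cstdint>

    // Hybrid logical clock: l tracks the max physical time seen,
    // c breaks ties among events sharing the same l.
    struct HLC {
        int64_t l = 0;
        int64_t c = 0;

        // Local or send event.
        void tick(int64_t pt) {                 // pt = physical_now()
            int64_t old_l = l;
            l = std::max(old_l, pt);
            c = (l == old_l) ? c + 1 : 0;
        }

        // Receive event: merge the sender's timestamp (ml, mc).
        void recv(int64_t ml, int64_t mc, int64_t pt) {
            int64_t old_l = l;
            l = std::max({old_l, ml, pt});
            if (l == old_l && l == ml)  c = std::max(c, mc) + 1;
            else if (l == old_l)        c = c + 1;
            else if (l == ml)           c = mc + 1;
            else                        c = 0;
        }
    };

Timestamps (l, c) compare lexicographically, stay close to physical time, and still respect causality - which is the whole trick.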
The biggest difference between automatically keeping an MV up to date and keeping indices up to date is that the write amplification of the latter is a function of how many indexes you have, while the former is a function of the data and the query. It's easy to come up with cases where a user updates a single DB row and you end up having to update millions of rows in the MV (e.g. every row in the MV has the name "Rob", and Rob changes his name).
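A toy model of that fan-out (hypothetical schema: users(id, name), orders(order_id, user_id), and an MV materializing their join):

    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    int main() {
        std::unordered_map<int, std::string> users = {{1, "Rob"}};
        std::vector<std::pair<int, int>> orders;   // (order_id, user_id)
        std::unordered_map<int, std::string> mv;   // order_id -> user_name

        for (int i = 0; i < 1'000'000; ++i) {      // Rob has a million orders
            orders.emplace_back(i, 1);
            mv[i] = users[1];                      // initial materialization
        }

        users[1] = "Robert";                       // one base-row update...

        std::size_t touched = 0;
        for (auto& [order_id, user_id] : orders)   // ...fans out through the join
            if (user_id == 1) { mv[order_id] = users[1]; ++touched; }

        std::printf("MV rows rewritten for one base-row update: %zu\n", touched);
    }

The amplification is proportional to the number of matching MV rows - a property of the data and the query that no choice of index count can bound.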
I read up on timely dataflow, which underpins materialize.com. It seems like we don't necessarily need the support for loops that timely dataflow offers for regular SQL, which is a DAG of operators. It appears that as long as the database supports snapshot reads, one can use push-based query execution to enable incremental MV updates (sketched below). The problem, I think, is still the write-amp-as-a-function-of-data, which is unbounded. It is very cool regardless.
The technique can be used for cache invalidation as well, given that the cached data can be described in SQL, which seems reasonable.
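To make "push-based" concrete, a minimal sketch (hypothetical view: SUM(amount) per region over events with amount > 0; each base-table change flows through the operators as a signed delta instead of re-running the query):

    #include <cstdio>
    #include <string>
    #include <unordered_map>

    struct Delta { std::string region; int amount; int sign; };  // sign: +1 insert, -1 delete

    std::unordered_map<std::string, long> mv;  // region -> running SUM(amount)

    void push(const Delta& d) {
        if (d.amount <= 0) return;              // filter operator
        mv[d.region] += d.sign * d.amount;      // incremental aggregate
    }

    int main() {
        push({"us", 5, +1});
        push({"us", 3, +1});
        push({"eu", 7, +1});
        push({"us", 5, -1});                    // retraction: base row deleted
        for (auto& [region, sum] : mv)
            std::printf("%s -> %ld\n", region.c_str(), sum);  // us -> 3, eu -> 7
    }

Cache invalidation is then just the degenerate case where the "view" is the cached value and a pushed delta marks it dirty or patches it in place.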
Apple claims that its Weather app has integrated Dark Sky's (DS's) features. However, it has been ridiculously wrong a few times. It said it was cloudy while it was raining onto the very phone, which I don't remember ever happening with DS. The forecast for the next hour feels about as accurate as a forecast for next month. I live in New England. I am going to miss DS a lot.