Which ISO certification matters, but the key thing to be aware of is that the certification's primary value to customers is that your processes are documented and deviations are tracked, so customers can check whether the processes make sense before signing a contract. It's important not to expect the certification itself to guarantee quality.
When we visited Tokyo last year, what stopped us from even trying was that the online information we came across was unclear and suggested we could only get the physical cards at the airport and at some tourist office, and we forgot to look for it at the airport... I don't know if that is correct or not, but compare Oyster in London, which is advertised at practically every corner store: even if you get into town not knowing the system, it's hard not to find somewhere you can get a card (or you can just use contactless - I haven't had an Oyster card in years).
The UK is completely chaotic ticket-wise on a national level, though.
I wish we'd known that ahead of time... It looks like the only difference is the deposit and expiry? Which seems like it makes the tourist-only version pretty much pointless.
> we could only get the physical cards at the airport and at some tourist office, and we forgot to look for it at the airport
A little over a decade ago I did exactly the same. I ended up buying a Suica card at Ueno station from a clerk, which was a bit of an adventure since she was eager to help but barely spoke any English and I barely spoke any Japanese. Together we skillfully massacred both languages with an ad-hoc pidgin and lots of gesturing. Due to an issue with my wireless hotspot I only had an old-school phrasebook at my disposal, which was about as helpful as the infamous Monty Python sketch implies. The airport seemed much more convenient as a tourist since everyone there at the very least spoke basic English. At the time it was certainly possible to get a Suica card at a major train station, though admittedly not easy.
We got IC cards (ICOCA) in Osaka for 500 Yen each, and used them for 2 weeks travelling across Japan this March. Worked like a charm; the only thing that's annoying for us tourists is that it's a stored-value card and needs to be topped up. I think we still had about 500 Yen on our cards when we departed, even though we bought a lot of stuff with them in the last few days.
While we got ours at the Osaka airport (KIX), I am sure I saw the "purchase a new SUICA/ICOCA" options at a few terminals while topping up. I suppose you mixed up the "Welcome to SUICA" tourist card (available at fewer locations) with the normal one? I was under the impression there was a lot of confusing information floating around online.
But I agree, public transport in London is - as a tourist - more straightforward. Just a matter of spotting the terminals at some stations IIRC. OTOH in Japan we found no station with an elevator smelling like someone used a hippie bus as an emergency toilet ;-)
Even the lowest density US states have most of the population in corridors or areas with sufficient density.
E.g. Montana used to have passenger rail through the most densely populated southern part of the state. That region has comparable density to regions of Norway that have regular rail service. (There are efforts to restart passenger service in southern Montana.)
And it's not like places like Norway have rail everywhere either - the lower threshold for density where rail is considered viable is just far lower.
The actual proportion of the US population that lives in areas with too low density to support rail is really tiny.
That sounds like a solution looking for a problem, though. I see plenty of arguments against throwing critical safety systems that are in charge of people's lives into an LLM "just in case the result is better than what the current battle-hardened systems already provide".
To properly test an LLM-based emergency system against the current as-is system, there needs to be a way of verifying whether an LLM-detected emergency is classed as an emergency by the current system. If this information were publicly available, it could enable bad actors to do things like stress-test the EMP tolerance of the current systems, or probe what level of malware infiltration gets detected.
The article is not about practical measurements at all. Doing it manually has nothing to do with it. It is explicitly about why the measured length depends on the precision you choose to measure with.
The article spends a lot of time acting like this is some intractable problem and that its intractability is the reason there is so much discrepancy between countries and agencies.
Yeah, I mostly stopped checking hardware compatibility for Linux ~10 years ago. Every now and again there's an issue, but it's usually easy to work around, or I wait a little bit and it's resolved. When it got to the point that I felt I didn't need to check any more, it was a big deal.
I had an RX 5700XT at launch, that was about the most painful... but 6mo later it worked fine... But by then I did switch back to Windows because I couldn't deal with the day to day issues... A year later, I went back to Linux and haven't looked back though.
There were a couple of PhD theses at ETH Zurich in the 90s on optimizations for Oberon, as well as SSA support. I haven't looked at your language yet, but depending on how advanced your compiler is, and how similar to Oberon, they might be worth looking up.
I'm only aware of Brandis's thesis, which did optimizations on a subset of Oberon for the PPC architecture. There was also a JIT compiler, but not a particularly optimizing one. OP2 was the prevalent compiler and continued to be extended and used for AOS, and it wasn't optimizing. To really assess whether a given language can achieve higher performance than other languages due to its special design features, we should actually implement it on the same optimizing infrastructure as the other languages (e.g. LLVM), so that both implementations have the same chance to get the maximum possible benefit. Otherwise there are always alternative explanations for performance differences.
It might have been Brandis' thesis I was primarily thinking about. Of the PhD theses at ETH Zurich on Oberon, I'm also a big fan of Michael Franz' thesis on Semantic Dictionary Encoding, but that only touched on optimization potential as a side note. I'm certain there was at least one other paper on optimization, but it might not have been a PhD thesis...
I get the motivation for wanting to use LLVM, but personally I don't like it (and have the luxury of ignoring it since I only do compilers as a hobby...) and prefer to aim for self-hosting whenever I work on a language. But LLVM is of course a perfectly fine choice if your goal doesn't include self-hosting - you get a lot for free.
I don’t like LLVM either, because its size and complexity are simply spiraling out of control, and especially because I consider the IR to be a total design failure. If I use LLVM at all, it would be version 4.0.1 or 3.4 at most. But it is the standard, especially if you want to run tests related to the question the fellow asked above. The alternative would be to build a frontend for GCC, but that is no less complex or time-consuming (and ultimately, you’re still dependent on binutils). However, C on LLVM or GCC should probably be considered the “upper bound” when it comes to how well a program can be optimized, and thus the benchmark for any performance measurement.
> However, C on LLVM or GCC should probably be considered the “upper bound” when it comes to how well a program can be optimized, and thus the benchmark for any performance measurement.
Is it? Isn't it rather the case that C is too low level to express intent and (hence) offer room to optimize? I would expect that a language in which, e.g. matrix multiplication can be natively expressed, could be compiled to more efficient code for such.
I would rather expect that for compilers which don't optimize well, C is the easiest language to produce fairly efficient code for (well, perhaps BCPL would be even easier, but nobody wants to use that these days).
> I would expect that a language in which, e.g. matrix multiplication can be natively expressed, could be compiled to more efficient code for such.
That's exactly the question we would hope to answer with such an experiment. Given that your language received sufficient investments to implement an optimal LLVM adaptation (as C did), we would then expect your language to be significantly faster on a benchmark heavily depending on matrix multiplication. If not, this would mean that the optimizer can get away with any language and the specific language design features have little impact on performance (and we can use them without performance worries).
Rochus, your point about LLVM and the 'upper bound' of C optimization is a bit of a bitter pill for systems engineers. In my own work, I often hit that wall where I'm trying to express high-level data intent (like vector similarity semantics) but end up fighting the optimizer because it can't prove enough about memory aliasing or data alignment to stay efficient.
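A minimal sketch of that aliasing wall, assuming plain C (the function names are made up for illustration): without `restrict` the compiler has to allow that `out` overlaps `a` or `b`, so it generates conservative loads and stores; with `restrict` the caller promises no overlap, which is what typically lets the loop stay in registers and vectorize.

```c
#include <stddef.h>

/* The compiler must assume out may alias a or b:
   every store to out[i] can invalidate a[i+1] or b[i+1],
   forcing reloads and blocking vectorization. */
void add_may_alias(float *out, const float *a,
                   const float *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* restrict: the caller guarantees the three arrays don't overlap,
   so the optimizer is free to reorder and vectorize the loop. */
void add_no_alias(float *restrict out, const float *restrict a,
                  const float *restrict b, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}
```

In practice modern compilers often emit a runtime overlap check and a vectorized fast path even without `restrict`, but the general point stands: the less the optimizer can prove about your data, the more conservative the generated code has to be.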
I agree with guenthert that higher-level intent should theoretically allow for better optimization, but as you said, without the decades of investment that went into the C backends, it's a David vs. Goliath situation.
The 'spiraling complexity' of LLVM you mentioned is exactly why some of us are looking back at leaner designs. For high-density data tasks (like the 5.2M documents in 240MB I'm handling), I'd almost prefer a language that gives me more predictable, transparent control over the machine than one that relies on a million-line optimizer to 'guess' what I'm trying to do. It feels like we are at a crossroads between 'massive compilers' and 'predictable languages' again.
When you call LLVM IR a design failure, do you mean its semantic model (e.g., memory/UB), or its role as a cross-language contract? Is there a specific IR property that prevents clean mapping from Oberon?
Several historical design choices within the IR itself have created immense complexity, leading to unsound optimizations and severe compile-time bloat. It's not high-level enough that you e.g. don't have to care about ABI details, and it's not low-level enough to actually take care of those ABI details in a decent way. And it's a continuous moving target: you cannot implement something which then continues to work.
To be fair they also kind of share that opinion, hence why MLIR came to be, first only for AI, nowadays for everything, even C is going to get its own MLIR (ongoing effort).
There are at least two projects I'm aware of, but I don't think they are ready yet to make serious measurements or to make optimal use of LLVM (which is just too big and complex for most people).
I hit the limits on the lower tiers of Codex just as fast as with Claude. At the moment I'm cycling between Claude, Codex, GLM5.1, and Kimi. The latter two are getting good enough, though, that I can make things go really far by doing planning with Opus and then switching to one of the cheap models for execution.
I'd say JS Bach was one of the fruits of our labor, so were Newton, Einstein and van Gogh.
Olympic athletes are a combination of luck in the genetics department and a lot of effort, but ultimately that combination does not seem to be sufficient to help the athletes themselves.