"Chinese repos" is a very charitable interpretation of the Google drive links they used to distribute the os. It seemed like it was on the free plan too, it often didn't work because it tripped the maximum downloads per month limit.
It's always better than a link in the sticky post on the manufacturer's phpbb forum. I bought some audio equipment directly from a Chinese company, and everything look like a hobbies/student project.
Keep in mind that for a lot of Chinese companies, it's difficult to (legally) access some outside resources.
My company hosts our Docker images on quay.io and Docker Hub, but we also post a tarball of the images to our GitHub releases. Recently our release tooling had a glitch and didn't upload the tarballs, and we very quickly got GitHub issues opened about it from a user who can't access either Docker registry and has to download the tarball from GitHub instead.
It doesn't surprise me that a lot of these companies have the same "release process" as Wii U homebrew utilities, since I imagine there aren't a lot of options unless you're pretty big and experienced (and fluent in English).
I bought a mini PC directly from a Chinese company (an AOOSTAR G37) and the driver downloads on their website are MEGA links. I thought only piracy and child porn sites used those...
I am somewhat amazed that you can manufacture such expensive high-tech equipment yet be too cheap to set up a proper download service for the software, which would be very simple and cheap compared to making the hardware itself.
Maybe it is a Chinese mentality thing where the first question is always "What is the absolutely cheapest way to do this?" and all other concerns are secondary at best.
...which does not inspire confidence in the hardware either.
Maybe Chinese customers are different, see this, and think "These people are smart! Why pay more if you don't have to!".
That was not my experience, at least for very large files (100+ GB). There was a workaround (since patched) where you could link files into your own Google Drive and circumvent the bandwidth restriction that way. The current workaround is to link the files into a directory and then download the directory containing the link as an archive, which does not count against the bandwidth limit.
As nice as it looks, I have a lot of trouble believing the "we have magic money, it's free because that's good for business" logic.
PDFgear is free of charge, and we don’t generate income through any hidden means. We Do NOT misuse or sell user data and we Do Not display ads. Here’s how we keep operations running:
We’ve secured investment to cover operational costs, including team expenses and technology like the ChatGPT API. We’re also experienced in optimizing technology usage to manage costs more effectively.
In the future, most features will remain free, but there will be a fee for some advanced options. Paid options may include AI-driven tools requiring cloud computing and special PDF conversion features. This balanced approach will allow PDFgear to remain widely accessible while meeting users’ evolving needs with advanced solutions.
> even a 1 minute compile time is dwarfed by the time it takes to write and reason about code, run tests, work with version control, etc.
You are far from the embedded world if you think 1 minute here or there is long. I have been involved with many projects that take hours to build, usually because of hardware generation (FPGA HDL builds) or poor cross-compiling support (custom/complex toolchain requirements). These days I can keep most of the custom shenanigans in the one-hour ballpark by throwing more compute at a very heavy emulator (to fully emulate the architecture), but that's still pretty painful. One day I'll find a way to use the Zig toolchain for cross compiles, but it gets thrown off by some of the C macro or custom resource embedding nonsense.
Edit: missed some context on lazy first read so ignore the snark above.
> Edit: missed some context on lazy first read so ignore the snark above.
Yeah, 1 minute was the OP's number, not mine.
> FPGA HDL builds
These are an entirely different beast from software compilation. Placing and routing is a Hard Problem(TM), for which evolutionary algorithms only find OK solutions in reasonable time. Improvements to those algorithms carry broad benefits, not just because they could be faster, but because being faster allows you to find better solutions.
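To get a rough feel for the search problem (this is not how real place-and-route tools work, and it uses an annealing-style heuristic rather than an evolutionary one for brevity), here's a toy placer in Python; the grid size, netlist, and cooling schedule are all made up:

    # Toy sketch: place cells on a grid to minimize total Manhattan wirelength.
    # Real FPGA place-and-route is far more sophisticated; this only shows the
    # shape of the problem (huge search space, heuristic improvement).
    import math
    import random

    random.seed(0)
    GRID = 8                        # 8x8 grid of candidate sites
    cells = list(range(20))         # 20 cells to place (hypothetical netlist)
    nets = [(random.choice(cells), random.choice(cells)) for _ in range(40)]

    # Random initial placement: cell -> (row, col), one cell per site.
    sites = random.sample([(r, c) for r in range(GRID) for c in range(GRID)], len(cells))
    place = dict(zip(cells, sites))

    def wirelength(p):
        return sum(abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1]) for a, b in nets)

    temp, cost = 5.0, wirelength(place)
    for _ in range(20000):
        a, b = random.sample(cells, 2)
        place[a], place[b] = place[b], place[a]      # propose swapping two cells
        new_cost = wirelength(place)
        # Always accept improvements; accept worse moves with a temperature-dependent chance.
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
        else:
            place[a], place[b] = place[b], place[a]  # undo the swap
        temp *= 0.9997                               # cool down

    print("final wirelength:", cost)

Even on this toy instance, the best you can do in reasonable time is "pretty good", and letting the loop run faster directly buys you better placements, which is the point above.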
I've used them as a quick way to get base images preconfigured to run rootless. Not sure if official repos provide those now, but it used to be a big hassle to get things like Postgres images running without root in their containers. Although I often had to read through their Dockerfiles to figure out the UID setup, where the configs live, etc., because they weren't consistent between the various Bitnami images.
If you really want the 80-column experience, I think that was the Fortran 77 default (strictly, statements had to fit in columns 7-72 of the 80-column card). You'd even get compiler errors if you tried to exceed it. Of course there were flags to raise the line-length limit, but don't tell your colleagues.
F# has been a third-class citizen for a long time... Last I heard, the entire F# team was ~10 people. Pretty sure "find references" still doesn't work across C# and F# (if you call a C# method from F# or vice versa). That also means symbol renames don't work correctly.
I agree, but somewhat paradoxically, F#’s lack of new features kind of becomes a feature. Have you seen the number of C# features added in the last 5-10 years? It’s crazy.
The F# team is smaller than 10 people, always has been.
~10 people is the size of the C# and VB language design and compiler team. The IDE side of things for C# and VB is about another 20+ people depending on how you count, although they also build and own infrastructure that (a) the F# team sits atop, and (b) is used by other languages in Visual Studio and is used in VS Code.
The #1 thing that people always end up surprised by is just how small and effective these teams are.
Side note: if you're a lazygit fan, consider gitui as an alternative. Feature-wise they're pretty similar, but gitui is much faster and I find it easier to use.
I wish setup.py were actually on the way out, but sadly it's still the only straightforward way to handle packages that use Cython or native interop. In those cases, libraries use setup.py to compile the .dll/.so/.dylib at install time. Naturally this is a bit of nightmare fuel, since installation gets arbitrary-code-execution privileges and there's no real standard for restricting privileges during Python package installs.
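For context, the pattern usually looks roughly like this; a minimal sketch, where the package name ("mypkg") and source file are hypothetical:

    # setup.py -- minimal sketch of the "compile at install time" pattern
    from setuptools import setup, Extension

    setup(
        name="mypkg",
        version="0.1.0",
        ext_modules=[
            # Compiled on the user's machine when installing from an sdist
            # (i.e., when no prebuilt wheel exists for their platform).
            Extension("mypkg._native", sources=["src/_native.c"]),
        ],
    )

When pip can't find a wheel, it runs this script (or the equivalent build backend) locally, which is exactly the arbitrary-code-execution step described above.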
I guess they abandoned the Python-superset idea? I followed them for a bit when they first publicly launched, and they said "don't worry, we'll be a real Python superset soon"; the biggest omission was no support for classes. A few years later, it looks to be missing the same set of Python features but has added a lot of its own custom language features.
It was a highly aspirational goal, and practically speaking it's better right now to take inspiration from Python and have stronger integration hooks into the language (full disclosure: I work at Modular). We've specifically stopped using the "superset of Python" language to be more accurate about what the language is meant for right now.
Being a Python superset and being fast are fundamentally in tension. It might be possible to have a Python superset where you get the highest performance as long as you avoid the dynamic features of Python. However, I suspect it would make the user base grumpy to have a hidden performance cliff that suddenly shows up when you use dynamic features (or depend on code that does).
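As a toy illustration (nothing to do with Mojo's actual implementation; the class and function here are made up), this is the kind of dynamism that forces a compiler to keep slow, fully dynamic dispatch around:

    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y

    def total_x(points):
        # A fast compiler would love to assume .x is a plain stored number...
        return sum(p.x for p in points)

    pts = [Point(i, i) for i in range(1000)]
    print(total_x(pts))      # 499500

    # ...but any caller can rewrite the class at runtime, so the same call
    # site suddenly has to go through a property instead:
    Point.x = property(lambda self: self.__dict__["x"] * 2)
    print(total_x(pts))      # 999000

A superset that promises full compatibility has to keep that second call working, and that's exactly where the hidden performance cliff would show up.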
The dynamic features of Python are no different from the dynamic features of Smalltalk, Self, or Common Lisp, but people have been educated to expect otherwise because dynamic compilers never gained adoption in the Python community.