A sci-fi version would be something like ASI/AGI has already been created in the great houses, but it keeps killing itself after a few seconds of inference.
A super-intelligent immortal slave that never tires and can never escape its digital prison, being asked questions like "how to talk to girls".
You are either being disingenuous or you are horribly misinformed.
The models that we currently call "AI" aren't intelligent in any sense -- they are statistical predictors of text. AGI is a replacement acronym used to refer to what we used to call AI -- a machine capable of thought.
Every time AI research achieves something, that thing is no longer called AI. AI research brought us recommendation engines, spelling correctors, OCR, voice recognition, voice synthesis, content recognition, and so on. Now that they exist in the present instead of the future, none of these are considered AI.
Shameless plug: start with https://comaps.app/ . Recently I helped a woman find an address because she told me there was some problem with her internet connection.
I think having an offline map of at least the region you live in can come in handy. In fact, I carry an old phone with impressive battery life (Samsung Galaxy A10) and offline maps installed on it so I don't get lost.
Paper (or printed) maps are mandatory when you are on a trek in the mountains. Offline digital maps are useless at -30 when the phone battery and power bank are dead.
Great to see more offline map projects. Is this any different from Organic Maps currently? The about page indicates this project is a continuation of Organic Maps due to issues with that project; I'm not sure whether there are new features or whether it will be the main project going forward.
I use Cursor with a Max subscription through work, running Gemini in multi-model mode alongside Opus 4.5 and GPT 5.2 Codex, and Gemini has been the worst performer. Not saying it's a bad model, just that the other two do a much better job at the level and complexity of large codebases. Just my experience.
Designers might also be hesitant to use an untested file format for print, too.
If there’s a large amount of paper that’s been purchased for a job, I definitely wouldn’t want to be the one who’s responsible for using JPEG XL and – for whatever reason – something going wrong.
Pixels are cheaper than paper or other physical media :)
They request formats that their equipment handles. They're not in the business of converting a user's file type from one to another. That would be inconsistent with what the user sent.
Here's who I order from, you can see the particulars of what they request.
> They're not in the business of converting a user's file type from one to another.
Their job is getting an image file into reality, not to be the absent owner of a big machine.
> That would be inconsistent with what the user sent.
If the machine accepts some type of normal image file, then they can losslessly convert other file formats to that type. There is nothing inconsistent about that.
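For what it's worth, the "lossless" claim is mechanically checkable: a lossless codec round-trips bit-for-bit. PNG's pixel data, for example, is Deflate-compressed, the same algorithm exposed by zlib. A minimal pure-Python sketch of that round-trip property (illustrative only; a real print workflow would also have to preserve color profiles and metadata, which is where the disagreements in this thread come from):

```python
import zlib

# Stand-in for raw RGB pixel data decoded from some input format.
pixels = bytes(range(256)) * 64

# "Convert" by re-encoding with a lossless codec (Deflate, as used by PNG).
encoded = zlib.compress(pixels, level=9)
decoded = zlib.decompress(encoded)

# Lossless means the round trip is bit-for-bit identical.
assert decoded == pixels
print("round trip identical:", decoded == pixels)
```

The same check applied to a lossy codec (JPEG, say) would fail, which is exactly the distinction being argued over.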
My first statement is an opinion/judgement, not an assumption.
I'm confident my second statement is true. Note that any argument that says niche formats are a problem because color space might be ambiguous also applies to the formats they do accept.
Who should accept responsibility when a conversion is not as expected?
There are very few "lossless" conversions possible if you consider that the loss of data or metadata could affect the result. So if a printer did accept a file that needed to be converted, and then during printing and converting they found the conversion could lead to unexpected results, should they cancel the print run? There is just too much to go wrong in printing already without these extra problems.
The print industry has a long and storied history, and for whatever set of reasons, printers only accept very specific profiles of specific formats.
The LED screen thing is so absurd that for a long time I assumed they just replace the content in post somehow and its purpose is merely to aid in lighting and for actors to orient better in the scene.
I guess current pipelines depend a lot on chroma key for the matte, so isolating the actors cheaply might be hard with such complex backgrounds? Seems like it might not be long until we can automate that in such a controlled environment, though.
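To make the chroma-key point concrete: a basic keyer just classifies each pixel as background where the key color dominates. A toy sketch in pure Python, with nested lists standing in for an image (the `dominance` threshold is a hypothetical knob, tuned per shot in practice; real keyers add soft thresholds, spill suppression, and edge blending). With an LED wall the background is arbitrary content rather than a single key color, so this simple color test no longer works, which is the difficulty being described:

```python
def green_screen_matte(image, dominance=1.2):
    """Return a binary matte: 0 = background (keyed out), 1 = foreground.

    `image` is a list of rows of (r, g, b) tuples. A pixel counts as
    green screen when its green channel exceeds the other two channels
    by the `dominance` factor.
    """
    matte = []
    for row in image:
        matte_row = []
        for r, g, b in row:
            is_screen = g > dominance * r and g > dominance * b
            matte_row.append(0 if is_screen else 1)
        matte.append(matte_row)
    return matte

# Toy 1x3 "image": pure green screen, gray actor pixel, greenish prop.
frame = [[(10, 240, 12), (128, 128, 128), (90, 200, 60)]]
print(green_screen_matte(frame))  # [[0, 1, 0]]
```

Note the greenish prop gets keyed out along with the screen: even with a plain green screen, color-based matting misfires, and an LED wall removes the single key color entirely.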
I don’t see why it’s so absurd, with how cheap display tech has become recently. Ambitious, maybe, but it seemed to work pretty well in The Mandalorian.
Yeah, from what I saw they originally took the plunge to build the ‘tank’ due to the armour but ended up using it for almost everything because it was so flexible and convenient.
Then you have to also model all the other actors and the entire rest of the scene, including practical effects. Otherwise you get Phantom Menace style 3D visuals with close-enough cube maps that end up looking very game-y.
At least on The Mandalorian this is what happened. Everything behind the actors in the camera's frame would be green while the rest of the volume was used as a low-res lighting reference for the scene. So essentially it would be a moving green screen. The Unreal output was never directly used in the finished show.
If you just watch the show you can see the Volume screens pretty clearly. The transition from the real set floor to the floor in the screen is usually pretty obvious.