In one of my work experiences, "titles" were handed out in place of (or alongside meager) paycheck raises. The most ridiculous aspect was that we had a fair number of group leaders, each with a team of 1 (just themselves).
It was based on Slic3r, but I urge you to diff the sources to see how much has been rewritten and extended. Plain Slic3r is too far behind both PrusaSlicer and Cura nowadays.
Back before 2004, QNX could still have been relevant, as there was still a lot of OS experimentation from users and developers themselves. They could have attracted enough people to carve out a niche even in the desktop space at that time.
After BeOS failed, I played with and developed on QNX until they pulled the rug. I was on it full time on my main dev machine. I loved it.
When they closed it I got severely burned, to the point that I will not touch any closed development platform. I see from the license they haven't changed a bit.
Not that it matters anymore... they're largely irrelevant today except for whatever existing markets they already have. It would be foolish to choose QNX today: we now have good alternatives, all of them with open licenses.
I remember being really excited about QNX for a few months after BeOS closed up. It had a cool desktop environment and was mostly posix so it was more or less familiar coming from Be or Linux.
Some months ago I wanted to format/print some documents, and given the tooling I already had, I decided to try the html->pdf route. I fully agree it's a shitshow. The way things break across pages is hard to fix: content gets cut across margins and pages no matter what, even when hand-tuning the html itself (not just working around it with css). I've found Chrome to be "less bad", but still unusable. Column handling is an even bigger joke.
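For what it's worth, the usual CSS knobs for this are the fragmentation hints below (a sketch only; in my experience how well engines honor them varies a lot, which is exactly the problem):

```css
/* Ask the engine not to split these blocks across pages */
figure, table, pre, li { break-inside: avoid; }

/* Keep headings attached to the content that follows them */
h2, h3 { break-after: avoid; }

/* Page geometry for print / html->pdf conversion */
@page { size: A4; margin: 2cm; }
```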
In the end I exported the document to libreoffice, and got something way more usable in a few hours just by editing the styles than whatever I was able to do in days of fiddling with html+browser.
iBooks on Apple might get a pass as it doesn't need to paginate, but truth be told it seems that epubs/ebooks and ereaders in general are being targeted at novels and romance, where form factor, typesetting and formatting don't matter that much.
I have access to ebooks through my local library and there's no way I would use, let alone buy, any technical ebook.
Not to mention, I've seen a steady average decline in the quality of printed media in general over the last ~15 years. A lot less attention is put into typesetting and layout. Even the print quality itself is lower, which I think is due to the smaller, cheaper print runs now being used even for more popular titles.
I thought book quality started going downhill circa 1990.
I am a fan of the old mass market paperbacks. These had a reputation of being low quality books back in the day because they were cheap and not super-durable, but I think they are high quality from a Deming point of view because they are made by a highly repeatable process. Circa 2000 I thought my 1970s paperbacks were in great shape, but by 2010 they were seriously yellowing.
I just looked at my bookshelf and found a '59 James Blish anthology that I bought for 50 cents maybe ten years ago, it is in "poor" condition and will probably crack if I read it without taking great care. Next to that I found a copy of Galbraith's The Affluent Society from 1958 which is perfectly usable except I'd be worried about the cover coming off. A Frank Herbert book from '68 is stained but in great shape other than the cover also being at risk. A '74 Herbert book is a touch discolored but has no problems at all.
(My collection includes not just science fiction of that era but also both self-help and serious books on psychology as well as books about science, politics, social sciences, etc. Government reports about inflation or race relations would be published as mass market paperbacks. You could get Plato and Sartre and Freud and the rest of the Western literary heavyweights)
The construction, materials, process, and such were repeatable enough that they even fail consistently. Not permanent, but 50 years is not bad. They're the right size to go in a purse or the side pocket of a backpack (e.g. part of the loadout of a bibliomaniac who has 12 books in his backpack). I've got to find a good way to reinforce the cover (adhesive tape?).
Those are no longer produced; today it is trade paperbacks. There is wide variation in the dimensions, construction, materials and processes for these. You sometimes find a trade paperback that is beautiful, strongly constructed and printed on acid-free paper. Others you pay $50 for and the binding breaks the first time you lay the book open on the table.
Might depend on the age and/or brand of the tape. I've seen old tape (30+ years maybe) that has yellowed. I have a 15-year-old book at home with some tape and it's okay, except for the part of the tape that wasn't in contact with the book (which has yellowed).
The more likely reality is that we have a lot of v4-only hw in place with a lifespan of 20+ years. Those devices won't go away.
Heck, I work in embedded, and having a dual-stack system is just a PITA to deal with. If v6 had been fully backward-compatible this wouldn't have been something to think about, but you can't drop v4 and there's no future in sight where v6 will be the only choice (we'll have dual-stack for a looooong time), so we just push the problem up the chain.
There are plenty of systems being developed _now_ which are still v4 only as a result.
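To make the dual-stack pain concrete, here's a minimal sketch of the common single-socket workaround: one IPv6 listener that also accepts IPv4 clients as v4-mapped addresses. This is an illustration, not anyone's production code, and it assumes the OS supports v4-mapped addresses at all (OpenBSD doesn't, and the `IPV6_V6ONLY` default varies by platform, which is why it's set explicitly):

```python
import socket

# One AF_INET6 listening socket that also accepts IPv4 clients,
# which show up as v4-mapped addresses (::ffff:a.b.c.d).
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)

# IPV6_V6ONLY=0 enables dual-stack behavior on this socket; the
# default differs between Linux, Windows, and the BSDs.
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)

# Port 0 lets the OS pick an ephemeral port for this example.
s.bind(("::", 0))
s.listen(5)
```

Even this "simple" path leaks v4/v6 details into the application (address parsing, logging, ACLs all see two formats), which is the problem that gets pushed up the chain.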
Totally agree. I'm a little embarrassed by it tbh; to me it feels like a big failure of nerd governance. We should be able to manage this, but I think we're pretty close to having to admit that we can't.
Does anyone have some critical experience with the iPad Pro pen as well?
I'd really like some comments there. There's a lot that goes into writing and drawing, and all the online reviews I've seen seem just to praise it.
I've used most digital writing devices, starting from Wacom tablets (the first Intuos series), to laptops with foldable screens, and currently the rm2/rm3.
I agree that nothing yet has the precision of a real pen or pencil. I can lazily fill and shade even with a Micron fineliner when I want, and I simply can't replicate the same precision with anything else I've tried. I could buy a lifetime supply of the best pens and paper for the cost of the rm3.
Writing is mostly fine, but when drawing I immediately notice the precision just isn't there. Still, at least on the rm (both 2 and pro), the digitizer is well calibrated and the feel is good; the pen is actually like a pen, not the sucky abomination Wacom likes to call a "pen" or the tiny unusable styluses of the Samsung "note" or Lenovo Yoga series. The short distance between tip and display is very good, and even though it seems ridiculous, the slightly shorter one on the rm3 makes a difference. The rm2 still requires a bit too much pressure for my taste (I have a light touch, being used to mechanical pencils, fineliners and tech drawing); the rm3 seems slightly improved.
I can still tell instantly that lines are occasionally wobbly due to the digitizer's grid and pen position.
That being said I got the rm2 at some point, and it's the first e-notebook I actually stuck with because it's effectively "endless paper" and has reached the "good enough" feeling for me. I used to have tons of sheets of paper with notes, now I have somewhat less ;).
I'm a happy owner of the latest rm pro, but I was curious about the boox. You're actually saying the boox is even more muted and darker?
On the rm pro the colors are still what I consider pretty muted. I had a laugh at what "red" looks like when I tried it. I don't care too much about it; for the purpose it's a great addition. But it's also darker than the rm2, which instead turned out to bother me a lot.
I can use the rm2 everywhere, but the rm3 is only saved, in my eyes, by some amount of backlight, which brings it closer, but still not exactly equal to the rm2. And by the way, the rm2 is also far from "white": if I consider the rm2 to be some shade of ivory, the rm3 is downright gray.
Grayscale rendering on the rm2 display is also better. I do notice the dithering on the rm pro, and there's some color fringing in the ghosting.
I use a Boox Note Air Plus 2, a colleague uses a Boox Note Air 3. My device is black and white, hers is colour. The black and white device is excellent; I cannot remember the last time I turned on the backlight. The colour device is so dark that it is unusable without the backlight in normal office conditions. I have not seen it outdoors, however.
I also have the Note Air 2, and really don't feel a need for color at this point. It's my go-to e-reader, I have a little remote and a stand for it so I can just kick back in a chair or in bed and read. I couldn't be happier with it, the battery life is still amazing even after a LOT of use too.
If I need color for some reason I have a phone, tablet, laptop etc that do the job better than e-ink presently can.
It's great for page flips, turning the backlight on and off, opening the top menu. Sadly it doesn't work for turning power off and on, but I suspect that can be fixed.
Yes it can, but not in a way I'd ever want to do... it's very slow and halting. I would stick to using this remote primarily as a way to read books without needing hands on the device, for web browsing I suspect there are better options I'm unaware of.
That matches my experience: outdoors helps a bit, but not much. Overall you get better contrast and brightness, but the fact that the background is "gray" and not white becomes even more obvious.
That being said the rm3 is usable without backlight indoors if the place is decently lit (for example, in most offices), but requires some backlight otherwise. The rm2 is usable also in poorly lit conditions.
Also running FF with strict privacy settings and several blockers. The annoyances are constantly increasing. Cloudflare, captchas, "we think you're a bot", constantly recurring cookie popups and absurd requirements are making me hate most of the websites and services I hit nowadays.
I tried for a long time to get around it, but now when I hit a website like this I just close the tab and don't bother anymore.
Same, but for VPNs (either corporate or personal). Reddit blocks them completely: it requires you to sign in, but even the sign-in page is "network restricted". LinkedIn shows you a captcha but gives an error when submitting the result (several reports online), and overall there are a lot of 403s. All of it magically goes away when turning off the VPN. Companies, especially adtechs like Reddit and LinkedIn, do NOT want you to browse privately, to the point that they'd rather you not use their website at all than use it with a condom.
> Companies, especially adtechs like Reddit and LinkedIn, do NOT want you to browse privately, to the point that they'd rather you not use their website at all than use it with a condom.
That’s true in some cases, I’m sure, but also remember that most site owners deal with lots of tedious abuse. For example, some people get really annoyed about Tor being blocked, but for most sites Tor is a tiny fraction of total traffic and a fairly large percentage of the abuse: probing for vulnerabilities, guessing passwords, spamming contact forms, etc. So while I sympathize with the legitimate users, I also completely understand why a busy site operator is going to flip a switch that makes their log noise go down by a double-digit percentage.
> Reddit blocks it completely, requires you to sign-in but even the sign-in page is "network restricted";
I've been creating accounts every time I need to visit Reddit now to read a thread about [insert subject]. They do not validate E-Mail, so I just use `example@example.com`, whatever random username it suggests, and `example` as a password. I've created at least a thousand accounts at this point.
Malicious Compliance, until they disable this last effort at accessing their content.
Most subreddits worth posting on usually have a minimum account age + minimum account karma. I've found it annoying to register new accounts too often.
I've created a few thousand accounts through a VPN (random node per account). After doing that, I found out Reddit accounts created through VPNs are automatically shadow banned the second time they comment (I think the first is also shadow deleted in some way). But they allow you to browse from a shadow banned account just fine.
I don’t follow the logic here. There seems to be an implication of ulterior motive but I’m not seeing what it is. What aspect of ‘privacy’ offered by a VPN do you think that Reddit / LinkedIn are incentivised to bypass? From a privacy POV, your VPN is doing nothing to them, because your IP address means very little to them from a tracking POV. This is just FUD perpetuated by VPN advertising.
However, the undeniable reality is that accessing the website with a non-residential IP is a very, very strong indicator of sinister behaviour. Anyone that’s been in a position to operate one of these services will tell you that. For every…let’s call them ‘privacy-conscious’ user, there are 10 (or more) nefarious actors that present largely the same way. It’s easy to forget this as a user.
I’m all but certain that if Reddit or LinkedIn could differentiate, they would. But they can’t. That’s kinda the whole point.
Not following what could be sinister about a GET request to a public website.
> From a privacy POV, your VPN is doing nothing to them, because your IP address means very little to them from a tracking POV.
I disagree. (1) Since I have javascript disabled, IP address is generally their next best thing to go on. (2) I don't want to give them IP address to correlate with the other data they have on me, because if they sell that data, now someone else who only has my IP address suddenly can get a bunch of other stuff with it too.
At the very least, they're wasting bandwidth to a (likely) low quality connection.
But anyone making malicious POST requests, like spamming chatGPT comments, first makes GET requests to load the submission and find comments to reply to. If they think you're a low quality user, I don't see why they'd bother just locking down POSTs.
Obviously. But I was responding to "what is sinister about a GET request". To put it a slightly different way, it does not matter so much whether the request is a read or a write. For example, DNS amplification attacks work by asking a DNS server (a read) for a much larger record than the request packet requires, while faking the request IP to match the victim. That's not even a connection the victim initiated, but that packet still travels along the network path. In fact, if it crashes a switch or something along the way, that's just as good from the point of view of the attacker, maybe even better as it will have more impact.
I am absolutely not a fan of all these "are you human?" checks at all, doubly so when ad-blockers trigger them. I think there are very legitimate reasons for wanting to access certain sites without being tracked - anything related to health is an example.
Maybe I should have made a more substantive comment, but I don't believe this is as simple a problem as reducing it to request types.
It's equally easy to forget about users from countries with way less freedom of speech and information sharing than in Western rich societies. These anti-abuse measures have made it much more difficult to access information blocked by my internet provider during the last few years. I'm relatively competent and can find ways around it, but my friends and relatives who pursue other career choices simply don't bother anymore.
Telegram channels have been a good alternative, but even that is going downhill thanks to French authorities.
Cloudflare and Google also often treat us like bots (endless captchas, etc) which makes it even more difficult.
IP address is a fingerprint to be shared with third parties, of course it's relevant. It's not an ulterior motive, it's explicit: it's not caring about your traffic because you're not a good product. They can and do differentiate by requiring a sign-in. They just don't care enough to make it actually work, because they are adtechs and not interested in you as a user.
> For every…let’s call them ‘privacy-conscious’ user, there are 10 (or more) nefarious actors that present largely the same way.
And each one of these could potentially create thousands of accounts, and do 100x as many requests as a normal user would.
Even if only 1% of the people using your service are fraudsters, a normal user has at most a few accounts, while fraudsters may try to create thousands per day. This means that e.g. 90% of your signups are fraudulent, despite the population of fraudsters being extremely small.
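The arithmetic behind that claim, with made-up but plausible numbers (all figures here are hypothetical, just to show how a tiny fraudster population dominates signups):

```python
# Hypothetical illustration of the signup-fraud ratio
legit_users = 10_000
fraudsters = 100                 # ~1% as many as legitimate users
signups_per_legit_user = 2       # a normal user makes a few accounts
signups_per_fraudster = 1_800    # scripted: thousands per day

legit_signups = legit_users * signups_per_legit_user    # 20,000
fraud_signups = fraudsters * signups_per_fraudster      # 180,000
fraud_share = fraud_signups / (legit_signups + fraud_signups)
print(f"{fraud_share:.0%}")  # 90%
```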
Has anybody actually been stopped from doing nefarious things by these annoyances?
It's like at my current and previous companies: they impose a lot of security restrictions. The problem is, if somebody wants to get data out (or in), they can do it anytime. The security department says it's against "accidental" leaks. I'm still waiting for a single instance where they caught an "accidental" leak; all they do is introduce extra steps, and in the end I achieve the exact same thing. Even when I caused a real potential leak, nobody stopped me from doing it. The only reason they have these security services/apps is to push responsibility onto other companies.
Heck, I cannot even pass ReCAPTCHA nowadays. No amount of clicking buses, bicycles, motorcycles, traffic lights, stairs, crosswalks, bridges and fire hydrants will suffice. The audio transcript feature is the only way to get past a prompt.
Just a heads up that this is how Google treats connections it suspects originate from bots: it silently keeps you in an endless loop, promising success if you complete the challenge correctly.
I discovered this when I set up IPv6 using hurricane electric as a tunnel broker for IPv6 connectivity.
Seemingly Google has all HEnet IPv6 tunnel subnets listed for such behaviour, without it being documented anywhere. It was extremely annoying until I figured out what was going on.
Sadly my biggest crime is running Firefox with default privacy settings and uBlock Origin installed. No VPNs or IPv6 tunnels, no Tor traffic whatsoever, no Google search history poisoning plugins.
If only there was a law that allowed one to be excluded from automatic behavior profiling...
There's a pho restaurant near where I work which wants you to scan a QR code at the table, then order and pay through their website instead of talking to a person. In three visits, I have not once managed to get past their captcha!
(The actual process at this restaurant is to sit down, fuss with your phone a bit, then get up like you're about to leave; someone will arrive promptly to take your order.)
I’ve only seen that at Asian restaurants near a university in my city. When I asked I was told that this is a common way in China and they get a lot of international students who prefer/expect it that way.
The worst part is that a lot of it is mysteriously capricious with no recourse.
Like, you visit Site A too often while blocking some javascript, and now Site B doesn't work for no apparent reason, and there's no resolution path. Worse, the bad information may become permanent if an owner uses it to taint your account, again with no clear reason or appeal.
I suspect Reddit effectively killed my 10+ year account (appeal granted, but somehow still shadowbanned) because I once used the "wrong" public wifi to access it.
Same here. I occasionally encounter websites that won't work with ad blockers, sometimes with Cloudflare involved, and I don't even bother with those sites anymore. Same with sites that display a cookie "consent" form without an option to not accept. I reject the entire site.
Site owners probably don't even see these bounced visits, and it's such a tiny percentage of visitors who do this that it won't make a difference. Meh, it's just another annoyance to be able to use the web on our own terms.
It's a tiny percentage of visitors, but a tech-savvy one, and depending on your website they could be a higher-than-average percentage of useful users or product purchasers, so the impact could be disproportionate. What's frustrating is that many websites don't even realise it's happening, because the reporting from the intermediary (Cloudflare, say) is inaccurate or misrepresents how it works. Fingerprinting has become integral to bot "protection". It's also frustrating when people think this can be a drop-in and put it in front of APIs that are completely incapable of handling the challenge, with no special-casing (encountered on FedEx, GoFundMe), much like the RSS-reader problem.
Hey, same here! For better or worse, I use Opera Mini for much of my mobile browsing, and it fares far worse than Firefox with uBlock Origin and ResistFingerprinting. I complained about this roughly a year ago on a similar HN thread, on which a Cloudflare rep also participated. Since then something changed, but both sides being black boxes, I can't tell if Cloudflare is wising up or Mini has stepped up. I still get the same challenge pages, but Mini gets through them automatically now, more often than not.
But not always. My most recent stumbling block is https://www.napaonline.com. Guess I'm buying oxygen sensors somewhere else.
Same. If a site doesn't want me there, fine. There's no website that's so crucial to my life that I will go through those kinds of contortions to access it.
For some it was effective.
This doesn't reflect the OP's case, though.