MartinMcGirk's comments | Hacker News

I'll take the other side of that argument. Without human spaceflight inspiring the public by pushing the boundaries of what humans can achieve, you'd never get the public on board or the political buy-in to send unmanned craft anywhere.

If you didn't have human spaceflight, you'd get the budget for GPS, military, and maybe weather satellites, and not a whole lot else.


What I have found is that using LLMs to do the same stuff I already knew how to do is not super enjoyable. I know web application development, and having some agent build it for me is just a productivity gain with no job satisfaction. So I've been where you are recently.

But on the flip-side, using the AI to help me learn the bits of programming that I’ve spent my whole career ignoring, like setting up DevOps pipelines or containerisation, has been very enjoyable indeed. Pre-AI the amount of hassle I’d have to go through to get the syntax right and the infrastructure set up was just prohibitively annoying. But now that so much of the boilerplate is just done for me, and now that I’ve got a chat window I can use to ask all my stupid questions, it’s all clicking into place. And it’s like discovering a new programming paradigm all over again.

Can 100% recommend stepping outside your comfort zone and using AI to help where you didn’t want to go before.


I'm seeing a lot of negativity here so I'll take the positive view.

I regularly use ChatGPT for product advice. The other day I was replacing a wireless router, and I needed a WiFi 6 vs 6E vs 7 vs mesh vs not-mesh breakdown. It was great to have the AI explain all that, and it would recommend products, but there were no pictures, no links, and definitely no price comparisons.

So I had to copy and paste product names from my GPT chat app into Google, click around a bit, and repeat that across 3 or 4 products, and then remember all of it. I have no objection to ChatGPT showing me products I've asked about, doing a price comparison, and linking me. And I don't even have a problem with them wrapping those links in affiliate links if they want to.

As long as the suggestions they give aren't based on who pays them the most money then I think it's a value add.


> As long as the suggestions they give aren't based on who pays them the most money then I think it's a value add.

That's exactly what this announcement is leading up to. OpenAI is copying everything Google does: search, ads, IDE.


Actually it seems like region-specific copyright deals are still very much in play. If I visit that playlist from Australia then 14 of the full movies are unavailable and hidden. But VPN'ing through the US shows me the whole set.


Yes, YouTube fully supports region specific availability and has for a very very long time.


Yes; however, the original broadcast contracts may not have clauses in them covering streaming services, so revisiting those contracts would be a costly process.


I feel like this is heavily connected with the idea of legacy. I grew up in Scotland, and lots of the buildings, culture, etc, have been around for hundreds of years. It would be nice to feel like something I was building would last as long and outlive me. It doesn’t though.

Sometimes I think that this is just the nature of software development. Most of the stuff I build is built to solve an immediate business problem. It probably lasts 5-10 years, and then someone rewrites it in a new language, or more often the task isn't relevant anymore so the code gets deleted.

I find myself thinking that maybe if I’d been in civil engineering or something then I’d be building stuff that lasts, but speaking to people who’ve worked a long time in construction has taught me that it’s the same there. Most of the buildings that go up come down again within a few decades, once regulations or fashions change or the new owner of the site wants something else.

Every so often something like a Cathedral gets built, and those get built to last. But most people don’t get to work on those. If there’s a software equivalent of a Cathedral then I still haven’t found it.


People are mentioning core OS libraries and kernels, but the Space Jam movie website from '96 is still up. I bet the guy that wrote that didn't think it'd be around nearly 20 years later. I hope it never goes down.

https://www.spacejam.com/1996/jam.html


It's comforting to think that 2000 years from now, all that may remain of the early web and modern culture is the Space Jam website. Maybe Space Jam the movie will be looked at as our Gilgamesh or Iliad.


And the Galaxy Quest site -- will people in the future know it is a parody?

http://www.questarian.com/


Well that's just disappointing:

> Greater then [an error occurred while processing this directive] Pages requested since December 28, 1999


yes, because an LLM just trained on this conversation.

gazelle battery chair figment


> nearly 20

Nearly 30. Sorry :-)


From the outset, the framing of a Web site as staying "up" (or going "down") is definitely an accessory to the mindset that leads to the effect observed. Rather than regarding Web sites as collections of hypertext-enabled publications like TBL spelled out in "Information Management: A Proposal", we have this conception of a Web page being something like one part software, two parts traditional ephemera. But they're documents. Merely observing this difference in framing and being conscious of the contrast between intent/design and practice can go a long way to addressing the underlying problems that arise from the practice.


Funny that the original Space Jam website can be considered the internet's equivalent to a cathedral.


That's only for as long as there is someone to keep it up. It will disappear the moment the domain isn't renewed.


https://imdb.com started in 1990.


SQLite intends to keep their cathedral intact until 2050.

https://sqlite.org/lts.html


I never saw this page before. Brilliant! Like catnip for nerds.

    Disaster planning → Every byte of source-code history for SQLite is cryptographically protected and is automatically replicated to multiple geographically separated servers, in datacenters owned by different companies. Thousands of additional clones exist on private servers around the world. The primary developers of SQLite live in different regions of the world. SQLite can survive a continental catastrophe.
SQLite can survive a continental catastrophe!!!


SQLite is awesome. TIL that its database structure is robust enough to have been chosen as one of only 4 Recommended Storage Formats for long term data preservation[0].

> "Recommended storage formats are formats which, in the opinion of the preservationists at the Library of Congress, maximizes the chance of survival and continued accessibility of digital content.

[0] https://www.sqlite.org/locrsf.html
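
As a toy illustration (my own sketch, not taken from the LoC guidance or the SQLite docs; the file and table names are made up for the example), part of what makes the format attractive for preservation is that schema and data travel together in one file that a stock library can read back:

    import sqlite3

    # A single .sqlite file carries both the schema and the data, so a
    # future reader only needs an SQLite library to recover everything.
    con = sqlite3.connect("archive.sqlite")
    con.execute("CREATE TABLE IF NOT EXISTS documents (name TEXT, body BLOB)")
    con.execute("INSERT INTO documents VALUES (?, ?)", ("readme", b"hello, future"))
    con.commit()

    # Reading it back requires nothing beyond the standard library.
    for name, body in con.execute("SELECT name, body FROM documents"):
        print(name, body)
    con.close()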


Along with... xls.


I'd say something like TeX might be a software cathedral, and even that one isn't going to last much longer than Knuth himself (almost nobody today runs TeX itself; they run a compatible software platform, and some are entirely different).

But even a Cathedral changes over time, and your work may not last; but all human work is a shrill scream against the eternal void - all will be lost in time, like tears in rain. The best we can do is do the best with what we have in front of us. And maybe all the work you did to make sure your one-off database code correctly handled the Y2038 problem back in 2000 will never be noticed, because your software is still running and didn't fail.


Pretty sure most LaTeX papers and chapters are formatted with TeX82, though translated from Pascal to C.


Most people use pdfLaTeX or XeTeX, which may have some basis in the original TeX82 but have since moved on, at least in code.


Aren't pdfTeX and XeTeX just patched versions of TeX82?


> pdfTEX is based on the original TEX sources and Web2c

So it at least ties back to the original code. XeTeX appears to be similar, but with even more extensions, while some of the other "TeX" tools are complete rewrites.


Maybe, but I've moved on to tectonic. Which surely isn't in the same language.


Last I checked (though it's been a while), I think tectonic basically wrapped the xetex code.


Externally visible stuff: definitely. You really need something like TeX, or by now Linux and some of the GNU tools, to stand the test of time.

My very first job was to work on the backend of some software for internal use. You have probably all bought products that were "administered" in said software. When I worked on it, it was 15 years old. It evolved over this time of course, but original bits and pieces were all around me, and I evolved bits and pieces of it, as did others. By now it's been another 15 years, and while I know they did a rewrite of some parts of the system, I bet that some of both my code and the code that had been written almost 15 years before I started there is still around, and you and I keep buying products touched by that software without knowing. The rewrite was also done by at least one person who was around when the original was built. He learned a new language to implement it, but he took all of his domain and business logic knowledge over.

It's even funnier because I knew the guy who wrote some of the code I worked with directly, from a different company, but I had no idea he had worked there or on that project until I saw his name in the CVS history. Yes, CVS. I personally moved it to SVN to "modernize". I hope that was recognized as "technical debt" and they moved to git, but I wouldn't know.


I think this is by far the most common outcome for work. A chef's product only lasts an hour at most. Most jobs don't create anything other than a temporary service.


That’s because most of the things we need are temporary services, of course. You need dinner. And gas to get to work. And a roof that will last 10 years. Etc.


> A chef's product only lasts an hour at most.

A chef's recipe, which is also something the chef creates, may last hundreds of years.


That's a very romantic view of the life of a chef, but I'm afraid that, by an overwhelmingly enormous margin, the output of a chef is servings of food, not century-spanning recipes.

The comparison between developer and chef is kind of a stretch but there is a similarity of sorts. It could be argued that the recipes are analogous to the algorithms or patterns that we use day-to-day in software development, and that the servings of dinner are analogous to the applications we build. The algorithms/patterns and recipes might persist for a while, the apps and food have a shorter lifetime.

I'm not advocating for throwaway or disposable code (though I'm not above implementing a quick hack, personally) but I don't think we need to think less of ourselves or our profession because we're producing things which currently have a shelf-life of years or decades at most.


But tbf, that happens once in a billion recipes. 99% of new recipes are forgotten, often after a week or two.


Even some of the most famous recipes can change over time. I’m sure that things like McDonald’s burgers are slightly different now.

Perhaps the most enduring thing a chef can do is invent a new technique.


The YouTuber Max Miller's channel "Tasting History with Max Miller" has a number of good examples of old recipes for now-familiar foods. His Semlor episode[1] compares a recipe from 1755 with one from more modern times, and there are substantial changes.

[1] https://www.youtube.com/watch?v=0Ljm5i5N6WQ


The chicken nugget and McChicken batter are different from when I worked at McDonald's as a kid. Naturally it was better back then...


IIRC they switched from frying the fries in beef tallow to using vegetable oil, and the fries have never been quite as good.


When I was young I spent a year framing houses. I reflect a lot on how much longer those have lasted than so much of the software I've built along the way.


Very true, I'd expect a house to outlast almost all of our software. There's really not much permanence in this industry.

I've built IKEA bookcases that have outlasted most of the stuff I have written.


It isn't necessarily software, but algorithms will likely last a long time. Euclid's algorithm and the sieve of Eratosthenes are still around. Researchers are still developing new ones. Just like building techniques may outlive the buildings that were created in their shadows.
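
For illustration, here are the textbook versions of both in Python (just a minimal sketch of the classic algorithms, nothing specific to the parent comment):

    # Euclid's algorithm: greatest common divisor by repeated remainders.
    def gcd(a, b):
        while b:
            a, b = b, a % b
        return a

    # Sieve of Eratosthenes: all primes up to n by crossing out multiples.
    def primes_up_to(n):
        is_prime = [True] * (n + 1)
        is_prime[0:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                for multiple in range(p * p, n + 1, p):
                    is_prime[multiple] = False
        return [i for i, prime in enumerate(is_prime) if prime]

    print(gcd(1071, 462))    # 21
    print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]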


100% agree.

But I like to think that ideas and solutions and products can be legacies.

It's semi-uncommon to write code that legitimately lasts 5+ years.

But it's very common to work on projects/products/companies that last 15+ years.

And I have to be content with that.


Most code that survives 5 years survived for all the wrong reasons. That said, code that's survived five years is often a profitable product with a reasonable human at the helm telling us not to touch it.


> If there’s a software equivalent of a Cathedral then I still haven’t found it.

Probably some parts of Windows or Linux, or the GNU tools, or something like that which, while still being updated, also has ancient components hanging around.


Good point about GNU coreutils: https://www.gnu.org/software/coreutils/

Also: That includes the man pages. They should be around in 50 years.


> I find myself thinking that maybe if I’d been in civil engineering or something then I’d be building stuff that lasts, but speaking to people who’ve worked a long time in construction has taught me that it’s the same there.

Right. People who assume otherwise aren't spending much time browsing the relevant subjects on Wikipedia or historical registers, or just paying attention to their municipality. Simple demonstration: look into how many of the Carnegie libraries that were built are now gone versus how many are still around.


> If there’s a software equivalent of a Cathedral then I still haven’t found it.

Actually you have. It’s HN! The website that hasn’t changed in decades… used by the most tech savvy people in the world!


Consider constructions that have survived thousands of years, such as the Tarr Steps or Stonehenge, or more recent constructions such as Roman roads, Dunfermline Abbey, or St Bartholomew's Hospital.

These sorts of constructions have been repaired and re-set hundreds of times over their existence, and have sometimes gone through periods of destruction during wars and natural disasters, and of disrepair, followed by periods of restoration and reuse. At a certain point, very little or nothing of the original construction really remains, but you can nevertheless draw a line through hundreds or thousands of years of history.

Software may be more like this: continually rebuilt and maintained, but still physically or philosophically related back to some original construction. Nobody uses Multics any more, but almost everything in use today is derived from it in some way.


Why though? Does it really matter if something outlives you by 200 years? 500? On a not-that-long timeline like 10,000 years, nothing lasts. The "cathedral-like" timeline is completely arbitrary I think.

Imho, there’s freedom in accepting that nothing I produce will last a long time.


Games seem to have a longer shelf-life, or at least tend to be passed around for longer. Some of my Flash games from 15 years ago are still being passed around game torrents and still playable on Newgrounds, and the console games I worked on are still playable in emulators and part of rom collections (and the one physical game I worked on, a PSP game, is still available to buy used on Amazon).

Now, how many people are actively playing those games? Probably very few people. But at least it's still there when people get the urge, or decide to play through a collection.


I agree with your takeaway message, but the timeline isn’t completely arbitrary. From the perspective of humans appreciating things, there’s a difference between something that endures for .01x vs. 10x a person’s expected lifespan.


That is when you are shooting a car into space. ;)


Because what matters is not the number you can think of, but the scale proportionate to human life / needs.

For somewhat related context: all my life until recently I rented, but I've now become a homeowner. My strategy with things I bought for daily use was that they only needed to survive until the next move. For many things it wasn't economical to take them with me to the new place. So, for example, I didn't want to buy a "proper" pan or skillet, and was happy with a Teflon one, just because it would wear out around the time I needed to move again. I could just throw it away and get a new one. Now that I don't intend to move, I'd choose to buy things that last, because the cost of moving versus the cost of garbage cleanup has changed.

Now, when it comes to software, the industry looks overwhelmingly motivated by short-term benefits where, in principle, it shouldn't be. And this is the surprising part. Lack of vision and organization leads to a situation where we never even try to make things that last. There are millions of e-commerce websites in the world, but they are all trash because nobody could spend enough time and effort to make one that was good (so that others could replicate it or build on top of it, and also have a good product). We have the most popular OS, which is absolute garbage, both the core and the utilities around it, due to shortsightedness and even unwillingness to plan on the part of the developers. Same thing with programming languages, frameworks, etc.

So, looping back to your question: why does the magnitude matter? What matters is how your own planning compares to the lifespan of your product. Many times in my career I was building an already broken thing, from a design that was known to be broken from the start or very soon afterwards. And that had nothing to do with changing requirements; it had everything to do with the rat race of getting a product to market before someone else does, or before the investors' money dries up. The idea behind "things that last" is that, at least at the time you are making them, you cannot see how whatever you are making could be done better (as in more reliable or longer-lasting).

At the end of the day, what happens now in programming is that things that shouldn't outlive their projected deadlines do, or they predictably die very fast. But we don't have any part of the industry that builds things to last. We just keep stuffing the attic with worn-out Teflon pans.

Compare this to, for example, painters, who, despite knowing that most of their work will likely be lost and forgotten, still aspire to produce "immortal" works of art, and if a picture takes a lifespan or more to make, then so be it. (In the past, some works were carried out by generations of artists, especially books, where scribes were succeeded by their children, who'd continue writing the book after their parent's death.)


To your point about buildings, I wish we considered the "technical debt" of our society's built infrastructure more. It seems we went very wrong with this habit of building and rebuilding on such short cycles, especially on large projects that have much longer lasting consequences on the more important infrastructure of our natural ecosystems. All that carbon burned to extract, produce, and transport building materials. All those sprawling roadways built over habitats and forcing people into unsustainable patterns of living, burning more carbon in their day to day to get around. This debt needs to be measured and it needs to be addressed with high priority.


But the thing is, the ghost of the code lives on. Many times I've seen specific database designs and technical decisions, all made to accommodate a solution that hasn't existed for a decade or so.


Core libraries and kernels, like libc or the Linux/NT kernel.


To invoke the RIIR train, there's probably a lot of fertile ground in being the definitive Rust implementation of "solved" foundational libraries: zlib, libpng, libjpeg, etc. Something ubiquitously used which has very minimal or no churn. As Rust usage grows, dependence on the original C implementation will diminish.


It will diminish, but never go away until POSIX foundations or graphical programming standards get replaced (most of them defined via C ABIs).

When I was mostly doing C++ and C in my day job, 20 years ago, they were the go-to languages for any kind of GUI or distributed computing; nowadays they have mostly been replaced for those use cases.

Yet they are still there, as native libraries in some cases, or as the languages used to implement the compilers or language runtimes used by those alternatives, including Rust.


Even those change substantially over time, even if they're not directly rewritten, things get updated and relocated.

It's like how the streets in Rome have been the same for much longer than many of the buildings have been standing, even though the buildings are hundreds of years old.


Software Package of Theseus.


>If there’s a software equivalent of a Cathedral then I still haven’t found it.

That one old file dialog window that still somehow shows up in windows 11 from time to time?

Or the Linux kernel.


This is completely random, but you remind me of something that makes me laugh every time I use Google Maps navigation.

Last year I was messing around with my phone's text to speech settings, and I selected a male voice but cranked the pitch setting to the max. I proceeded to forget about it. For some reason when navigating, the voice is still the default pitch. Maybe about 1 in 20 turns at random though, the voice has the pitch cranked up to the max. It's rare enough that my 4 year old and I always burst out laughing when it happens.


I expect few will be using or working on the Linux kernel in 30-50 years.


I'll take the other, more optimistic side of this.

The reason they're spending so much time on reinventing meetings is because they see that the killer use case for VR/AR headsets is to replace your laptop with a device that shows you a screen, or multiple screens, wherever you go, whenever you want it there.

If you start from the premise that eventually people won't huddle over their 16" laptop screens to make calls, but will instead be wearing a headset that shows them as many giant monitors as they like, then everything else flows from that. How do you handle calls in a world where every user is essentially wearing a visor? How do you collaborate? These aren't solved problems yet, but I'm a firm believer that the headset will replace the laptop, and if that happens I'll be glad somebody has put the effort in to make everything else around that as seamless as it can be.


I really can't understand why people want to spend 8 hours a day with a VR headset on. I have a Rift S, and after a couple of hours I can't wait to take it off. I'm sure as hell not going to wear one all day just for my job, no matter how good the software is.


We don't, not with our current headsets. But the Rift S has been discontinued for a while, and the new Meta Quest Pro headset is significantly more comfortable and capable than the Quest 2, and far better than the Rift S.

Within a few years, very comfortable lightweight goggles or glasses will have capabilities that can replace phone and PC interfaces.

Your comment is the equivalent of someone who bought an early airplane and based on that decides that commercial jets are impractical for long flights. Or someone who saw the early CRTs and decided that since they preferred reading paper teletype output that computer monitors would never be usable.


I have yet to see any proof that the glasses style of VR is anywhere near feasible. I have been a VR adopter since 2014, when I bought the Rift DK2, and the form factor is still practically the same, despite the costs being a lot higher for these new models.

I don't think your analogies work at all. I think VR is great for games, which is what Oculus originally marketed it for; it's only since Facebook bought them that the direction has totally changed.


While it doesn't have a ton of power, the Vive Flow[0] seems like a great form factor. With a few more years of miniaturization and advancement, you will probably see much more capable VR headsets in a more glasses-like form. It weighs a quarter of what the Quest Pro does.

[0] https://www.vive.com/us/product/vive-flow/overview/


The Meta Quest Pro's form factor is a very significant decrease in size. Multiple AR/VR glasses, such as the Leap 2 and Spectacles, work great; they just need a wider field of view.


We'll figure out how to interface with the optic nerve eventually.


I'm happy spending all day with sunglasses on. Before too long, this won't be a much different experience.

Already Lenovo and others have pretty lightweight glasses that provide a basic (1080p or similar) screen, the Quest Pro looks like a definite step forward from prior VR headsets, Magic Leap and HoloLens are on their second iterations, micro-OLEDs are coming, Apple is working on something, etc…

If you see the direction of travel, then I'd think you might at least admit that your Rift S experience (which was long in the tooth even before the Quest Pro announcement) is no useful guide to what the VR/AR future people are excited about will actually be like.


I think if we change our assumptions about headsets from what they are today to where they could get to, something like wearing glasses every day (which seems a long way off), then this sounds very plausible to me. It's a race to work out how convenient we can make these headsets.


Of course. Pop your VR headset on at a café, I'm sure it'll work out great. Bash people with your wide arm movements at Starbucks and look like a clown.

Those meetings that could have been an email? Now they're the worst of both worlds, remote but still forced to have this meeting in VR.

Instead of having a screen (or multiple screens), force people in a shitty virtual reality where you have to walk around still, but this time it's either by teleporting with joysticks or making you puke with smooth movement.


> Pop your VR headset on at a café, I'm sure it'll work out great. Bash people with your wide arm movements at Starbucks and look like a clown.

Could you be strawmanning any harder?

I've been fully remote for 3 years now and I have done exactly zero work from any kind of cafe. I have, however, done one day of work in a virtual desktop, but at the time (a couple of years back) I didn't find it any better than just using a normal desk and monitor setup.

>Those meetings that could have been an email?

This is entirely a company issue. This VR tech is meant for actual meetings.

>Instead of having a screen (or multiple screens), force people in a shitty virtual reality where you have to walk around still,

You are completely misunderstanding the tech. You can literally have an infinite number of screens in a VR environment, if you still want to cling to the notion of screens.

All this being said, I doubt this will catch on. However, the facial expression capture tech will be nice for future gaming and chat applications.


I misunderstand the tech so much that I have a Valve Index. Working with multiple screens on that is absolute hell, taking up your entire view, locking you into a bubble. Enjoy your neck pain when you need to turn your head every other second. Oh and good luck with using both a pointer and a keyboard while in VR, I hope you remember where your keys are.

Facial capture for gaming will be absolutely worthless, and so will it be for chatting. Firstly because it already exists without wearing an expensive helmet that makes you sweat and is uncomfortable, secondly because nobody wants to wear a damn helmet so that people can see you smile ingame, and thirdly because nobody wants to put 1k+ on a device whose uses are extremely limited.


>Working with multiple screens on that is absolute hell, taking up your entire view, locking you into a bubble. Enjoy your neck pain when you need to turn your head every other second.

How is this any different from having multiple screens?

>Oh and good luck with using both a pointer and a keyboard while in VR, I hope you remember where your keys are.

What? Remember what keys? Like keyboard keys?

>Facial capture for gaming will be absolutely worthless, and so will it be for chatting.

Based on what? Just because you don't want them? How many people are cybering in VRChat at this very moment? Don't you think these people would very much like a way to show their facial expressions as they make them?

>Firstly because it already exists without wearing an expensive helmet that makes you sweat and is uncomfortable

Citation needed. What tech already does this? Also, I don't sweat much with my Oculus Quest headset; why do you think this is any different?

>secondly because nobody wants to wear a damn helmet so that people can see you smile ingame

I would very much like this. It would make VR games even more immersive if expressions would be captured and translated to the in-game models.

>and thirdly because nobody wants to put 1k+ on a device whose uses are extremely limited.

Why is the price an issue? You said that you have a Valve Index; looking at the price, the headset with controllers and the tracking stations costs over 1k. Did you lie about having a Valve Index, or where is the disconnect? Anyway, yes, 1.7k for a VR headset is a lot, but the tech will find its way to cheaper headsets as well, so the price argument seems very weird to me.


Businessy apps like Immersed already have keyboard/mouse passthrough. How it works there is that you define a box at the location of your keyboard and mouse and can see it all the time. With the better cameras of the Pro I imagine it'll be even better. Some popular keyboards, like the Apple Magic Keyboard, are also supported for passthrough in a fancier way.

If the resolution was bumped high enough you could probably use 3 monitors like you do in real life at a similar distance with similar resolution.

Wouldn't facial capture make chat in an MMO better too? Certainly not a requirement but it could make it more immersive and enjoyable?


What if the headset just projects a display into your view and you can set where it is in relation to the real world AR-style?

Then you just pop your displayless laptop down on the café table and start typing away like you would with a normal laptop, except you're the only one who can see your screen.

No need to wave around any controllers, no danger of accidentally smacking someone's grande venti mocha latte.


So, instead of carrying around a small laptop that folds flat and fits in pretty much any bag, you're carrying a keyboard, maybe a mouse, and a full-on head-sized VR device that only fits in a backpack or a tote bag? Along with lacking a laptop's storage, offline capability, and battery life?

The future sounds _great_.

In addition, I will not let you besmirch the name of the venti mocha latte. Sometimes all you need is three times your daily sugar intake in a single, deadly, way too hot cup of very average coffee.


The "keyboard" can just be a regular laptop with no display. All the storage and power you need.


This all reminds me of the tech community's reaction to the first iPod and iPhone.


The reaction to the iPhone was incredibly positive when comparing it to its closest competitors. It was obvious, immediately, that it was a revolutionary change in UI for phones.

Yes, the reaction to the iPod was negative, but honestly, at the time, the iPod wasn't dramatically and obviously better than the competition. It took a couple generations to show that it was not only better, but it was getting better considerably faster than the competition.



If, at some point, there is a wireless-capable VR/AR headset that can replace a ~32" 1440p monitor, I'll be the first in line to buy one if it's priced under 1k. Maybe even under 2k.

I've got so much more I want to use my desk space for than a monitor =)


Not that I'm sticking up for Russia here, but I think that assumes that there is a universal canonical "right" and a canonical "wrong", which I am not convinced about.


It was wrong for the West to taunt Putin with baby steps to perhaps, maybe, someday, having Ukraine in NATO. It was wrong for Zelensky to see the build up of troops on his border and do nothing for a good solid three months leading up to the invasion.

But none of that makes Russian troops invading, shelling and bombing hospitals, violating evacuation corridor agreements by firing on fleeing civilians, or any of the other senselessly hostile things they've done OK. They are wrong, their actions are evil.

I realize you're not sticking up for Russia, but I do think that if we're talking morals, it doesn't require moral absolutes (which I do think exist, but that's another discussion) to be able to judge the current situation. That is, I don't think condemning Russia's actions requires a priori agreement to a universal canonical moral standard.


There's an arrow button at the bottom right of the page which, if you click it, reveals a navbar with some links to a careers page and so on.

You can also click and drag to rotate the cube. But yeah, it doesn't do anything particularly spectacular.


The “Navbar” links don’t work. At least not on my iPhone.


Thanks! No navbar on my browser, just a cube toy.


If you scroll down (at least on my phone) the cube rotates for a while and then the navbar appears.


Not to flog a dead horse here, but your argument is that without the extension people will try it a couple of times and then forget about you.com, and go back to default.

But your target demographic is here telling you that with the extension requirement, they will try it zero times, because they don't want to change their default to a service they haven't tried yet.

A better idea would be to unblock search, show results to everyone regardless of browser defaults, and push hard for the extension install on the search results page once people can see the value prop you're offering.


I can see why you'd pay a remote staff member differently from an on-prem staff member. As others have said, if you're hiring remote then your talent pool is effectively the whole world, so price competition is fiercer.

But if everyone is remote or everyone is on-prem, then location-dependent pricing is complete nonsense. The employees are all still providing the same value to the company. Claiming that you can’t pay someone as much because they live in some rural town somewhere is just the company finding an excuse to keep a larger share of the value created by that employee.


Not obviously one way. There's more staff globally but there's also more places they can work.

This seems like it will turn out in favour of employers with the world being the way it is, with a lot of people in poor places.

But I've been a remote hiring manager before, and it happens that people turn down an offer in favour of another remote firm.


Isn't Google pushing for a return to on-prem though? So it's the first situation that you mentioned.


Yeah my comment is kind of tangential to the post and aimed more at the general idea of location-dependent salaries.

I don't argue with the concept of saying "we want you back in the office; if you don't want to come to the office then take a pay cut", as is going on here.


If you remove location-dependent pricing, which location do you align the new single price on? Mountain View? London? Hyderabad?


You don't align the price for remote work to a location. You set it to whatever it takes to get enough qualified people to do the work you need done.


Fair enough, but that basically means no more employees in the US.

