
I’m pleasantly surprised that Tyrannosaurus rex’s tiny hands were depicted so accurately. As far as I recall, scientists are still puzzled about why it even had hands. Apparently, they were too small to be useful for anything, not even scratching its face.

Not sure if you're joking or not, but I have to deal with this upcoming change at some point and still haven't read in detail why they decided to do this.

Could anyone clarify?


Hi there, ISRG co-founder and current board member here. In brief, shorter lifetimes force people to automate (which, e.g., avoids outages from manual processes) and mitigate the broken state of revocation in the Web PKI. That latter point especially is what I understand to be driving the Web PKI toward ever-shorter lifetimes.

I actually remember the discussion we had in ~2014 about what the default certificate lifetime should be. My opening bid was two weeks -- roughly the lifetime of an OCSP response. The choice to issue certificates with 90 day lifetimes was still quite aggressive in 2015, but it was a compromise with an even more aggressive position.


With the move to ever-shorter certs, the risk posed by a Let's Encrypt outage is higher.

It would be nice to read more about what the organization is doing around resilience engineering, so we can stay confident in depending on it to issue renewals in time.

Do you publish any of this? DR plans? Etc.

I don't mean for this to be a negative - really impressed by LE - but we've had a lot of Cloudflare outages recently and my mind is on vendor reliability & risk at the moment.


I'm the technical lead for Let's Encrypt SRE.

Publishing more about our resilience engineering sounds like a great idea!

I'll get that on our blogging schedule for next year


Considering how many ACME clients are available today with all sorts of convenient features, and that many web servers nowadays have ACME support built in (Caddy, Apache mod_md, and recent Nginx), I believe the people who don't automate their ACME certificates are the ones who get paid hourly and want to keep doing the same boring tasks to get paid.
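To illustrate how little is involved these days, here's a rough sketch of fully automatic issuance in a Go service using golang.org/x/crypto/acme/autocert (the domain and cache directory below are placeholders):

  package main

  import (
      "fmt"
      "net/http"

      "golang.org/x/crypto/acme/autocert"
  )

  func main() {
      // autocert obtains and renews certificates via ACME automatically;
      // no cron jobs or manual renewals involved.
      m := &autocert.Manager{
          Prompt:     autocert.AcceptTOS,
          HostPolicy: autocert.HostWhitelist("example.com"),  // placeholder domain
          Cache:      autocert.DirCache("/var/lib/autocert"), // placeholder cache dir
      }

      srv := &http.Server{
          Addr:      ":443",
          TLSConfig: m.TLSConfig(),
          Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
              fmt.Fprintln(w, "hello")
          }),
      }
      // Empty cert/key paths: certificates come from the TLS config's
      // GetCertificate callback, which autocert provides.
      srv.ListenAndServeTLS("", "")
  }

And with a server like Caddy it's even less work: HTTPS with automatic renewal is simply the default.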

Because big companies have a habit of growing layers of bureaucracy. If a cert is valid for three years, a decent bunch of them will invent a three-month process around cert renewal, involving two dozen stakeholders, several meetings, and sign-off from the CTO.

The side-effect of this is that they become incapable of doing it any faster during an emergency. Private key compromised? Renewal takes two months, so better hope the attackers can't do too much damage before then. CAs in turn have large (= profitable) customers with such processes whom they really don't want to lose, so historically, when those customers have failed to renew in time during incidents, CAs have granted them exceptions to the revocation rules because they are "business critical" and doing it by-the-book would cause "significant harm". No CA is willing to be strict, because they'd lose their most valuable customers to the competition.

The only way to solve this is to force companies into adopting efficient renewal processes via an industry-wide reduction of certificate validity time. When you have to renew once a month you can't afford to have a complicated process, so you end up automating it, so there's no reason for CAs to delay cert revocation during incidents, so the internet is more secure. And because every CA is doing it, companies don't gain anything by switching to more lenient CAs, so the individual CAs have no incentive to violate the industry rules by delaying revocation.


Let's Encrypt is doing this because of a decision that CAs and browser makers made that lifetimes need to be reduced (browsers have been reducing the maximum lifetime of certs that they will trust).

The why is because it's safer: it reduces the validity period of private keys that could be used in a MITM attack if they're leaked. It also encourages automation of cert renewal, which is itself more secure, and it makes responding to incidents at certificate authorities more practical.


> it reduces the validity period of private keys that could be used in a MITM attack if they're leaked

If a private key is leaked, 45 days is sufficient to clean out the accounts of all that company's customers. It might as well be 10 years.

If cert compromise is really common enough to require a response then the cert lifetime should be measured in minutes.


It says on the home page it’s under development. I wouldn’t expect any games made with it yet.

> The engine is not released and is under heavy development.


Every major established engine is under development yet has many games to show.

Development includes testing. A game engine's test is games. Lack of games speaks volumes.


While this is true, I think some fields like game development may not always have this problem. If your goal is to release a non-upgradable game (FPS, arcade, single-player titles), maintenance may be much less important than shipping.

edit: typos


I'm trying to understand where this kind of thinking comes from. I'm not trying to belittle you, I sincerely want to know: Are you aware that everyone writing software has the goal of releasing software so perfect it never needs an upgrade? Are you aware that we've all learned that that's impossible?

> I'm trying to understand where this kind of thinking comes from.

I used to be a game developer.


this was basically true until consoles started getting an online element. the up-front testing was more serious relative to the complexity of the games. there were still bugs, but there was no way to upgrade short of a recall.

And why did we abandon this model?

Also, computer games existed at the same time as consoles. People were playing games loaded from floppy disks on computers back in the early 1980s.


I'm not saying that this model is profitable in the current environment, but it did exist in the real world at one point, which shows that certain processes are compatible with useful products, just maybe not with leading-edge, competitive products that need to make a profit today.

I think that is an applicable domain, but the problem is that every gamer I know who is not in the tech industry is vehemently opposed to AI.

Well, they just love complaining. You won't find many who profess to like DLC, yet that sells.

Nobody wants to ship that! They want perpetually upgraded live service games instead, because that's recurring revenue.

This resonates with me as well. This money will increase attention and, presumably, contributions to OSS, which will also benefit other entities implementing the same model later on. That’s the way to go towards sovereignty in software.

> Would there be enough independent developers to review millions of lines of code, patch out any back doors, or fork and maintain entirely separate projects, since none of the government projects can be trusted

That’s not far from how it is right now in OSS, even without governments in the chain. For example, see how the xz backdoor was found: https://en.wikipedia.org/wiki/XZ_Utils_backdoor


I don’t think it will get blocked, but I hope it does. Seeing the damage mainstream social media causes to friends and family members, I believe nobody loses if it just gets blasted away.

Amazing idea! I managed to win once by not trying to pressure the opponent but just looking for the most free-to-move sectors on the board.

Well done :) The "AI" for this game is really basic: it just tries to find any free move and take it. The other games do a basic minimax to look ahead a bit.
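For the curious, a bare-bones minimax looks roughly like this (an illustrative sketch only, not the actual code behind these games; the Game interface here is made up):

  package game

  import "math"

  // Move and Game are hypothetical types for illustration.
  type Move int

  type Game interface {
      Moves() []Move   // legal moves in the current position
      Apply(Move) Game // the position after playing a move
      Over() bool      // has the game ended?
      Score() int      // evaluation; higher is better for the maximizing player
  }

  // minimax looks ahead `depth` plies and returns the best score reachable,
  // assuming both sides play optimally.
  func minimax(g Game, depth int, maximizing bool) int {
      if depth == 0 || g.Over() {
          return g.Score()
      }
      best := math.MinInt
      if !maximizing {
          best = math.MaxInt
      }
      for _, m := range g.Moves() {
          v := minimax(g.Apply(m), depth-1, !maximizing)
          if (maximizing && v > best) || (!maximizing && v < best) {
              best = v
          }
      }
      return best
  }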

Glad you enjoyed it :)



There have been complaints about it on Reddit as well. I registered an account recently and to me the annoying thing is the constant "making sure you are not a bot" check. For now I see no reason to migrate, but I do admit Forgejo looks very interesting to self-host.

https://tangled.org/ is building on ATProto

1. Use git or jj

2. Pull-request-like data lives on the network

3. They have a UI, but anyone can also build one, and the ecosystem is shared

I've been considering Gerrit for git-codereview, and Tangled will be interesting when private data/repos are a thing. I'd rather not run multiple git hosts while I wait.


I, too, am extremely interested in development on Tangled, but I miss two features from GitHub: universal search and Releases. The web frontend of Tangled is so fast that I am still getting used to the speed, and jj-first features like stacked PRs are just awesome. It kinda reminds me of how Linux patch submission works.

It's fast because it lacks features

I'm more interested in gerrit/git-codereview for stacked commits than jj. A couple extra commands for new folks, not a completely new tool and lexicon


3 of the most exciting decentralized GitHub alternatives being developed today:

  Tangled (2024, ATP)
  Radicle (2019, IPFS) 
  Codeberg (2018, Gitea fork which supports decentralized protocols)

Which decentralized protocols does Codeberg support?

Codeberg doesn't currently support any, but Forgejo, the software it runs on, is implementing support for ActivityPub. Codeberg will likely enable it once support is stable.

> but I do admit Forgejo looks very interesting to self-host.

I've been self-hosting it for a few years now and can definitely recommend it. It has been very reliable. I even have a runner running. Full tutorial at https://huijzer.xyz/posts/55/installing-forgejo-with-a-separ....


I moved (from self-hosted GitLab) to Forgejo recently, and for my needs it's a lot better, with a lot less hassle. It also seems a lot more performant (again, probably because I don't need a lot of GitLab's advanced features).

I've been contemplating this for almost two years. GitLab has gotten very bloated, and despite disabling a number of services in the config, it continues to require ever more compute and RAM; we don't even use the integrated Postgres database.

There are a few things that keep me on GitLab, but the main one is the quality of the CI/CD system and the GitLab runners.

I looked at Woodpecker, but it seems so Docker-centric and we are, uh, not.

The other big gulf is issues and issue management. GitLab CE is terrible: weird limitations (no epics unless you pay), broken features, UX nightmares. But from the looks of it, Forgejo is even more lacking in this area? Despite this seeming disdain, the other feature we regularly use is referencing issue numbers in commits to tie work together easily. On this one, I can see the answer as "be the change - contribute this to Forgejo" and I'm certainly willing. Still, it's currently a blocker.

But my hope in putting this comment out there is that perhaps others have suggestions or insight I'm missing.

