I’m pleasantly surprised that Tyrannosaurus rex’s tiny hands were depicted so accurately. As far as I recall, scientists are still puzzled about why it even had hands. Apparently, they were too small to be useful for anything, not even scratching its face.
Not sure if you're joking or not, but I have to deal with this upcoming change at some point and still haven't read in detail why they decided to do this.
Hi there, ISRG co-founder and current board member here. In brief, shorter lifetimes force people to automate (which, e.g., avoids outages from manual processes) and mitigate the broken state of revocation in the Web PKI. That latter point especially is what I understand to be driving the Web PKI toward ever-shorter lifetimes.
I actually remember the discussion we had in ~2014 about what the default certificate lifetime should be. My opening bid was two weeks -- roughly the lifetime of an OCSP response. The choice to issue certificates with 90 day lifetimes was still quite aggressive in 2015, but it was a compromise with an even more aggressive position.
With the move to ever-shorter certs, the risk posed by a Let's Encrypt outage is higher.
It would be nice to read more about what the organization is doing around resilience engineering so we can continue to be confident in depending on it issuing renewals in time.
Do you publish any of this? DR plans? Etc.
I don't mean for this to be a negative - really impressed by LE - but we've had a lot of Cloudflare outages recently and my mind is on vendor reliability & risk at the moment.
Considering how many ACME clients are available today with all sorts of convenient features, and that many web servers now have ACME support built in (Caddy, Apache mod_md, and recent nginx), I believe the people who don't automate ACME certificates are the ones who get paid hourly and want to keep doing the same boring tasks to get paid.
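Even rolling your own is only a handful of lines. Here's a rough sketch using Go's golang.org/x/crypto/acme/autocert package; the domain and cache directory are placeholders, not a recommendation for any particular setup:

```go
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	// Manager obtains and renews certificates automatically via ACME
	// (Let's Encrypt by default), so shrinking lifetimes never become a chore.
	m := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,                     // agree to the CA's terms of service
		HostPolicy: autocert.HostWhitelist("example.com"),  // placeholder domain
		Cache:      autocert.DirCache("/var/lib/autocert"), // persist certs/keys across restarts
	}

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello over automatically managed TLS")
	})

	// Port 80 answers HTTP-01 challenges and redirects everything else to HTTPS.
	go http.ListenAndServe(":80", m.HTTPHandler(nil))

	srv := &http.Server{
		Addr:      ":443",
		Handler:   mux,
		TLSConfig: m.TLSConfig(), // serves managed certs and handles TLS-ALPN-01 challenges
	}
	// Empty cert/key paths: autocert supplies certificates via TLSConfig.
	if err := srv.ListenAndServeTLS("", ""); err != nil {
		panic(err)
	}
}
```

Once something like this (or certbot, or the built-in server support above) is in place, renewal happens in the background and the validity period stops being an operational concern.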
Because big companies have a habit of growing layers of bureaucracy. If a cert is valid for three years, a decent bunch of them will invent a three-month process around cert renewal, involving two dozen stakeholders, several meetings, and sign-off from the CTO.
The side effect of this is that they become incapable of doing it any faster during an emergency. Private key compromised? Renewal takes two months, so better hope the attackers can't do too much damage before then. CAs, in turn, have large (= profitable) customers with such processes whom they really don't want to lose, so historically, when those customers have failed to renew in time during incidents, the CAs have granted them exceptions to the revocation rules because they are "business critical" and doing it by the book would cause "significant harm". No CA is willing to be strict, because they'd lose their most valuable customers to the competition.
The only way to solve this is to force companies into adopting efficient renewal processes via an industry-wide reduction of certificate validity time. When you have to renew once a month you can't afford to have a complicated process, so you end up automating it, so there's no reason for CAs to delay cert revocation during incidents, so the internet is more secure. And because every CA is doing it, companies don't gain anything by switching to more lenient CAs, so the individual CAs have no incentive to violate the industry rules by delaying revocation.
What Let's Encrypt is doing here stems from the decision that CAs and browser makers made that certificate lifetimes need to be reduced (browsers have been reducing the maximum lifetime of certs that they trust).
The why is that it's safer: it shrinks the window during which a leaked private key could be used in a MITM attack. It also encourages automation of cert renewal, which is itself more secure, and it makes responding to incidents at certificate authorities more practical.
While this is true, I think some fields like game development may not always have this problem. If your goal is to release a non-upgradable game - FPS, arcade, single-player titles - maintenance may be much less important than shipping.
I'm trying to understand where this kind of thinking comes from. I'm not trying to belittle you, I sincerely want to know: Are you aware that everyone writing software has the goal of releasing software so perfect it never needs an upgrade? Are you aware that we've all learned that that's impossible?
this was basically true until consoles started getting an online element. the up-front testing was more serious compared to the complexity of the games. there were still bugs, but there was no way to upgrade short of a recall.
I'm not saying that this model is profitable in the current environment, but it did exist in the real world at one point, which shows that certain processes are compatible with useful products, just maybe not with leading-edge competitive products that need to turn a profit today.
This resonates with me as well. This money will increase attention to OSS and, presumably, contributions, which will also benefit other entities implementing the same model later on. That's the way to go towards sovereignty in software.
> Would there be enough independent developers to review millions of lines of code, patch out any back doors, or fork and maintain entirely separate projects, since none of the government projects can be trusted?
I don't think it will get blocked, but I do hope it will be. Seeing the damage mainstream social media causes to friends and family members, I believe nobody loses if it just gets blasted away.
Well done :) The "AI" for this game is really basic: it just tries to find any free move and takes it. The other games do a basic minimax to look ahead a bit.
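For anyone curious, the look-ahead version is essentially plain minimax. Here's a rough sketch in Go over a made-up Game interface; the type and method names are for illustration only, not the project's actual code:

```go
package ai

// Game is a hypothetical interface standing in for whatever the real game state looks like.
type Game interface {
	Moves() []int        // legal moves in the current position
	Apply(move int) Game // the position after playing move (does not modify the receiver)
	Score() int          // static evaluation from the maximizing player's point of view
	Over() bool          // true once the game has ended
}

const inf = 1 << 30 // sentinel larger than any real score

// Minimax looks ahead `depth` plies and returns the best score reachable,
// assuming both players play optimally. maximizing says whose turn it is.
func Minimax(g Game, depth int, maximizing bool) int {
	if depth == 0 || g.Over() {
		return g.Score()
	}
	if maximizing {
		best := -inf
		for _, m := range g.Moves() {
			if s := Minimax(g.Apply(m), depth-1, false); s > best {
				best = s
			}
		}
		return best
	}
	best := inf
	for _, m := range g.Moves() {
		if s := Minimax(g.Apply(m), depth-1, true); s < best {
			best = s
		}
	}
	return best
}
```

The AI then plays whichever legal move leads to the position with the best Minimax score, instead of grabbing the first free move it finds.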
There have been complaints about it on Reddit as well. I registered an account recently and to me the annoying thing is the constant "making sure you are not a bot" check. For now I see no reason to migrate, but I do admit Forgejo looks very interesting to self-host.
3. They have a UI, but anyone can also build one and the ecosystem is shared
I've been considering Gerrit for git-codereview, and Tangled will be interesting when private data / repos are a thing. Not trying to have multiple git hosts while I wait.
I, too, am extremely interested in development on Tangled, but I miss two features from GitHub - universal search and Releases. The web frontend of Tangled is so fast that I am still getting used to the speed, and jj-first features like stacked PRs are just awesome. Kinda reminds me of how Linux patch submission works.
Codeberg doesn't currently support any, but Forgejo, the software it runs on, is implementing support for ActivityPub. Codeberg will likely enable it once support is stable.
I moved (from self-hosted GitLab) to Forgejo recently, and for my needs it's a lot better, with a lot less hassle. It also seems a lot more performant (again, probably because I don't need a lot of GitLab's advanced features).
I've been contemplating this for almost two years. GitLab has gotten very bloated, and despite disabling a number of services in the config, it continues to require ever more compute and RAM; we don't even use the integrated Postgres database.
There are a few things that keep me on GitLab, but the main one is the quality of the CI/CD system and the GitLab runners.
I looked at Woodpecker, but it seems so Docker-centric, and we are, uh, not.
The other big gulf is issues and issue management. GitLab CE is terrible: weird limitations (no epics unless you pay), broken features, UX nightmares. But from the looks of it, Forgejo is even more lacking in this area? Despite this seeming disdain, the other feature we regularly use is referencing issue numbers in commits to tie work together easily. On this one, I can see the answer being "be the change - contribute this to Forgejo", and I'm certainly willing. Still, it's currently a blocker.
But my hope in putting this comment out there is that perhaps others have suggestions or insight I'm missing?