I'm working on a new 1v1 Scrabble/Wordle-style game - iOS and Android versions are cooking as well, thanks to Expo.dev. A friend described it as "Scrabble that doesn't drag", and I've had a few friends and family members playing hundreds of games (and especially the daily game) over the last few weeks, which has been really encouraging. Play here:
1 hour 29 minutes seems excessive to extract the astronauts; if any of them _did_ have a medical issue they'd be in for a long wait.
The commentary said that the initial problems with the boats approaching Integrity were due to an unexpected swell. Unexpected, in the Pacific?
Edit: all of the Apollo missions, except 8, had their stabilization collars inflated in under 20 minutes. With Integrity today it took nearly an hour more.
I also like how they waffled on about how winching them up to a helicopter was the fastest option, when they obviously could have shaved an hour off the recovery time by simply having them step out onto the waiting boats!
Having worked for various government agencies for a while I've learned to recognise the signs of the "We're following the procedure whether it makes sense or not, dammit!" attitude you get with large bureaucracies.
I wondered about that. Winching someone who can barely walk and is wearing a spacesuit into a helicopter over choppy water is safer and quicker than parking them on a motor boat and sailing back to the mothership?
What was the real reason? Tradition? Lack of imagination? Photo opportunities?
To play devil's advocate against my own argument: The nearest ship was about 5 km away, which is a decently long boat ride. In choppy waters with a small boat that could be less than ideal for someone who may be injured, weak from an extended stay in microgravity, etc. I assume the plan -- written months or years before the landing -- also had to factor in the possibility that the ships wouldn't have been so close. They did mention several times that the landing was unusually accurate, so it is entirely possible that their pre-planned helicopter ride would have made a lot more sense if they were, say, 20+ km away instead. You don't want dozens of people improvising the procedure in the middle of choppy waters with bad comms, so the best thing to do is to just follow the plan, even if it looks a bit absurd on camera.
100%. Easy to criticize this, but you have to remember these are the people who planned and executed a successful moon mission. Pretty sure they know what they are doing and have thought about things in more than just a passing way.
“Stepping” from one vessel to another in the middle of the ocean is not like getting on your buddy’s sailboat at the marina even if you have your sea legs. Astronauts don’t even have their earth legs when they splash down; when they return from ISS they can’t even walk right away, though Artemis was a shorter duration mission than that.
I imagine if there was a medical emergency they'd worry less about capsule recovery and safe shutdown. IIRC because the sat phone wasn't working, they had to wait an extra 15 mins to power down the capsule (I guess so they could use its radios?). In an emergency I imagine they'd just leave it as-is
They don't allow embroidered company logos unless you're a B-corp or a non-profit, IIRC. If you see someone wearing Patagonia tech swag with a company logo on it, it's either old gear or someone in their company is a major asshole.
Allegedly it was to keep people from trashing the garments when they left the company. I have a few nice jackets I really don't wear because I don't represent that organization, and they are way less valuable at thrift.
The rules only apply if you want bulk discounts from Patagonia; you're right that you can just buy the clothes at retail and do whatever you want to them.
The assholery aspect is more personal to me, I think. I like that even though they're not exactly a grassroots cottage gear maker anymore, Patagucci actually enables and encourages secondhand use via their worn gear and repair programs and messaging. I try to give space to people and orgs who are trying to do things thoughtfully, so if someone goes out of their way knowingly to disrespect that thoughtfulness, yeah, I find that distasteful.
I'm not sure if that put a dent in the finance bros' style. Finance bros can of course still buy a bunch of the vests and have a third party do the custom logo for them.
One of my favorite youtube channels right now is What's Going On With Shipping, hosted by a former merchant mariner. Here's a 101 primer if you are learning too:
Sounds like DynamoDB is going to continue to be a hard dependency for EC2, etc. I at least appreciate the transparency and hearing about their internal systems names.
I think it's time for AWS to pull the curtain back a bit and release a JSON document that shows a list of all internal service dependencies for each AWS service.
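To make the idea concrete, here's a minimal sketch of what such a dependency document and a consumer of it might look like. The JSON shape and the service names are purely hypothetical (AWS publishes nothing like this today); the point is that even a flat "service → direct dependencies" map would let customers compute the transitive blast radius of an outage:

```python
import json

# Hypothetical shape such a document might take: each service mapped
# to the internal services it depends on directly. Names are illustrative.
DEPS_JSON = """
{
  "EC2": ["DynamoDB", "IAM"],
  "Lambda": ["EC2", "DynamoDB"],
  "DynamoDB": [],
  "IAM": []
}
"""

def transitive_deps(graph, service):
    """Return every service reachable from `service` via dependency edges."""
    seen, stack = set(), [service]
    while stack:
        for dep in graph.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

graph = json.loads(DEPS_JSON)
# Everything Lambda ultimately relies on, direct or indirect:
print(sorted(transitive_deps(graph, "Lambda")))  # ['DynamoDB', 'EC2', 'IAM']
```

With data like this, "is my stack exposed if DynamoDB goes down?" becomes a one-line graph query instead of post-incident archaeology.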
I don’t use AWS or any other cloud provider. I’ve used bare metal since 2012. See, in 2012 (IIRC), one fateful day, we turned off our bare metal machines and went full AWS. That afternoon, AWS had its first major outage. Prior to that day, the owner could walk in and ask what we were doing about it. That day, all we could do was twiddle our thumbs or turn on a now outdated database replica. Surely AWS won’t be out for hours, right? Right? With bare metal, you might be out for hours, but you can quickly get back to a degraded state, no matter what happens. With AWS, you’re stuck with whatever they happen to fix first.
Meanwhile I've had bare metal be a complete outage for over a day because a backhoe decided it wanted to eat the fiber line into our building. All I could do was twiddle my thumbs because we were stuck waiting on another company to fix that.
Could we have had an offsite location to fail over to? From a technical perspective, sure. Same as you could go multi-region or multi-cloud or turn on some servers at hetzner or whatever. There's nothing better or worse about the cloud here - you always have the ability to design with resilience for whatever happens short of the internet on the whole breaking somehow.
+1, SREs can spend months during their onboarding basically reading design docs and getting to know about services in their vicinity.
Short of publicly releasing all internal documentation, there's not much that can make the AWS infrastructure reasonably clear to an outsider. Reading and understanding all of this also would be rather futile without actual access to source code and observability.
They should at least split off dedicated isolated instances of DynamoDB to reduce blast radius. I would want at least 2 instances for every internal AWS service that uses it.
I mean, something has to be the baseline data storage layer. I’m more comfortable with it being DynamoDB than something else that isn’t pushed as hard by as many different customers.
I grew up close to Niagara Falls and my dad was a firefighter there for ~30 years who had to practice rappelling down the gorge to save people (which sadly happens too frequently).
His favorite story from the last time they "shut the falls off" was that they found tons of loose change in the rocks around the rapids - people were racing to get it and bringing back buckets of money. (Of course, they also found a few bodies as well...)
https://wordtrak.com/
If you're enjoying it, please leave me some feedback: https://discord.gg/pFjEcbQsv