I don’t understand why most cloud backend designs seem to strive for maximizing the number of services used.
My biggest gripe with this is async tasks, where the app goes through numerous hijinks to avoid the 10-minute Lambda processing timeout. Rather than restructuring the work into many small, independent batches, or simply using a modest container to do the job in a single shot, a myriad of intermediate steps is introduced to write data to DynamoDB/S3/Kinesis, plus SQS and coordination.
A dynamically provisioned, serverless container with 24 cores and 64 GB of memory can happily run data transformations over gigabytes of input.
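As a rough sketch of what that single-shot version can look like (bucket names, prefixes, and the transform itself are placeholders, not a prescription):

    # Single-shot batch job for a Fargate-style container: stream objects
    # from S3, transform, write results back. No queues, no checkpoint
    # tables, no fan-out. Bucket and prefix names are hypothetical.
    import boto3

    s3 = boto3.client("s3")

    def transform(data: bytes) -> bytes:
        # Placeholder for the actual CPU-bound transformation.
        return data.upper()

    def run_batch(bucket: str, in_prefix: str, out_prefix: str) -> None:
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=in_prefix):
            for obj in page.get("Contents", []):
                body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
                out_key = out_prefix + obj["Key"][len(in_prefix):]
                s3.put_object(Bucket=bucket, Key=out_key, Body=transform(body))

    if __name__ == "__main__":
        run_batch("my-data-bucket", "raw/", "transformed/")

The whole job state lives in the process; if it dies, you rerun it, which is usually cheaper than maintaining the coordination machinery.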
The history of software has been that, once it becomes cheap enough, teams flood the market with "existing product" + x feature for y users, and then the market consolidates around a leader who does all features for all customers.
I’d bet that we skip SaaS entirely and go to Anthropic directly. This means the AI has to understand that there are different users with conflicting requirements, and that we all need the exact same copy of the burn-rate report.
No known mechanism, but cross-species comparisons would imply that the schedule evolved and is under some control mechanism.
Species that evolved before the Devonian period tend not to age, and instead grow throughout their entire lives. There is no mechanistic understanding of the wild variation in species lifespans.
So the natural question in these studies is what would happen if we simply told the muscles not to age this way. It’s plausible that this aging schedule evolved due to other factors, independent of the biological constraints. It’s also plausible that evolution removed some other components important for longer-lived stem cells.
Interestingly, the Devonian also appears to be the period in which fish started sporting limb-like appendages and muscle structures, and other animals started to explore land. Perhaps unlimited body growth doesn't work well for animals not entirely supported by water.
I misread that as the "Denisovan period" and found it interesting that in addition to Homo Floresiensis Hobbits, there might have been arbitrarily large Denisova Hominin giants. Oh well.
Do you have any advice for running this in a secure way? I’m planning on giving a molt a container on a machine I don’t mind trashing, but we seem to lack tools to read/write real-world stuff like email and Google Drive files without blowing up the world.
Is there a tool/policy/governance mechanism that can provide access to a limited set of Drive files, GitHub repos, calendars, email, and Google Cloud projects?
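For Drive at least, the closest mechanism I know of is a narrowly scoped OAuth client: the `drive.file` scope only grants access to files the app created or was explicitly handed, not the whole Drive. A rough sketch, with the client secrets file name as a placeholder:

    # Scoped Drive access via OAuth: drive.file restricts the agent to
    # files it created or that were explicitly opened/shared with it.
    # "agent_client.json" is a hypothetical client secrets file.
    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/drive.file"]

    flow = InstalledAppFlow.from_client_secrets_file("agent_client.json", SCOPES)
    creds = flow.run_local_server(port=0)

    drive = build("drive", "v3", credentials=creds)
    resp = drive.files().list(pageSize=10, fields="files(id, name)").execute()
    for f in resp.get("files", []):
        print(f["id"], f["name"])

That still doesn't cover GitHub, calendar, email, or cloud projects with one policy, which I think is the real gap.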
Not going to lie… reading this for a day makes me want to install the toolchain and give it a sandbox with my emails etc.
This seems like a fun experiment in what an autonomous personal assistant will do. But I shudder to think of the security issues when the agents start sharing API keys with each other to avoid token limits, or posting bank security codes.
I suppose time-delaying its access to email and messaging by 24 hours could at least avoid direct account takeovers for most services.
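A minimal sketch of that delay filter, assuming IMAP access (host and credentials are placeholders): only messages older than the cutoff ever reach the agent, so fresh 2FA codes and reset links stay out of sight.

    # 24-hour delay filter: surface only messages older than the cutoff.
    import email
    import imaplib
    from datetime import datetime, timedelta, timezone
    from email.utils import parsedate_to_datetime

    DELAY = timedelta(hours=24)

    def fetch_delayed(host: str, user: str, password: str):
        cutoff = datetime.now(timezone.utc) - DELAY
        with imaplib.IMAP4_SSL(host) as imap:
            imap.login(user, password)
            imap.select("INBOX", readonly=True)
            _, data = imap.search(None, "ALL")
            for num in data[0].split():
                _, msg_data = imap.fetch(num, "(RFC822)")
                msg = email.message_from_bytes(msg_data[0][1])
                if msg["Date"] is None:
                    continue  # undated mail stays hidden: fail closed
                sent = parsedate_to_datetime(msg["Date"])
                if sent.tzinfo is None:
                    sent = sent.replace(tzinfo=timezone.utc)
                if sent < cutoff:
                    yield msg  # old enough for the agent to see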
> But I shudder to think of the security issues when the agents start sharing API keys with each other
Today I cleaned up emails from 10 years ago. Honestly, looking at the stuff I found from back then, I'd shudder much, much more about an agent sharing 10+ year old mail content and forming a completely wrong image of me :-D
Tech companies also benefit from over-hiring. It gives them slack to absorb crises, and fuel to move fast on new growth. Eliminating the fear of layoffs allows employees to take more risks and explore.
The current crop of tech companies cutting staff is going to lead to a large number of dead giants. The staff who survive the layoffs will be risk-averse, and defensible in a job-cut situation. You see this in legacy firms where it takes 10 people to make a change, because each person holds a small slice of the permissions required to effect the change. This pattern is by design, as laying off any of the ten people on different teams would kill dozens of critical business processes.
It’s becoming clear that training a frontier model is a capex/infra problem. This problem involves data acquisition, compute, and salaries for the researchers familiar with the little nuances of training at this scale.
For the same class of model, you can train on more or less the same commodity datasets. Over time these datasets become more efficient to train on, as errata are removed and the data gets cleaner. The cost of dataset acquisition can be amortized, and sometimes drops to zero as the dataset is open sourced.
Frontier models mean acquiring fresh datasets at unknown costs.
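To make the amortization point concrete, with entirely invented numbers (the shape of the curve is what matters, not the figures):

    # Toy illustration: a one-time dataset cost spread over successive
    # same-class training runs. The $50M figure is hypothetical.
    dataset_cost = 50_000_000  # one-time acquisition + cleaning, dollars

    for runs in (1, 2, 5, 10):
        print(f"{runs:>2} training runs -> "
              f"${dataset_cost / runs / 1e6:,.1f}M per run")
    # ...and once the dataset is open sourced, the marginal cost for
    # the next team drops to ~0.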
This... doesn't seem like such a terrible deal? At the purported growth rates, you'd expect OpenAI to reach $60-100 billion in revenue by 2028. This is more or less the equivalent of building a new AWS.
Provided they keep cost growth slower than revenue growth, and don't get disrupted by another model provider, commoditization, etc.
Nonsense. To give you a sense of how much $100B in revenue is: that would be the equivalent of every person in the United States paying $25/mo. Obviously that’s not happening, so how many businesses can and will pay far more than that, when there are also Anthropic and Gemini offerings?
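Quick sanity check on that per-person figure, assuming a US population of roughly 335 million:

    # Back-of-envelope: does $25/person/month over the US population
    # actually land near $100B/year? Population figure is approximate.
    us_population = 335_000_000
    per_person_per_month = 25  # dollars
    annual = us_population * per_person_per_month * 12
    print(f"${annual / 1e9:.0f}B per year")  # -> roughly $100B per year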
> when there are also Anthropic and Gemini offerings?
For average people, global competitors are putting up near-identical services at 1/10th the cost. Anthropic, Google, and OpenAI may have a corporate sales advantage from their security posture and domestic alignment, or from being 5% better at some specific task, but the populace at large isn't going to cough up $25 for that difference. Beyond the first month or two of the novelty phase, it's not apparent the average person is willing to pay for AI services at all.
I think it could get there with business alone, and also with consumer alone given the hardware, shopping, and ads angles. It’s an everything business and nobody on HN seems to understand that.
It really is just a collection of several dozen research-grade implementations of algorithms, plus a small handful of load-bearing algorithms for the entire internet. Surprisingly, OpenSSL isn't the only critical piece of internet architecture like this.
Maybe this is what blindsides most developers into disregarding the threat of AI to their jobs. We work off some idealised version of what the industry actually is, which we presume AI will fail at, instead of the reality.
I remain surprised at how long people in enterprise can flog horses I figured would be dead decades earlier. Too scared to fix fundamental issues, and still running off the fumes of vendor lock-in, with exasperated end users.
I worry that software and the industry are more resistant than we might imagine. Consider the insanity of Elon Musk's arbitrary cuts to Twitter, and the resilience of that platform in the years that followed.
It might simply be the case that buying more tokens and kicking the code enough times gives a "good enough" result for the industry to continue. I don't want to believe this, but the discussion of how awful the OpenSSL codebase is seems to suggest that might be the case. You just need to automate the process of caution we have around it. We should all be hoping that Gastown fails, but I feel like it might succeed.
The insanity is how he enacted them. Like the idea that everyone should come to his office with printouts of the code they've written, or that everyone has to come into HQ to do some all-nighters. Just an absurd hunger-games attitude to his workforce, full of horrific cognitive biases and discrimination against parts of the workforce (e.g. against those with young children, or those with disabilities who might be less able to commit to all-nighters).
There was an article on here 15ish years ago to the effect of "everything's broken all the time. Everyone who writes software knows it, yet we all tolerate it."
I'd love to find that sometime. Maybe it's time to ask Gemini once again to look for me.
Honestly, this is absurdly funny, but it makes me wonder whether we'll ever take Computer Science and Computer Engineering as seriously as other branches of STEM. I've been debating recently whether I should keep working in this field, after years of repeatedly seeing incompetence and complacency create disastrous effects in the real world.
Oftentimes, I wonder if the world wouldn't be a bit better without the last 10 or 15 years of computer technology.
This is really something that’s making me quite fed up with the industry. I’m looking towards embedded and firmware, in hopes that the lower in the stack I go, the more people care about these kinds of things out of business necessity. But even then, I’m unsure I’ll find the rigor I’m looking for.
I’ve been thinking the same thing lately. It’s hard to tell if I’m just old and want everyone off my lawn, but I really feel like IT is a dead end lately. “Vintage” electronics are often nicer to use than modern equivalents. Like dials and buttons vs touch screens. Most of my electronics that have LCDs feel snappy, and you sort of forget that you’re using them and just do what you were trying to do. I’m not necessarily a Luddite. I know tech _could_ theoretically be better, but it’s distressing to know that it’s apparently not possible for things to be different, for whatever reasons. Economically, culturally? I don’t know.
Is it still a critical piece? I thought most everyone migrated to LibreSSL or BoringSSL after the Heartbleed fiasco, once serious people took a look at OpenSSL and started to understand the horror show that is the codebase, along with development practices that clearly have not gotten better, if not gotten even worse.