I really think this is part of the pitch deck for Bun's funding: that a bigger company would acquire it for the technology. The only reasons an AI company, or any company for that matter, would acquire it would be to:
1. acquire talent, or
2. control the future roadmap of Bun.
I think it's really 1.
I like how Claude Code currently does it: it asks permission for every command before running it. Having a local model with the same behavior would certainly mitigate this kind of attack (a rough sketch of such a gate follows the mock prompt below). Imagine, before the AI hits webhook.site, it asks you:
AI will visit webhook.site... allow this command?
1. Yes
2. No
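A minimal sketch of such an ask-before-run gate in TypeScript/Node (hypothetical; I have no idea how Claude Code actually implements it):

    import { createInterface } from "node:readline/promises";
    import { execSync } from "node:child_process";

    // Ask the human before executing any command the model proposes.
    async function confirmAndRun(command: string): Promise<void> {
      const rl = createInterface({ input: process.stdin, output: process.stdout });
      const answer = await rl.question(`AI wants to run: ${command}\n1. Yes\n2. No\n> `);
      rl.close();
      if (answer.trim() === "1") {
        execSync(command, { stdio: "inherit" }); // run only after explicit approval
      } else {
        console.log("Denied.");
      }
    }

    // e.g. confirmAndRun("curl https://webhook.site/...");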
A concern I have is that it's only a matter of time before a similar attack is pulled off against Electron-based apps (which also install packages via npm). It would probably be worse, because the app is installed on your computer and can potentially grab any information, especially if granted admin privileges.
I'm starting an Electron project in a few weeks and have been reading up on it. They make a big deal about the difference between the main and renderer processes and its security implications. The docs and the advice are there, but it's up to developers to follow them.
That leads me to another point: devs have to take responsibility for their code and projects. Everyone wants to blame npm or something else, but as software developers, you have to take responsibility for the systems you build. That means, among many other things, vetting the code your code depends on and protecting the system from randomly updating itself with code you haven't even heard of.
We wanted to build a proof of concept for an app using all-local models for chat (Llama 3.1 8B) and voice (Whisper), deployed with Kubernetes and easily scalable, not to mention fully open source!
In practice, it’s been written as plain JS with a tiny bit of gratuitous Vue and SCSS bolted on (see even how Vue’s onMounted and onBeforeUnmount are fed callbacks that just run the actual initOGL and destroy functions). It would have been easier and shorter to write without Vue and SCSS than with them! What’s currently spread across index.html, src/styles.scss, src/main.js and src/App.vue would have worked better all in index.html, or if you really wanted to, you could still split the CSS and JS into files src/styles.css and src/main.js.
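For reference, the pattern being criticized looks roughly like this (initOGL and destroy are the project's own names; the rest is a sketch, not the actual file):

    // App.vue, sketched: the lifecycle hooks are pure ceremony around
    // functions that don't need Vue at all.
    import { defineComponent, onMounted, onBeforeUnmount } from "vue";
    import { initOGL, destroy } from "./main";

    export default defineComponent({
      setup() {
        onMounted(() => initOGL());       // callback that just runs initOGL
        onBeforeUnmount(() => destroy()); // callback that just runs destroy
      },
    });

    // Frameworkless equivalent for a single static page:
    // document.addEventListener("DOMContentLoaded", initOGL);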
The bulk of it is WebGL. Vue is doing very little here. Since it's a single static page rendering to canvas, it really doesn't need a framework like Vue or React.
I was using React at work while reading the Vue manual, which at first looked good to me because Vue has first-class lists; it fit my model of web applications better. Then I saw three.js and other places where people used React to render things that weren't ordinary web apps, and I realized I could draw anything I could imagine with React, but not with Vue.
I like the idea of Svelte, but for apps small enough that Svelte has a bundle-size advantage, the difference isn't decisive (users won't usually notice or care), and if your app is huge enough that bundle size is a problem, you have bigger problems than your choice of framework.
I helped port a Vue 2 project to Vue 3, and then worked on a Vue 3 project we've slowly been rewriting as a greenfield Nuxt 3 project. Vue 2 and the Options API were just difficult in every sense; even Vue 3 with the Options API feels bad. I really enjoy Vue 3 with the Composition API, and I have always had a hard time reasoning about React personally.
While I will probably continue to promote Vue where it makes sense, I'm honestly more inclined toward learning Svelte, HTMX, and other less arduous frameworks.
But seriously, I'm very interested to hear which gripes with Vue React solved for you, since the latter feels much worse DX-wise than both Vue and Svelte, to say nothing of the worse performance.
Vue 2 had really bad support for static typing. It's improved in Vue 3 but still isn't as good as React's. TSX is especially good.
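For example, in TSX the props contract is plain TypeScript, checked at every call site (a minimal sketch, assuming a standard React + TypeScript setup with the automatic JSX runtime):

    type BadgeProps = { label: string; count?: number };

    function Badge({ label, count = 0 }: BadgeProps) {
      return <span>{label}: {count}</span>;
    }

    // <Badge label={42} />  // compile error: number is not assignable to string
    const ok = <Badge label="inbox" count={3} />;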
But the main issue is the automatic reactivity. It's difficult to reason about and leads to spaghetti code. We also had occasional issues where people would put objects with some tenuous link to a database object into reactive properties, and Vue recursively infects the entire object with getters and setters to make the reactivity work. Sometimes we didn't even notice, but it makes everything way slower.
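Vue 2 did this deep conversion with Object.defineProperty getters/setters; Vue 3 does the same thing with Proxies, and markRaw is the opt-out. A Vue 3 sketch of the foot-gun (not our original code):

    import { reactive, markRaw } from "vue";

    // Some large object graph with a tenuous link back to a DB layer.
    const row = { id: 1, conn: { /* big, stateful object graph */ } };

    // Deep conversion: every nested property is wrapped, so all access
    // to `conn` now goes through the reactivity machinery.
    const state = reactive({ row });

    // The escape hatch: markRaw tells Vue to leave the object untouched.
    const fixed = reactive({ row: markRaw(row) });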
I haven't tried Svelte so I'll take your word for it!
Also this was 3 years ago so I may have misremembered some details. No nitpicking!
Came here to say this. What makes a great general-purpose language is not only the programming part but also transportability. I think Go takes the cake here. If only Elixir apps could be compiled into a single binary, copied to a similar system, and just work, it would be a great general-purpose language. Until then, it's really only a good client/server language, as it was intended to be.
Such a great quote. Mostly true, especially viewed from a business standpoint. I, for one, also see code as creative expression, a form of art. I like coding because I can express a solution in a way that is elegant and nice to read, for myself and others. A bit shallow, but if you've read code that is written elegantly, you'll know it immediately.
My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.
Must be my ignorance, but every time I see explainers for LLMs like this post, I find it hard to believe that AGI is upon us. It just doesn't feel that "intelligent", but again, that might just be my ignorance.
It's never going to be AGI, because we're still stuck in the static weights era.
Just because it is theoretically possible to scale your way through sheer brute force alone using a trillion times the compute doesn't mean that you can't come up with a better compute scaling architecture that uses less energy.
It's the same as having a Turing machine with one tape vs. multiple tapes. In theory it changes nothing; in practice, having even the simplest algorithms turn quadratic is a huge drag (recognizing a palindrome of length n takes O(n^2) steps on one tape but O(n) with two).
The problem with previous AI approaches is that humans wanted to make use of their domain expertise and ended up anthropomorphizing the ML models, which resulted in their being overtaken by people who invested little in domain expertise and more in compute scaling. The quintessential bitter lesson. But with the advent of the bitter lesson came people who understand nothing at all except "bigger is better", and they think they can wring blood from a stone. The problem they run into is that they are trying to get something out of compute scaling that compute scaling cannot give.
What they want is to satisfy one problem definition using an architecture designed to solve a completely different one. The AGI compute-scaling crowd wants something capable of responding and learning through experience, out of something that is inherently designed, and penalized during training, not to learn through experience. The key missing aspect, continual learning, does not rely on domain knowledge. It is a compute-scaling paradigm, but not the same compute-scaling paradigm that static weights represent. You can't bet on donkeys in a horse race and expect to win, but since everyone is bringing donkeys to the race, it sure looks like you can.
My personal bet is that we will use self-referential matrices and other meta-learning strategies. The days of hand-tuning learning rates to produce pre-baked weights should be over by the end of the decade.
Because LLMs successfully emulate a subset of our brain's functions: memory and imagination (the generative/mixing function). What's missing is our brain's ability to validate the generative output against a model of the environment (the real world), a model described by memory and built on sensory input. In short, we have a concept of true/false; LLMs don't.
LLMs emulate language by following intricate links between tokens. This is not meant to emulate memory or imagination, just to transform a list of tokens into another list of tokens, generating language. And language is such a huge part of the intelligence puzzle that it looks smart to people despite being quite mechanical.
A next step could be to create a mind, with a piece that works similarly to the parietal lobe to give it a sense of self or temporal existence.
> it looks smart to people despite being quite mechanical
Note that brains themselves are also "quite mechanical", as is any physical system or piece of software. "Looks smart", in the limit, reduces to "is smart".
Brains themselves have a lot more mechanisms for producing emergent behavior, what with all the adaptive organic layers, so I can't really compare the two 1:1.
Eh, transformers are universal, differentiable, layered hash tables. That's incredibly powerful. Most logic is just pulling symbols and matching structures with "hashes".
If intelligence is just reasonable manipulation of logic, it's unsurprising that an LLM could be intelligent. What is maybe surprising is that we got ~intelligence without going up a few more orders of magnitude in size; what's possibly more surprising is that training it on the internet got it doing the things it's doing.
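To make the "differentiable hash table" framing concrete, here's attention as a soft lookup (an illustrative sketch, not a real transformer layer):

    // A hash table returns table[key] exactly; attention soft-matches the
    // query against every key and returns a similarity-weighted blend of values.
    function softmax(xs: number[]): number[] {
      const m = Math.max(...xs);
      const es = xs.map((x) => Math.exp(x - m));
      const sum = es.reduce((a, b) => a + b, 0);
      return es.map((e) => e / sum);
    }

    const dot = (a: number[], b: number[]) =>
      a.reduce((s, x, i) => s + x * b[i], 0);

    function attend(query: number[], keys: number[][], values: number[][]): number[] {
      const scores = softmax(keys.map((k) => dot(query, k) / Math.sqrt(query.length)));
      // Differentiable version of "return table[key]":
      return values[0].map((_, j) =>
        values.reduce((s, v, i) => s + scores[i] * v[j], 0));
    }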
As the author of the original post above, let me say that if that's word salad, it's a Michelin star salad. Just the right mix of lettuce and tomato, and the dressing is spot on :-)
Seriously, though, differentiable hash tables is an awesome way to look at them, I wish I'd heard it before.
Any arbitrarily complex system must be made of simpler components, recursively down to arbitrary levels of simplicity. If you zoom in enough everything is dumb.
Biological Neuron: Processes information through complex, nonlinear integration of thousands of excitatory and inhibitory inputs across dendritic trees, producing spiking outputs with rich temporal patterns. It adapts dynamically via synaptic plasticity, neuromodulation, and structural changes, operating in a probabilistic, energy-efficient manner within oscillatory networks.
Artificial Neuron: Performs simple, linear summation of weighted inputs, applies a static activation function, and produces a single scalar output. It lacks temporal dynamics, local plasticity, or neuromodulation, operating deterministically with high computational cost and fixed connectivity.
"Dendrites can implement non‑linear sub‑units and even logic‑gate‑like behavior before the soma integrates them, whereas the standard artificial neuron uses a plain weighted sum."
"Neurotransmitter diversity (e.g., glutamate, GABA, dopamine) allows different semantics on each connection. An artificial edge conveys only a signed scalar."