I think what I will miss most about TikTok is its ability to teach me things I would otherwise never have taken an interest in. I've discovered so many musicians, writers, historians and niche curators; it's such a shame that this is going away. I'm still surprised that somehow no other company has been able to compete with how well TikTok has done discovery.
Well, to make local happen I'd have to learn more about local app development.
I'd also be worried about having to support a bunch of different platforms, and being beholden to ever changing rules made by App Stores and OS makers. I actually work on a 2015 Mac with a 2019 operating system. There are many great looking AI apps that I'd love to run but can't.
Besides, it seems to me that making this centralized makes economic sense. I can just keep the GPU busy with lots of videos from many customers. I'm sure that's what most people think who build something: "The world would be so much better if everyone just came here and used this." :)
Second this. I've written my own private Google Apps script specifically for this view and I wish more calendar apps would do the same. Maybe I haven't looked hard enough.
Did we hit some sort of technical inflection point in the last couple of weeks, or is it just coincidence that all of these ML papers on high-quality procedural generation are dropping every other day?
From the abstract: “We introduce a loss based on probability density distillation that enables the use of a 2D diffusion model as a prior for optimization of a parametric image generator. Using this loss in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss.”
This seems like basically plugging a couple of techniques together that already existed, letting you turn 2D text-to-image into text-to-3D.
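For the curious, here's a rough sketch of what that combination looks like as I read the abstract: render the NeRF from a random camera, noise the rendering, and use the frozen 2D diffusion model's noise prediction as a gradient signal. The render_nerf and diffusion_eps callables (and the timestep weighting) are made-up stand-ins for illustration, not the paper's actual code:

    # Rough sketch of the loss described in the abstract. `render_nerf` and
    # `diffusion_eps` are hypothetical stand-ins for a differentiable NeRF
    # renderer and a frozen text-conditioned 2D diffusion model.
    import torch

    def distillation_step(optimizer, nerf_params, camera, text_emb,
                          render_nerf, diffusion_eps, alphas_cumprod):
        image = render_nerf(nerf_params, camera)       # (1, 3, H, W), differentiable

        # Pick a random diffusion timestep and noise the rendering.
        t = torch.randint(low=20, high=980, size=(1,))
        a_bar = alphas_cumprod[t].view(1, 1, 1, 1)
        noise = torch.randn_like(image)
        noisy = a_bar.sqrt() * image + (1.0 - a_bar).sqrt() * noise

        # The frozen 2D model scores the noisy rendering; no gradients flow into it.
        with torch.no_grad():
            eps_pred = diffusion_eps(noisy, t, text_emb)

        # Distillation gradient w(t) * (eps_pred - noise), applied to the rendering.
        grad = (1.0 - a_bar) * (eps_pred - noise)
        loss = (grad.detach() * image).sum()           # d(loss)/d(image) == grad

        optimizer.zero_grad()
        loss.backward()                                # gradients reach only the NeRF
        optimizer.step()

The trick in the last lines just makes the gradient of the scalar loss with respect to the rendering equal the distillation gradient, so autograd pushes it into the NeRF parameters without ever backpropagating through the diffusion model.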
> This seems like basically plugging a couple of techniques together that already existed [...]
In his Lex Fridman interview, John Carmack makes similar assertions about the prospect of AGI: that it will likely be the clever combination of existing primitives (plus maybe a couple of novel ones) that makes the first AGI feasible in just a couple thousand lines of code.
That was a great interview. I really liked his perspective on how close we are to having AGI. His point is that there are only a few more things we need to figure out, and then it will basically happen.
I also liked the analogy he made with his earlier work on 2D and 3D graphics engines, where taking a few shortcuts basically got him on a path to success. For a while we had this "almost" 3D capability long before the hardware was ready to do 3D properly. It's the same with AGI. A few shortcuts will get us AI that is pretty decent and can already do some impressive things - as witnessed by the recent improvements in image generation. It's not a general AI, but it has enough intelligence that it can still produce photorealistic images that make sense to us. There's a lot of that happening right now, and just scaling that up is going to be interesting by itself.
That's a great example that reminds me of another one: there was nothing new about Bitcoin conceptually, it was all concepts we already had just in a new combination. IRC, Hashing, Proof of Work, Distributed Consensus, Difficulty algorithms, you name it. Aside from Base58 there wasn't much original other than the combination of those elements.
Hello Stavros, I agree. When I look at the goals that Base58 sought to achieve (eliminating visually similar characters), I can't help but wonder why more characters were not eliminated. There is quite a bit of typeface androgyny when you consider case and face.
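For anyone who hasn't looked at it, the alphabet is just the digits and letters with 0, O, I and l dropped. A minimal sketch of the encoding (skipping the leading-zero "1" padding that the full Bitcoin version does):

    # Bitcoin's Base58 alphabet: digits and letters minus the look-alikes 0, O, I, l.
    ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def base58_encode(data: bytes) -> str:
        n = int.from_bytes(data, "big")
        out = ""
        while n > 0:
            n, rem = divmod(n, 58)
            out = ALPHABET[rem] + out
        return out or "1"

    print(base58_encode(b"hello"))  # Cn8eVZg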
Billions of creatures with stronger neural networks, more parameters and better input have lived on earth for millions of years, but only now has something like humans shown up. I fully expect AI to do everything animals can do pretty soon, but since whatever it is that differentiates humans didn't happen for millions of years, there's a good chance AGI research will get stuck at a similar point.
Nature has the advantage of self-organisation and (partially because of that) parallelism, which has proved hard to mimic in man-made devices. But on the other hand, nature also has obstacles such as energy consumption, procreation and development, and survival that AI doesn't have to worry about.
I think finding a niche for humans has proved difficult especially for those reasons, and AI can clear those hurdles much more easily.
It takes nature thousands of years to create a rock that looks like a face, just by using geology. A human can do that in a couple of hours. And then this AI can generate 50 3D human faces per second (assuming enough CPU).
It could be that AGI is around the corner, as they say. We might not be machines, but we are way faster than nature at getting places. We don't have the option of waiting thousands of years.
True (I made such a proposal myself a few hours ago, albeit in vaguer terms). The thing is that deployment infrastructure is good enough now that we can just treat it all as modular signal flows and experiment a lot, without having to engineer a whole pile of custom infrastructure for each impulsive experiment.
Same as it ever was: scientific revolutions arrive all at once, punctuating otherwise uneventful periods. As I understand it, the present one is the product of the paper "Attention is all you need": https://arxiv.org/pdf/1706.03762.pdf.
Time and time again, these ML techniques are proving to be wildly modular and pluggable. Maybe sooner or later someone will build an end-to-end text-to-effective-ML-architecture framework that just plugs different things together and optimizes them.
"You're posting too fast" is a limit that's manually applied to accounts that have a history of "posting too many low-quality comments too quickly and/or getting into flame wars". You can email dang (hn@ycombinator.com) if you think it was applied in error, but if it's been applied to you more than once... you probably have a continuing problem with proliferating flame wars or posting otherwise combative comments.
It has become clear since AlphaGo that intelligence is an emergent property of neural networks. Since then, the time and cost required to create a useful intelligence have been coming down. The big change was in August, when Stable Diffusion was able to run on consumer hardware. Things were already accelerating before August, but that has really kicked up the speed, because millions of people can play around with it and discover intelligence applications, especially in the latent space.
The number of researchers and research labs has scaled up to the point that there are now many well-funded teams with experience.
Public tooling and collaboration have reached a point where research happens across the open internet, between researchers, at a pace that wasn't possible before (Common Crawl, Stable Diffusion, Hugging Face, etc.).
All the techniques that took years in small labs to prove viable are now getting scaled up across data and people in front of our eyes.
My hot take is that we're merely catching up on hardware improvements that had until recently gone unutilized. There's nothing 'self-improving' here; it's largely "just" scaled-up methods, or new, clever applications of scaled-up methods.
The pace at which methods scale up is currently a lot faster than hardware improvements, so unless these scaled-up methods become incredibly lucrative (not impossible), I think it's quite likely we'll see a slowdown soon-ish (a couple of years from now).
Whose full paper submission deadline was also 2 days ago.
This should be further up than all the speculation about AI accelerationism. There's a very simple explanation for why a lot of awesome papers are coming out right now: prestigious conference paper deadlines.
This isn't what is usually meant by "technological singularity". That is an inflection point where technological growth becomes uncontrollable and unpredictable, usually theorized to be caused by a self-improving agent (/AI) that becomes smarter with each of its iterations. This is still standard technological progress, under human control, even if very fast.
It’s basically when AI starts self-improving. I think this started with large language models; they are central to these developments. Complete autonomy is not required for AGI, nor therefore for the singularity.
It's not really "human controlled". It's an evolutionary process, researchers are scanning the space of possibilities, each with a limited view, but in aggregate it has an emergent positive trend.
The other part is the community. People who had backed up their libraries. Devices found on eBay that still had a bunch of apps on them. People who had hoarded IPKs and had them saved on hard drives and were willing to share. That and lots of scraping Google by filename...
I never thought I'd see Vaporwave mentioned on HN. Any fans of Bandcamp here? I really feel like it's helped keep a lot of these microgenres going, even more than Spotify.
My response to this will always be: whoever solves live music performance (as in two or more musicians in two different locations happily playing together over the internet) will win the video conferencing wars.
It's not just latency; it's the non-verbal cues, dynamics (including silence that often represents acknowledgement), "energy" and rhythm that I miss about in-person communication.
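Even the latency part alone is brutal. Some back-of-the-envelope numbers (my own rule-of-thumb figures, not anything from the thread):

    # Rule-of-thumb figures, not measurements: ensembles are usually said to
    # cope with roughly 25 ms of one-way delay before timing falls apart.
    SPEED_OF_SOUND_M_S = 343            # in air, ~20 C
    TOLERABLE_ONE_WAY_MS = 25

    # Playing with 25 ms of delay is like standing this far from the other player:
    equivalent_distance_m = SPEED_OF_SOUND_M_S * TOLERABLE_ONE_WAY_MS / 1000
    print(f"~{equivalent_distance_m:.1f} m apart")      # ~8.6 m

    # Light in fiber moves at roughly 200,000 km/s, so 1,000 km of distance
    # costs ~5 ms one-way before buffering, codecs and jitter buffers add more.
    fiber_one_way_ms = 1000 / 200_000 * 1000
    print(f"~{fiber_one_way_ms:.0f} ms propagation over 1,000 km of fiber")

So past a few hundred kilometres the physics already eats most of the budget, which is before you add the cues and "energy" problems above.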