jianshen's comments | Hacker News

I think what I will miss most about TikTok is its ability to teach me things I would otherwise never have taken an interest in. I've discovered so many musicians, writers, historians, and niche curators; it's such a shame that this is going away. I'm still surprised that no other company has managed to match how well TikTok does discovery.


Why do it when they can't monetize it?


Wow this is amazing. If there was a locally running version available, I would gladly pay money for it.


Thanks!

Well, to make local happen I'd have to learn more about local app development.

I'd also be worried about having to support a bunch of different platforms, and being beholden to ever-changing rules made by app stores and OS makers. I actually work on a 2015 Mac with a 2019 operating system. There are many great-looking AI apps that I'd love to run but can't.

Besides, it seems to me that making this centralized makes economic sense. I can just keep the GPU busy with lots of videos from many customers. I'm sure that's what most people who build something think: "The world would be so much better if everyone just came here and used this." :)


Second this. I've written my own private Google Apps Script specifically for this view, and I wish more calendar apps would do the same. Maybe I haven't looked hard enough.


I'm pleasantly surprised to see his name on HN. He had a table at NYCC this week and people have been leaving him gifts and messages. It's so sad.

https://pbs.twimg.com/media/FeZt8yNWIAE0HEZ?format=jpg&name=...

Edit: Photo Credit to @Leah617


Did we hit some sort of technical inflection point in the last couple of weeks, or is it just coincidence that all of these ML papers on high-quality procedural generation are dropping every other day?


From the abstract: “We introduce a loss based on probability density distillation that enables the use of a 2D diffusion model as a prior for optimization of a parametric image generator. Using this loss in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss.”

This seems like basically plugging a couple of techniques together that already existed, which makes it possible to turn 2D text-to-image into 3D text-to-image.
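
Roughly, the loop the abstract describes looks like this; a toy sketch assuming PyTorch, where render and eps_hat are hypothetical stand-ins for the NeRF renderer and the frozen text-conditioned diffusion model, and the noise schedule is omitted:

    import torch

    # Toy stand-ins: the "NeRF" is reduced to a learnable image tensor, and the
    # frozen "diffusion prior" just pulls pixels toward their mean.
    params = torch.zeros(1, 3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([params], lr=1e-2)

    def render(p, cam):           # placeholder for volume rendering from a random view
        return p * cam.sigmoid()

    def eps_hat(x, t, text_emb):  # placeholder for the frozen 2D diffusion model
        return x - x.mean()

    for step in range(100):
        cam = torch.randn(1)                      # random camera
        img = render(params, cam)                 # differentiable 2D render
        noise = torch.randn_like(img)
        noisy = img + noise                       # noise-schedule scaling omitted
        with torch.no_grad():
            pred = eps_hat(noisy, torch.randint(0, 1000, (1,)), None)
        # The distillation gradient (pred - noise) flows back through the
        # renderer only; the diffusion prior itself is never updated.
        img.backward(gradient=pred - noise)
        opt.step()
        opt.zero_grad()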


> This seems like basically plugging a couple of techniques together that already existed [...]

In his Lex Fridman interview, John Carmack makes similar assertions about the prospects for AGI: that it will likely be a clever combination of existing primitives (plus maybe a couple of novel ones) that makes the first AGI feasible in just a couple thousand lines of code.


That was a great interview. I really liked his perspective on how close we are to having AGI. His point is that there are only a few more things we need to figure out, and then it will basically happen.

I also liked the analogy he made with his earlier work on 2D and 3D graphics engines, where taking a few shortcuts basically got him on a path to success. For a while we had this "almost" 3D capability long before the hardware was ready to do 3D properly. It's the same with AGI. A few shortcuts will get us AI that is pretty decent and can do some impressive things already - as witnessed by the recent improvements in image generation. It's not a general AI, but it has enough intelligence that it can still produce photorealistic images that make sense to us. There's a lot of that happening right now, and just scaling that up is going to be interesting by itself.


That's a great example, and it reminds me of another one: there was nothing new about Bitcoin conceptually; it was all concepts we already had, just in a new combination. IRC, Hashing, Proof of Work, Distributed Consensus, Difficulty algorithms, you name it. Aside from Base58 there wasn't much original other than the combination of those elements.


Base58 really should have been base57.


Hello Stavros, I agree. When I look at the goals that Base58 sought to achieve (eliminating visually similar characters), I couldn't help but wonder why more characters weren't eliminated. There is quite a bit of typeface androgyny when you consider case and face.


Yeah, I don't know why 1 was left in there, seems like a lost opportunity. Discarding l, I, 0, O, but then leaving 1? I wonder why.


I can only assume it was for a superstitious reason, so that the original address prefixes could be a 1. That's the only sense I can make of it.
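
For reference, a minimal sketch (Python assumed) of the standard alphabet and a bare-bones encoder; Base58Check turns each leading 0x00 byte into a literal '1', and the mainnet address version byte is 0x00, which is how those prefixes end up starting with a 1:

    # Standard Bitcoin Base58 alphabet: 0, O, I and l are dropped; 1 is index 0,
    # so it doubles as the zero digit.
    ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def b58encode(data: bytes) -> str:
        n = int.from_bytes(data, "big")
        out = ""
        while n:
            n, rem = divmod(n, 58)
            out = ALPHABET[rem] + out
        # Each leading 0x00 byte is preserved as a literal '1'.
        pad = len(data) - len(data.lstrip(b"\x00"))
        return "1" * pad + out

    # A mainnet payload starts with version byte 0x00, so its encoding starts with '1'.
    print(b58encode(b"\x00" + b"\x12" * 24))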


Billions of creatures with stronger neural networks, more parameters, and better input have lived on Earth for millions of years, but only now has something like humans shown up. I fully expect AI to do everything animals can do pretty soon, but since whatever it is that differentiates humans didn't happen for millions of years, there's a good chance AGI research will get stuck at a similar point.


Nature has the advantage of self-organisation and (partially because of that) parallelism, which have proved hard to mimic in man-made devices. But on the other hand, nature also has obstacles such as energy consumption, procreation and development, and survival that AI doesn't have to worry about.

I think finding a niche for humans has proved difficult especially because of those constraints, and AI can clear those hurdles much more easily.


Change arrives gradually, and then suddenly.

It takes nature thousands of years to create a rock that looks like a face, just by using geology. A human can do that in a couple of hours. And then this AI can generate 50 3D human faces per second (assuming enough CPU).

It could be that an AGI is around the corner, as they say. We might not be machines, but we are way faster than nature at getting places. We don't have the option of waiting for thousands of years.


> This seems like basically plugging a couple of techniques together that already existed

as with a majority of ML research


True (I made such a proposal myself a few hours ago, albeit in vaguer terms). The thing is, deployment infrastructure is good enough now that we can just treat it as modular signal flows and experiment a lot without having to engineer a whole pile of custom infrastructure for each impulsive experiment.


Isn't that what the Singularity was described as a few decades ago? Progress so fast it's unpredictable even in the short term.


Same as it ever was: scientific revolutions arrive all at once, punctuating otherwise uneventful periods. As I understand it, the present one is the product of the paper "Attention Is All You Need": https://arxiv.org/pdf/1706.03762.pdf.


... that one has 52K citations, and the 2D-to-3D paper "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis" has 1488.

https://arxiv.org/abs/2003.08934


> as with a majority of ML research

Plus "we did the same thing, but with 10x the compute resources".

But yeah.


> This seems like basically plugging a couple of techniques together that already existed

Do this enough times and eventually the thing you have looks indistinguishable from something completely novel.


Time and time again these ML techniques are proving to be wildly modular and pluggable. Maybe sooner or later someone will build a framework for end-to-end text-to-effective-ML-architecture that will just plug different things together and optimize them.


I think this is what Hugging Face (GitHub for machine learning) is trying with the diffusers lib: https://huggingface.co/docs/diffusers/index

They have others as well.
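
For a sense of how pluggable it is, a minimal sketch, assuming the diffusers and torch packages, a CUDA GPU, and that the runwayml/stable-diffusion-v1-5 weights are available:

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a pretrained text-to-image pipeline and move it to the GPU.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Swapping schedulers or model components is mostly a matter of passing
    # different objects into the same pipeline interface.
    image = pipe("a photo of an astronaut riding a horse").images[0]
    image.save("astronaut.png")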


Fascinating stuff! But who is working on the text-to-ML-architecture thing?


Cool stuff. But who is working on the text-to-ML-architecture thing?


They're AI-generated; the singularity already happened, but the machines are trying to ease us into it.


Scary fn thought!

and I agree with you!

And the OP comment is by the magnanimous/infamous AnigBrowl

You need to start doing AI legal admin (I don't have the terms, but you may; we need legal language to control how we deal with AI)

and @dang - kill the gosh darn "posting too fast" thing

Jiminy Crickets, I have talked to you about this so many times...


"You're posting too fast" is a limit that's manually applied to accounts that have a history of "posting too many low-quality comments too quickly and/or getting into flame wars". You can email dang (hn@ycombinator.com) if you think it was applied in error, but if it's been applied to you more than once... you probably have a continuing problem with proliferating flame wars or posting otherwise combative comments.


I think you can also get it just by having unpopular opinions. Hacker News used to be much more nuanced than it is today, imo.


Uhm... did you even check my account age? (And my old one is two years older.)


It’s really easy to get throttled for a single comment out of thousands.


> Scary fn thought!

I'm a Kotlin programmer, so it's a scary fun thought for me.


It has become clear since AlphaGo that intelligence is an emergent property of neural networks. Since then, the time and cost required to create a useful intelligence have been coming down. The big change was in August, when Stable Diffusion became able to run on consumer hardware. Things were already accelerating before August, but that has really kicked up the speed, because millions of people can play around with it and discover intelligence applications, especially in the latent space.


SD is open source (genuinely open source), and the community has been having a field day with it.


We've hit a couple of inflection points.

The number of researchers and research labs has scaled up to the point that there are now many well-funded teams with experience.

Public tooling and collaboration have reached a point where research happens across the open internet, between researchers, at a pace that wasn't possible before (Common Crawl, Stable Diffusion, Hugging Face, etc.).

All the techniques that took years in small labs to prove viable are now getting scaled up across data and people in front of our eyes.


I think DALL-E really kicked things into high gear.


My hot take is that we're merely catching up on hardware improvements that until recently went unutilized. There's nothing 'self-improving' about it; it's largely "just" scaled-up methods, or new, clever applications of scaled-up methods.

The pace at which methods scale up is currently a lot faster than hardware improvements, so unless these scaled-up methods become incredibly lucrative (not impossible), I think it's quite likely we'll see a slowdown soon-ish (a couple of years from now).


Maybe the NeurIPS deadline, which is coming up?


This was submitted to ICLR


Whose full paper submission deadline was also 2 days ago.

This should be further up than all the speculation about AI accelerationism. There's a very simple explanation for why a lot of awesome papers are coming out right now: prestigious conference paper deadlines.


Partially coincidence, but also the ICLR submission deadline was yesterday, so now the papers can be public.


This has been going on for years. The applications are just crossing thresholds now that are more salient for people, e.g. doing art.


Conference season?


It’s called the technological singularity. Pretty fun so far!


This isn't what is usually meant by "technological singularity". That's an inflection point where technological growth becomes uncontrollable and unpredictable, usually theorized to be caused by a self-improving agent (/AI) that becomes smarter with each of its iterations. This is still standard technological progress, under human control, even if very fast.


It's basically when AI starts self-improving. I think this started with large language models. They are central to these developments. Complete autonomy is not required for AGI, nor therefore for the singularity.

Whatever it is, this is a massive phase shift.


It's not really "human controlled". It's an evolutionary process: researchers are scanning the space of possibilities, each with a limited view, but in aggregate it has an emergent positive trend.


THIS

WTF - the singularity is closer than we thought!!!


yay


I see an old Sudoku game I wrote in here, and it brings back some great memories, but I'm also scared to look at my old code now. O_O

Curious: how did you back all this up?


I think this mostly answers your question: https://news.ycombinator.com/item?id=31611208

The other part is the community. People who had backed up their libraries. Devices found on eBay that still had a bunch of apps on them. People who had hoarded IPKs and had them saved on hard drives and were willing to share. That and lots of scraping Google by filename...


Curious if there's more news on this. They came back for a short bit but now they're gone again...


I'm really happy to see Zhuangzi show up on HN, and it's a good reminder to go back and re-read his work from time to time.


I decided to take you up on that vicariously and spent most of the evening reading my copy again. How refreshing it is!

I'm sure you have your own copy, but here is my transcription of the story about Ting the cook from chapter 3:

https://arunkprasad.com/log/ting-the-cook-from-the-zhuangzi/


I never thought I'd see Vaporwave mentioned on HN. Any Bandcamp fans here? I really feel like it's done more than Spotify to keep a lot of these microgenres going.


My response to this will always be: whoever solves live music performance (as in two or more musicians in two different locations happily playing together over the internet) will win the video conferencing wars.

It's not just latency; it's the non-verbal cues, dynamics (including the silence that often signals acknowledgement), "energy", and rhythm that I miss about in-person communication.


Jamulus is pretty good. With a good audio card and a cabled connection, a 25 ms delay is achievable.

You're right. Talking on it feels much better than talking on the phone or video conference.
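
Back-of-the-envelope, the numbers do work out if everything is tight; every figure below is an illustrative assumption, not a measured Jamulus value:

    # Rough latency budget for playing together over the internet.
    block_ms = 128 / 48000 * 1000   # ~2.7 ms per 128-sample block at 48 kHz
    sound_io = 2 * block_ms         # capture + playback buffering
    codec    = 2 * block_ms         # encode + decode framing
    network  = 12                   # wired round trip to a nearby server, in ms
    print(round(sound_io + codec + network, 1))  # ~22.7 ms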

