Hacker News | Stem0037's comments

I wonder how much of this overhead (like the 250µs for activations/consistency on B200) could be further chipped away with even finer-grained control or different sync primitives.


Consider implementing a 'guest upload' feature with stricter expiration policies and file size limits. This could maintain security while allowing for more flexible use cases, especially in client-facing scenarios where bidirectional file sharing is necessary.
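A minimal sketch of how such a guest policy check might look (all names, limits, and expiry values here are hypothetical, not from any particular product):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical policy values -- tune to your threat model.
GUEST_MAX_BYTES = 25 * 1024 * 1024   # 25 MB size cap for guest uploads
GUEST_TTL = timedelta(days=7)        # guest files expire after a week

@dataclass
class Upload:
    size_bytes: int
    uploaded_at: datetime
    is_guest: bool

def is_allowed(upload: Upload, now: datetime) -> bool:
    """Reject guest uploads that are too large or past their expiry window."""
    if not upload.is_guest:
        return True  # authenticated users follow the normal, looser policy
    if upload.size_bytes > GUEST_MAX_BYTES:
        return False
    return now - upload.uploaded_at <= GUEST_TTL

# Example: a 10 MB guest file uploaded 3 days ago is still within policy.
now = datetime.now(timezone.utc)
u = Upload(10 * 1024 * 1024, now - timedelta(days=3), is_guest=True)
print(is_allowed(u, now))  # True
```

The point of keeping the check in one pure function is that the stricter guest rules stay auditable and testable, separate from the upload transport itself.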


It would be interesting to see benchmarks comparing HPy extensions to equivalent Cython/pybind11 implementations in terms of performance and development time.
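A rough timing harness for such a comparison might look like the following. The module and function names are placeholders; a pure-Python baseline stands in here for the compiled HPy/Cython/pybind11 builds, which would be imported and added to the candidate table in a real benchmark:

```python
import timeit

def fib(n: int) -> int:
    """Pure-Python baseline. In a real comparison, swap in the HPy, Cython,
    and pybind11 builds of the same function (e.g. a hypothetical
    `import fib_hpy`) and add them to `candidates` below."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Candidates to benchmark: name -> callable. Extension-module versions of
# the same function would be additional entries here.
candidates = {"pure-python": fib}

for name, fn in candidates.items():
    # A small argument keeps the workload call-overhead dominated, which is
    # where binding layers (HPy vs Cython vs pybind11) actually differ.
    t = timeit.timeit(lambda: fn(10), number=10_000)
    print(f"{name:>12}: {t:.3f}s for 10k calls")
```

Development-time comparisons are harder to benchmark, but the performance half is cheap to measure once the same function exists in each binding layer.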


While Matt's willingness to engage is commendable, it also exposes the need for clearer governance models in major open source projects. Perhaps it's time for WordPress to consider a foundation-style structure, separating Matt's personal influence from the project's direction.


While this approach can engage some students, it risks confusing others and potentially eroding trust. A balanced method might involve planned "mistakes" alongside clear, accurate instruction.


AI, at least in its current form, is not so much replacing human expertise as it is augmenting and redistributing it.


Yep. And that's the real value add that is happening right now.

HN concentrates on the hype but ignores the massive growth in startups that are applying commoditized foundational models to specific domains and applications.

Early Stage investments are made with a 5-7 year timeline in mind (either for later stage funding if successful or acquisition if less successful).

People also seem to ignore the fact that foundational models are on the verge of being commoditized over the next 5-7 years, which decreases the overall power of foundational ML companies. Applications become the key differentiator, and domain experience is hard to build (look at how it took Google roughly 15 years to finally get on track in cloud computing).


I notice that a lot of people seem to focus only on the things that AI can't do or the cases where it breaks, and seem unwilling or unable to focus on the things it can do.

The reality is that both things are important. It is necessary to know the limitations of AI (and keep up with them as they change), to avoid getting yourself in trouble, but if you ignore the things that AI can do (which are many, and constantly increasing), you are leaving a ton of value on the table.


> I notice that a lot of people seem to focus only on the things that AI can't do or the cases where it breaks, and seem unwilling or unable to focus on the things it can do.

I might be one of these people, but in my opinion one should not concentrate on the things it can do, but rather ask, of the things where an AI might be of help to you, for how many

- it actually does work

- it only "can" do it in a very broken way

- it can't do it at all.

At least for the things that I am interested in an AI doing for me, the record is rather bad.


Just because AI doesn't work for you doesn't mean it doesn't work for other people. Ozempic may have no effect on you, or even be harmful, but it's a godsend for many others. Acknowledge that, instead of blindly insisting on your own use cases. It's fine to resist the hype, but it's foolish to be willfully ignorant.


How do you define "can do"? Would answering 9 out of 10 questions correctly for a type of question (like giving directions from a map) mean it "can do" it, or that it "can't do" it?
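One way to make the question concrete: a per-question success rate compounds badly across multi-step tasks. A quick illustration, using the 90% figure above purely as a hypothetical:

```python
# Probability that every step of an n-step task succeeds,
# assuming each step independently succeeds with probability p.
p = 0.9
for n in (1, 5, 10, 20):
    print(f"{n:>2} steps: {p**n:.2f}")

# Ten independent steps at 90% each succeed end-to-end
# only about 35% of the time (0.9**10 ~= 0.349).
```

So whether 9/10 counts as "can do" depends heavily on whether the answers are consumed one at a time or chained together.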

Considering it works in so many cases, I think it is natural to point out the examples where it does not work, to better understand the limits.

Not to mention that, practically, I have not seen anything proving that it will always "be able" to do something. Yes, it works most of the time for many things, but it's important to remember it can (randomly?) fail, and we don't seem to be able to fix that (humans fail too, but having computers fail randomly is something new). Other software, say a numerical solver or a compiler, is more stable and predictable (and when it doesn't work, there is a clear bug fix that can be implemented).


Yep! Nuance is critical, and sadly it feels like nuance is dying on HN.


This very discussion feels nuanced, so I don't share your sentiment.


It would be nice to have more examples. Without specifics, “massive growth in startups” isn’t easily distinguishable from hype.

A trend towards domain-specific tools makes sense, though.


DevTools/configuration management and automated SOC are two fairly significant examples.


Am I the only one unimpressed by the dev tool situation? Debugging and verifying the generated code is more work than simply writing it.

I'm much more impressed with the advances in computer vision and image generation.

Either way, what are the startups that I should be looking at?


And even when the output is perfect, it may be that the tool is helping you write the same thing a hundred times instead of abstracting it into a better library or helper function.

Search/Replace as a service.


Those are more like broad categories than examples of startups, though.


Same with consultancy. There is a huge amount of automation that can be done with current-gen LLMs, as long as you keep their shortcomings in mind. The "stochastic parrot" crowd seems like an overcorrection to the hype bros.


It's because the kind of person who understands nuance isn't the kind of person to post in HN flame wars.

The industry is still in its infancy right now, and stuff can change in 3-5 years.

Heck, 5 years ago models like GPT-4o were considered unrealistic in scale, and funding in the AI/ML space was drying up in favor of crypto and cybersecurity. Yet look at the industry today.

We're still very early and there are a lot of opportunities that are going to be discovered or are in the process of being discovered.


GPT-4o is unrealistic at scale. OpenAI isn't making a profit running it.


...and then being blown up when the AI company integrates their idea.


Not exactly.

At least in the cybersecurity space, most startups have 3-5 year plans to build their own foundational models and/or work with foundational model companies to not directly compete with each other.

Furthermore, GTM is about relationships and solutions, and an "everything" company has a difficult time understanding GTM on a sector-by-sector basis.

Instead, foundational ML companies like OpenAI have worked to give seed/pre-seed funding to startups applying foundational models to specific domains.


OpenAI/Microsoft are building a $100B+ datacenter for foundation models and pitching ideas for $1T+. Compute is the primary bottleneck, so startup competitors will not be physically possible.


Yes, it should really be called collective intelligence, not artificial intelligence.


While deliberate practice is undoubtedly crucial for developing creativity and expertise, I think there's an important nuance we often overlook - the role of diverse experiences and cross-pollination of ideas.

Deliberate practice helps refine skills and deepen domain knowledge, but breakthrough creativity often comes from making unexpected connections between disparate fields. Some of history's most creative figures - like Leonardo da Vinci or Benjamin Franklin - were polymaths who excelled in multiple domains.


This! Most of my creativity in private projects stems from having built a broad base of knowledge and experience. Having tinkered with a lot of different, disconnected things really helps me find interesting bits to combine in new and creative ways that I had never imagined before :)


> the role of diverse experiences and cross-pollination of ideas

Add to this: giving ideas room to grow. The more you wait, the more diverse and numerous the life experiences, all of them having the potential to shape those uncrystallized ideas.


This is why AI can in fact create things that haven’t yet been.


Sure, but so can pure randomness, for the same reason. It is creative in the literal sense, but not in the ineffable sense that humans tend to describe in humans.


Well put! Well, the first sentence is -- I think there's ample evidence that chatbots are creative in the same manner as humans, for the simple reason that they speak coherently. I'm sure we all remember pre-2023 chatbots, which were cute but ultimately produced gibberish; the current chatbots reach the same limits if given a hard enough task, which I think is fantastic evidence that they are ineffably creative before that limit.

In Chomsky's words, quoting Wilhelm von Humboldt:

  Language is a process of free creation; its laws and principles are fixed, but the manner in which the principles of generation are used is free and infinitely varied. Even the interpretation and use of words involves a process of free creation. The normal use of language and the acquisition of language depend on what Humboldt calls the fixed form of language, a system of generative processes that is rooted in the nature of the human mind and constrains but does not determine the free creations of normal intelligence or, at a higher and more original level, of the great writer or thinker... 
  The many modern critics who sense an inconsistency in the belief that free creation takes place within – presupposes, in fact – a system of constraints and governing principles are quite mistaken; unless, of course, they speak of “contradiction” in the loose and metaphoric sense of Schelling, when he writes that “without the contradiction of necessity and freedom not only philosophy but every nobler ambition of the spirit would sink to that death which is peculiar to those sciences in which that contradiction serves no function.” Without this tension between necessity and freedom, rule and choice, there can be no creativity, no communication, no meaningful acts at all.
- https://chomsky.info/language-and-freedom/


You're absolutely right - and to identify the creation within randomness is also a form of creativity. Not all humans create (and identify) with the same methodologies!


In hindsight, I wish I’d included the disclaimer that I have creative pursuits (of the ineffable variety) which leverage creative tools in the more literal sense (not AI, not purely random either). I don’t mean to disparage the entire class of machine-generated creation per se.

But I do think that there is an important distinction between incorporating it in some form into a person’s expression, versus being the whole of the expression. Even if that incorporation is mere curation, at least that imbues some semblance of meaning, to someone capable of experiencing meaning.

And perhaps that’s a snobbish perspective. Maybe it deserves reexamination.


Nothing wrong with randomness combined with "taste" in the hands of the creator. Which is exactly the plan with generative AI.


I must not be using the right models, because this is exactly what AI cannot currently do, IMO.


In this age of metrics and analytics, it's easy to get caught up in chasing views. But writing for yourself rather than for clicks seems like a much more sustainable long-term approach.


I'm a bit concerned about how this might impact their commitment to AI safety though. The non-profit structure was supposed to be a safeguard against profit-driven decision making. Will they still prioritize responsible AI development as a regular for-profit company?


> I'm a bit concerned about how this might impact their commitment to AI safety though.

Their commitment will remain unparalleled, because AI safety actually means doing whatever it takes to provide maximum return to the shareholders, no matter the social cost.


Depends how they predict it to affect their bottom line.


lmao what do you think?


The idea of "Safe Coding" as a fundamental shift in security approach is intriguing. I'd be interested in hearing more about how this is implemented in practice.


For more information on our safe coding approach, as applied to the web domain, check out this paper (https://static.googleusercontent.com/media/research.google.c...) or this talk (https://www.youtube.com/watch?v=ccfEu-Jj0as).

