How's that 2016 promise of LA-to-NYC autonomous driving working out for Musk? Or his Cybercab venture? Or the decision not to use LIDAR in his vehicles? Or the Cybertruck's dismal engineering and sales?


- “Hi. I’m an engineer at NASA.”

- “(Scoffs.) You’re an engineer? Yeah, right. What about that Challenger explosion? And how come you haven’t put anyone on the Moon in 50 years? Engineer…”

That’s how your comment reads.


Elon promises 10 revolutionary things and only delivers 9. Sheesh, what a loser!


Delivering 9 of 10 revolutionary things would, I agree, be amazing.

However...

https://en.wikipedia.org/wiki/List_of_predictions_for_autono...

making 31 public predictions about his self-driving cars over 20 years and being right about only one of them is not so clever.


He's still the most successful businessman in history, by far. Pretty good for a loser.


He’s a hype man. Tesla is a meme stock and always has been. There is no objective valuation under which Tesla should be worth as much as it is: neither projected future revenue nor current revenue supports it. Sales popped right before the EV credit went away, but at most that's probably a dead cat bounce.


He essentially invented the electric car industry. Electric cars before then were impractical failures.

> There is no objective valuation

Value investing is Warren Buffett's style, which is a generally backwards-looking approach. It's not good at predicting transformative technologies; it was no good at predicting the success of, say, Apple, Microsoft, Amazon, Facebook, etc.


Okay. He invented it. So what? Other companies are selling more, and the first-mover advantage is moot.

And as for the other examples you are giving…

Apple’s valuation went up as it became clear that Apple had a sustainable advantage and was going to see increasing revenues and profits going forward. Where are the companies that invented the modern smartphone?

Microsoft has a sustainable advantage that hasn’t been challenged in over four decades, with operating systems, office apps, and later cloud services. But their stock was in the doldrums in the 2000s, when investors didn’t see a sustainable advantage with increasing revenues.

Meta has also been volatile when investors didn’t see a sustainable advantage.

Amazon also has a moat.

What does Tesla have?


> What does Tesla have?

It does more than cars: it's in the solar energy business, the grid-scale battery business, AI, FSD, robotics, a global network of Superchargers, home battery systems, etc.

Tesla shareholders voted to offer him a $1t compensation package over the next decade, provided he meets certain targets.


Yeah - and how are any of those other bets doing? The shareholders only voted for him to keep their meme stock afloat.

Again, what is their moat? Waymo is much further along with self-driving, Tesla is a very distant also-ran in AI that absolutely no company is going to choose over anyone else, and their Supercharger network is more anemic than you think in the vast majority of the US, let alone the world.

China has shown it's very capable when it comes to battery technologies, and it has most of the rare earth minerals needed.


Musk really is that good, and nobody else has been capable of building factories in the US, but the skills are in raising money and defeating NIMBYism. Raising money (and starting startups) involves a lot of lies and delusions, which are not always adaptive skills.

He fell off when he lost his e-girl and became a drug addict.


Can you expand on what kinds of discomfort you faced and what you mean by effort (to the point of bewildering people)? Curious what worked for you. Not sure if you just mean you forced yourself to go on a million dates and were super selective.


I think I've met (or tried to meet) 5,000 to 10,000 women over my lifetime. I've been on at least 100 dates. While I was in relationships, I tried my best and hardest, and when a breakup came I tried to get it down to a science why that was the case.

Oh, and learning to be playful by unleashing my inner silly goose.

For relationships, what works for me:

* Similar personalities. I can now intuitively see people who have a similar HEXACO to me in 2 minutes. Note: I can't spot everyone with a similar HEXACO to mine, only a subset of them, but I've never been wrong (though I've only done this twice). I'm high in openness, and it's easy to see other people high in openness. Then seeing how the other dimensions fall out is quite predictable.

* Same coping style

* Secure attachment style

* Ability to be reasonable, pragmatic and emotionally intelligent in ways that I characterize those terms


> I can now intuitively see people who have a similar HEXACO to me in 2 minutes.

Is that a good thing in a partner? I can see the case for similar openness, but with extraversion and emotionality, for example, in my experience you probably want someone on the opposite side of the scale, to balance things out and have complementary strengths and weaknesses that make life easier for both of you.


Fair question, I am not sure if there’s a general answer. I simply know what works for me. I can imagine that in some cultures it doesn’t matter much, as the idea of what love is and what a relationship is, is culturally really different. The simple example I think of is being married off by your parents. I know nothing about how that works or what the emotions involved are. And I can imagine there are quite a few cultures that I am clueless about.

I feel that people are different enough in ways that the HEXACO doesn’t capture. It’s just much easier to communicate with someone who thinks in a very similar way. So far, I have seen different strengths and weaknesses come about. We both have a subclinical case of ADHD, so being with each other is basically body doubling all the time, which removes a lot of the annoyance that ADHD brings. So oftentimes it’s not a 1 + 1 = 2 thing, because there’s also an interaction effect, as a psychologist would say.

I am not saying this is a generalized theory by the way. I simply know it works for me. I have been in a few short relationships (of a few months) and 4 that were a year or longer. Women that think like me are way more suited as romantic partners and it’s not even close.

Bonus point: I don’t have to do the whole “men are like this and women are like that” dance that many people in my social circle explicitly seem to do. Because my dance is “she is like me and I am like her”. I would get much closer to predicting how she is when I ask myself “what would I do?” as opposed to “what would a general woman do?” Of course, in some cases sex and gender differences are there.

Or weird stuff like “women are more emotional and men are more logical”. It doesn’t apply. We can both hold each other to a standard that we both find reasonable and fully understandable. I expect my wife to be logical and emotional. She expects the same from me. I seem to have more of a bias towards logic and she towards emotions (well… more accurately, towards vibes and vibe-based living), but often enough I see she’s the more logical one, or I am, at that moment, the more emotionally in tune.

It took a long time to find her, a lot of relationships, and a lot of women to meet (and when I think of how many women I secretly/silently rejected, at least 100K). The biggest hurdle to overcome is fear of rejection. I didn’t set out to be in a lot of relationships, but I do break up when I clearly see it’s not working.


> Bonus point: I don’t have to do the whole “men are like this and women are like that” dance that many people in my social circle explicitly seem to do. Because my dance is “she is like me and I am like her”.

That does sound appealing when put like that.

My experience has been with the counterbalancing kind of relationships I mentioned (maybe I subconsciously seek them out that way), with about 50% overlap in personality or interests and 50% divergence. And many of the memories I cherish are of them introducing me to new little worlds, social environments, and experiences that I wouldn't have sought out, or even given a thought to, on my own.

But there were also times when I wished we were more similar, when some experiences (that I was excited about) would have been great to share, and were diminished or even skipped out on because they weren't as into it. So seeking out more overlap seems at least worth trying out.

Thank you for giving a thoughtful and well-considered reply, by the way.


Can anyone recommend a technical overview describing the design decisions PyTorch made that led it to win out?


PyTorch's choice of a dynamic computation graph [1] made it easier to debug and implement, leading to higher adoption, even though running speed was initially slower (and training cost therefore higher).

Other decisions follow from this one.

TensorFlow started with static graphs and had to move to dynamic ones in version 2.0, which broke everything and fragmented the ecosystem between TensorFlow 1, TensorFlow 2, Keras, and JAX.

PyTorch's later compilation of this computation graph erased TensorFlow's remaining edge.
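
To make the difference concrete, here's a minimal sketch assuming PyTorch 2.x; the tiny module is made up for illustration:

    import torch

    class TinyNet(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(4, 2)

        def forward(self, x):
            h = self.linear(x)
            # Dynamic graph: ordinary Python runs during the forward pass,
            # so prints and pdb breakpoints just work.
            print("intermediate shape:", h.shape)
            return torch.relu(h)

    net = TinyNet()
    out = net(torch.randn(3, 4))   # eager: debuggable line by line
    fast = torch.compile(net)      # 2.x: compile the same model for speed
    out2 = fast(torch.randn(3, 4))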

Is the battle over? From a purely computational standpoint, PyTorch's solution is very far from optimal, and billions of dollars of electricity and GPUs are burned every year, but the major players are happy with circular deals that entrench their positions. So at the pace of current AI code development, PyTorch is probably one or two years from being old history.

[1] https://www.geeksforgeeks.org/deep-learning/dynamic-vs-stati...


Someone’s got to prototype the next generation of architectures.


> at the pace of current AI code development, probably one or two years before Pytorch is old history.

Ehhh, I don’t know about that.

Sure, new AI techniques and new models are coming out pretty fast, but when I go to work with a new AI project, it's often using a version of PyTorch or CUDA from when the project began a year or two ago. It’s been super annoying having to update projects to PyTorch 2.7.0 and CUDA 12.8 so I can run them on RTX 5000-series GPUs.

All this to say: if PyTorch were going to be replaced in a year or two, we’d know the name of its killer by now, and they’d be the talk of HN. Not to mention that at this point all of the PhDs flooding into AI startups wrote their grad work in PyTorch; it has a lot of network lock-in that an upstart would have to overcome by being way better at something PyTorch can never be good at. I don’t even know what that would be.

Bear in mind that it took a few years for TensorFlow to die out because of its lock-in, and we all knew about PyTorch that whole time.


> a lot of network lock-in that an upstart would have to overcome by being way better at something PyTorch can never be good at

The cost of migrating higher-level code to a newer framework is going to zero. You ask your favorite agent (or intern) to port it and check that the migration is exact. We already see this across the multitude of deep-learning frameworks.

The day an optimization trick appears that PyTorch can't do but another framework can, one that reduces your training cost 10x, PyTorch goes the way of the dodo.

The day an architecture that can't be implemented in PyTorch gets superior performance, it's bye-bye Python.

We see this with architectures that require real-time rendering, like Gaussian Splatting or Instant NeRF, or with the caching strategies for LLM sequence generation.
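
For the LLM caching point, a rough sketch of the key/value cache used in incremental decoding; the names and shapes here are mine, not any particular library's API:

    import torch
    import torch.nn.functional as F

    def decode_step(q_new, k_new, v_new, k_cache, v_cache):
        # Append this step's key/value to the cache rather than recomputing
        # them for the whole sequence at every step.
        k = torch.cat([k_cache, k_new], dim=1)   # (batch, seq+1, dim)
        v = torch.cat([v_cache, v_new], dim=1)
        scores = q_new @ k.transpose(1, 2) / k.shape[-1] ** 0.5
        out = F.softmax(scores, dim=-1) @ v       # (batch, 1, dim)
        return out, k, v                          # grown cache for the next step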

PyTorch has three main selling points:

- Abstracting away the GPU- (or device-) specific code. This exists because of Nvidia's mess of custom optimized kernels, which you are forced to adapt to if you don't want to write custom kernels yourself.

This advantage disappears if you don't mind writing optimized kernels (because the machine writes them), or if you don't need CUDA because you can't use Nvidia hardware (for example, you are in China), or if you use custom silicon, like Groq, and need your own kernels anyway.

- Automatic differentiation. This is one of PyTorch's weak points, because they went for easy instead of optimal and shut themselves off from some architectures. A language like Julia, thanks to dynamic low-level compilation, can do things PyTorch won't even dream about (but Julia has its own problems, mainly related to memory allocations). With PyTorch's introduction of the "scan" function [2], we have come full circle back to Theano, TensorFlow's/Keras's ancestor; scan is usually the pain point of the suboptimal automatic-differentiation strategy PyTorch chose.

The optimal solution, as all physics PhDs who wrote simulations know, is writing custom adjoint code, either via source code transformation or symbolically (a minimal sketch follows after this list). It's not hard but very tedious, so it's now a great fit for your LLM (or intern, or PhD candidate running 'student gradient descent'), provided you prove or check that your gradient calculation is correct.

- Cluster orchestration and serialization: a model can be shared with fewer security risks than arbitrary source code, because you only share weights, and a model can be split between machines dynamically. But this is also a big weakness, because your code rusts as you become dependent on versioning; you are locked to the specific version number your model was trained on.

[2] "https://docs.pytorch.org/xla/master/features/scan.html


What would stop PyTorch from implementing whatever optimization trick becomes important? Even if it requires a different API.


There are two types of stops: soft stops and hard stops.

- A soft stop is when the dynamic graph computation overhead is too much. You can still calculate, but if you were to write the function manually or with a better framework, you could be 10x faster.

Typical examples involve manually unrolling a loop or doing kernel fusion. Other typical examples are when you have lots of small objects, or need to loop in Python because the code doesn't vectorize well, or could use sparsity efficiently by ignoring the zeros. (A toy sketch of the Python-loop case follows.)
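
Here is that sketch; the speed gap is machine-dependent, but typically orders of magnitude:

    import torch

    x = torch.randn(1_000_000)

    # Soft stop: a per-element Python loop pays dispatch overhead a million times.
    def slow_relu(t):
        out = torch.empty_like(t)
        for i in range(t.shape[0]):
            out[i] = t[i] if t[i] > 0 else 0.0
        return out

    # The vectorized form dispatches a single kernel over the whole tensor.
    fast = torch.relu(x)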

- A hard stop is when computing the function becomes impossible, because the memory needed to do the computation in a non-optimal way explodes. Sometimes you can get away with just writing customized kernels.

The typical example where you can get away with it is custom attention layers.

The typical example where you can't get away with it is physics simulations. For instance, the force is the gradient of the energy, but you have n^2 interactions between the particles, so if you preserve anything more than zero memory per interaction during the forward pass, your memory consumption explodes. And with things like Lagrangian or Hamiltonian neural networks, where you try to discover the dynamics of an energy-conserving system, you need to be able to differentiate at least three times in a row (a sketch of the pattern follows below).
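
That sketch, showing the first two of those nested differentiations; the pairwise potential and the sizes are made up for illustration:

    import torch

    def energy(pos):
        # Pairwise O(n^2) interactions: a toy 1/r-style potential.
        diff = pos.unsqueeze(0) - pos.unsqueeze(1)    # (n, n, 3)
        r = diff.norm(dim=-1) + torch.eye(len(pos))   # pad diagonal to avoid 0/0
        return (1.0 / r).triu(diagonal=1).sum()       # each pair counted once

    pos = torch.randn(100, 3, requires_grad=True)
    E = energy(pos)
    # Force is the (negative) gradient of the energy; create_graph=True keeps
    # the graph alive so we can differentiate again, which is exactly where
    # memory blows up if the forward pass saved too much per interaction.
    force = -torch.autograd.grad(E, pos, create_graph=True)[0]
    loss = force.pow(2).sum()
    loss.backward()   # second differentiation, through the first gradient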

There are also energy-expending stops, where you need to find workarounds to make things work at all, for example if you want your parameters to change shape during the optimization process, like learning point clouds of growing size. These spread you thin, so they won't be standardized.


I don't know the full list, but back when it came out, TF felt like a crude set of bindings to the underlying C++/CUDA workhorse. PyTorch, in contrast, felt pythonic; it was much closer in feeling to NumPy.


I think it was mostly the eager evaluation that made it possible to debug every step of the network's forward/backward passes. TensorFlow didn't have that at the time, which made debugging practically impossible.


I would highly recommend the podcast by ezyang https://pytorch-dev-podcast.simplecast.com/ for a collection of design discussions on the different parts of the library.


I’m not sure such an overview exists, but when Caffe2 was still a thing and JAX was a big contender, dynamic vs. static computational graphs seemed to be a major focus for people ranking the frameworks.


Embarrassing comment. This current administration and its supporters most definitely do not agree with several of your points, in particular on sustainable sources of energy and regulations on pollution.


Plenty of them were here before the election. Wish they’d speak up more now and explain how any of these policies are objectively good for the US economy and US citizens.


If they cared about measurable outcomes we wouldn't be in this situation.

For them, "success" involves feeling that a particular social arrangement has been solidified. It involves an exploitative hierarchy (which they believe is both inevitable and required) where they aren't obviously on the bottom and where "the right people" are on top.

They simply do not care how much it costs to raid people's attics looking for Anne Franco, or even the odds of finding her family, as long as The Authority is taking Firm Steps and people like Anne Franco are afraid.


Quite simply: de minimis import rules make no sense; they are inevitably abused, by China in particular, to import billions in untaxed goods. No foreign country has a right to sell things in America. China and the EU and others impose their own arbitrary restrictions and taxes on imports, but for some reason when America does it, it gets worldwide press, because for the longest time it was just open season as we drained our manufacturing and gutted the base that built America in the first place.

We have laws on the books and they have to be enforced equally, whether you're shipping in entire containers or thousands of small direct mail packages.


Of course de minimis import rules make sense. Processing every $20 or $50 parcel through full customs would cost more in bureaucracy than it would raise in revenue. This is why many countries around the world have de minimis rules including Canada, the EU, and even China.

De minimis had nothing to do with draining out manufacturing; that's been happening for decades. Before 1993 the rate was $10.

And who cares about the "base that built America"? US unemployment was low! The US doesn't need these terrible jobs, and it shouldn't look to the past for opportunity; there is plenty of opportunity in looking forward.


> No foreign country has a right to sell things in America

Flipping this around: this is a limit on the rights of American citizens to purchase things from around the world. My argument is it's best for policy to center the rights of American citizens vs trying to curtail the rights of people who do not even live here.


Seems like the correct solution would be to just eliminate tariffs entirely then.. why shoot yourself in the foot by reducing trade when you can.. just not do that?

The irony is this comes from the conservative movement, who are purportedly neoliberal economists.. but then completely disregard a central plank of neoliberal theory.

Consistency is low on the MAGA priority list.


I am not a Trump voter, but here's my understanding of what they're hoping for, economically speaking. Devaluing the US dollar makes American manufacturing more appealing to other nations. It's generally believed that tariffs are a pretty lousy way to boost domestic manufacturing, but they might be an effective means of devaluing the dollar. This devaluing shouldn't have any direct negative effect on Americans buying domestic (e.g., homes, locally produced food), but it will significantly reduce your ability to travel or buy imported goods.

Again, I'm not a Trump voter and I think this is the clumsiest, most dangerous way to bring manufacturing back to the US, but that's my understanding of what their goal is. I'm not even going to touch the Christian nationalist side of the plan.


It's got nothing to do with policies; it's tribal.


Oh, the blue collar and union workers that voted for this are getting exactly what they wanted and know better than you about the consequences. When they get a pay raise because their job and whole town aren't being gutted by globalization, they are clearly playing 4D chess.


Blue collar jobs are going to evaporate as the supply chain gets wedged. This is like trying to lose weight by burning down the farms with napalm.


They're already losing their jobs so this unjustified fantasy has already been destroyed by reality. There are no good economic indicators for the US right now.


you think this is going to bring back factories and blue collar jobs? oof


Reopen mental institutions and enable forced institutionalization. Engage this at the federal level. So sick of this crap.

These “homeless” are not the kind who need clean clothes and shelter and some help getting a job. They want to live like this at the expense of the public’s money and enjoyment of public amenities.


I'm pretty far left but I have to agree that some people are not mentally capable of independence, cause public harm, and need to be forcibly committed. I want that to be done carefully and humanely. I don't want someone who sleeps on the street and causes no trouble to get institutionalized. But the worst of them should get jailed, tried, and then sent away.

We also need to support people at risk when they're young. If their parents had mental health support, if they didn't experience a loss of housing as children, if losing their job didn't make opioids look so attractive, we wouldn't have so many people unable to care for themselves.


Hey, that actually works really well in India:

https://www.youtube.com/watch?v=MpIJJPvn_ZI

Edit: This got downvoted because it's a direct retort to what the GP is suggesting.


Happy New Year, y’all!

Curious to hear if anyone has any specific goals/resolutions/things they’re especially looking forward to in 2025.


I prefer a theme [1] over goals. This year my theme is learning, which I'm starting with learning to cook more novel dishes.

[1]: CGP Grey's video on it: https://www.youtube.com/watch?v=NVGuFdX5guE


Comparing X, which is one company (with a far reduced headcount and much smaller revenues compared to pre-Elon) in a tech hub city to an entire oil industry in an oil-rich state seems beyond hyperbolic.


oil is tech, in my analogy. You know what I meant. You can't possibly be that autistic.


If you enjoy being bored to death, perhaps.

Somehow I think the group of people who choose to live in SF have particular interests and desired amenities that make high rent worth it. E.g., walkable and lively neighborhoods, access to parks, events, etc.


Seems silly. Nvidia focused on the market several years ago while other players fumbled the bag with multiple startup buyouts and no coherent execution plan (Intel) or were too focused on the wrong product with x86 CPUs (AMD). Never mind all the failed AI hardware startups who believed an ASIC optimized for ResNet and early DL models would somehow be in play as the market evolved to Transformers.

Nvidia played the long game: it focused on enough generalization to be useful for future models and opened its software stack enough that other players could write frameworks/compilers for it. Nvidia also supports x86 CPUs in its servers (it doesn’t lock them to its Grace CPU, AFAIK) and allows Ethernet fabrics (not locking customers into its InfiniBand).

Now AMD’s GPU division is doing what they do best: slowly copying Nvidia’s execution and catching up on software, which means they’ll always play second fiddle.

How is any of that antitrust?

