
Really interesting take; I've had the opposite experience. I've also seen very talented C++ devs screw things up in prod in ways Rust doesn't even allow. I really recommend taking it more seriously; it's fun once you get a handle on it, and the tooling is really good now.
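(To make that concrete: a hypothetical sketch, not the parent's actual example, and the name retry_failed is made up. Mutating a container mid-iteration compiles fine in C++, while the Rust equivalent is rejected by the borrow checker.)

    #include <vector>

    // Growing a container while iterating over it compiles cleanly in C++,
    // but push_back may reallocate and leave `job` (and the loop's iterators) dangling.
    void retry_failed(std::vector<int>& jobs) {
        for (int& job : jobs) {
            if (job < 0) {
                jobs.push_back(-job);  // undefined behaviour if this reallocates
            }
        }
    }

    int main() {
        std::vector<int> jobs{1, 2, 3};  // no negative entries here, so the bug stays latent
        retry_failed(jobs);
    }

The equivalent Rust (pushing to a Vec while holding a borrow into it) simply doesn't compile.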


C++ has grown several language tools to let you write reliable, safe code.

... But it can't shed the old stuff without breaking backwards compatibility, and that's what bites you. The fact that smart pointers exist now doesn't stop a developer from passing around a non-const char* with no size specifier and calling that a "buffer," and because the language is so old and accreted most of its safety features late, the shortest way to express an idea tends to be the one that's subtly wrong.
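(As an illustration only; this sketch and the fill_greeting name are mine, not the commenter's. The pre-modern signature is the shortest to write and carries no size information, while the safer std::span equivalent takes more typing and arrived decades later.)

    #include <algorithm>
    #include <cstring>
    #include <span>
    #include <string>

    // The "classic" signature: no size travels with the pointer, the caller just has to hope.
    void fill_greeting(char* buffer) {
        std::strcpy(buffer, "hello, world");  // silently overflows any buffer under 13 bytes
    }

    // The modern equivalent exists, but it is longer and requires more recent knowledge.
    void fill_greeting(std::span<char> buffer) {
        std::string const msg = "hello, world";
        std::copy_n(msg.data(), std::min(buffer.size(), msg.size() + 1), buffer.data());
    }

    int main() {
        char small[8];
        // fill_greeting(small);          // compiles without complaint, overflows `small` at run time
        fill_greeting(std::span{small});  // the span overload at least knows it only has 8 bytes
    }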

This is really just a problem that other languages don't have because they didn't have the same starting point.


The problem is that there is a lot of code and a lot of libraries using older patterns. And modern C++ is sufficiently different from older C++ to constitute effectively a new language while still being unsafe. Moreover, in an attempt to address efficiency, the language has added views and ranges, which provide more ways to corrupt memory. So why rewrite libraries in modern C++ when you can port them to Rust or Go and gain memory safety?
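(A hedged illustration of that last point, not anything from the comment: a C++20 view borrows the range it adapts, so it's easy to end up holding a view into freed storage.)

    #include <ranges>
    #include <vector>

    // The view returned here refers into the local vector `nums`,
    // which is destroyed when the function returns.
    auto even_numbers() {
        std::vector<int> nums{1, 2, 3, 4, 5};
        return nums | std::views::filter([](int n) { return n % 2 == 0; });
    }

    int main() {
        auto evens = even_numbers();   // compiles cleanly; the view already dangles
        // for (int n : evens) { ... } // iterating it would read freed memory
    }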


It can't shed the old stuff, but it is very, very suspicious -- smelly -- when any of it appears in new code.


It's smelly if you have enough people on the team to know what new code versus old code smells like.

If your developers are coming to the table with some C++ tutorial books written a decade ago, good luck. Invest in a decent linter and opinionated format checker.
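(If it helps anyone, a minimal, hypothetical starting point; the check groups below are standard clang-tidy ones, tune to taste.)

    # .clang-tidy: flag pre-modern patterns and core-guidelines violations in new code
    Checks: 'modernize-*,cppcoreguidelines-*,bugprone-*,readability-*'
    WarningsAsErrors: 'modernize-*'
    HeaderFilterRegex: '.*'

Pair it with a checked-in .clang-format (even just BasedOnStyle: LLVM) so formatting arguments end at the tooling instead of in review.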


I was in the same boat once upon a time. I actually overhauled it all, but spent the next six months trying to convince 80-year-olds who had taught themselves Visual Basic (version 3, I think?) that the code I made was faster, more sustainable, and ready for more staff to be hired to build onto it. That was pretty much a failure: they agreed it was faster, but it just wasn't written by the guys who originally wrote it. Shucks. So here's what I did. I left. Sometimes you lose and can't win, even if it's to save a company from itself. Businesses might choose to hire retirees as consultants for decades, well past the point where they can control their bowels in public.

Not saying this is your only option, but I am saying: if the tech work is hopeless, the culture is unreasonable, and it's not gonna change until two or three people go through it and are honest at exit interviews, you have to make an honest assessment of your goals. Last I checked, the company decided to hire someone at 2x what I worked for, and that person put "open to new roles" on their LinkedIn a few weeks ago...


Just type it in yourself and discover the post is incorrect like the rest of us...


Ok good I'm not insane lol...


Whoever made this post apparently didn't type lisp and rust into the search... They look inversely correlated, with Rust rising over Lisp in the past few years...


They've been at it for half a decade or so. Compilation times aside, they shuffle the code around so frequently that its only real use, imo, is for the authors to publish papers and stay three steps ahead of any of its users, who give up hoping for a stable tool after a few cycles. It's a shame, but it's academia at its finest.


This part of your criticism seems quite disingenuous. You're simultaneously criticizing them for not improving their code quickly enough, and for changing it too much.


I appreciate your view, but seeing how most Julia projects work this way, I sometimes wonder if it's just a problem with the language itself. Not trying to be a troll with impossible expectations, but genuinely, the code is unstable, and yes, they have been working on it for a long time.


We started working on it at JuliaCon 2021, when it was at 22 seconds. See the issue that started the work: https://github.com/SciML/DifferentialEquations.jl/issues/786. As you can see from the tweet, it's now at 0.1 seconds. That has all been within one year.

Also, if you take a look at a tutorial, say the tutorial video from 2018, https://youtu.be/KPEqYtEd-zY, you'll see that the code is still exactly the same and unbroken over the half decade. So no, compile times have only been worked on for about a year, and code from half a decade ago still runs just fine.


I think you misunderstood me, all good. The diffeq/sciml landscape has been a WIP for half a decade, with lots of pieces of it changing rapidly and regularly. But so has the rest of the ecosystem. I think we both know how often this code has changed, but for some reason the Julia people are always like "oh, we have packages for that" or "oh, that's rock solid," and then you check the package and it's a flag plant that does nothing or is broken by a minor version change; then you try to use it, maybe even fix it, and it breaks Julia base... I'm not going to waste any more time digging into this to file an issue or prove a point.

I think passersby should be made aware of the state of things in the language without spin from people making a living selling it. No personal offence to you, just please consider not overselling; it's damaging to people who jump in expecting a good experience.


I linked you to a video tutorial from 2018, https://www.youtube.com/watch?v=KPEqYtEd-zY . Can you show me what code from that tutorial has broken in the SciML/DiffEq landscape? I know that A_mul_B! changed to mul! when Julia v1.0 came out in 2018, but is there anything else that changed? Let's be concrete here. That's still the tutorial we show at the front of the docs, and from what I can tell it all still works other than that piece that changed in Julia (not DifferentialEquations.jl).

> I'm not going to waste any more time digging into this to file an issue or prove a point.

> No personal offence to you, just please consider not overselling; it's damaging to people who jump in expecting a good experience.

I'm sorry, but non-concrete information isn't helpful to anyone. It's not helpful to the devs (what tutorial needs to be updated, and where?) and it's not helpful to passersby (something changed according to somebody; what does that even mean?). I would be happy to add a backwards compatibility patch if there were some clearer clue.

> I think passersby should be made aware of the state of things in the language without spin from people making a living selling it.

The DiffEq/SciML ecosystem is free and open source software. There is nobody making a living from selling it.


Julia's syntax is its greatest strength but also its biggest weakness. Large Julia code bases without team standards are complete soup. For small to medium size projects it's all good, though. Just wish the community wasn't so crappy overall.


What do you mean by crappy? I've always found it very lovely and welcoming (though a touch small). The Julia community is also (in my experience) skewed towards scientific computing rather than software engineering, which can definitely have an impact on things like "codebase quality," even in big, important libraries. That's not a dig or an insult – there are different priorities (privileging exploration, innovation, new ideas, and code that only needs to work until the paper you're writing is done, rather than long-term maintainability, is not a fundamentally wrong tradeoff to make).


I met a guy who got kicked out of the community because he had different political beliefs than a lot of the people there. Immediately after he got the boot he stopped contributing, and they took control of his two years of research (multiple repos). I get it, it's OSS, but at the end of the day it kinda looks like stealing someone's work. There are other instances of stuff like this too... Just hang around for a while and watch...

Not here to say it's all like that. But keep in mind if you aren't paying for the product, you probably are the product.


There's only one person who's been kicked out of the community (in the sense of being banned from the discourse forum), and that was not due to 'political beliefs', but to repeated abuse and personal attacks.

Anyway, how can anyone 'take control' of someone's repositories? Was this person kicked out of github too?


They forked them. Also, this is not that person. And there are more instances of this than I think you realize.


Can't reply to you; the depth of discussion got too long, I think. Yeah, they didn't get kicked out entirely, not banned or anything, but they became "persona non grata" over stuff in their personal life. At least that's how it was explained to me.

I dunno what you should do if someone leaves, but bullying someone until they leave and then forking all their work after they do is kinda crappy. Again, I get that it's OSS, but a lot of people don't make OSS contributions hoping for that kind of outcome. It's worth putting out there, imo.


If you click on the timestamp ("17 minutes ago"), then you will be able to reply below the post.

OK, I don't know that person's situation then, and cannot speak to it. But bullying is definitely against the community guidelines, and my experience is that there's not a high tolerance for rudeness; in fact, I think the community is quite conflict-shy.

That this happens with some frequency is a pretty big surprise to me; as I said, I follow the community closely.

Is it possible that you know only one side of the story?


From this description this is fairly obviously the LightGraphs situation, and that's a pretty misleading account of what happened. The person was not kicked out of anything—they were not blocked or banned from any platforms or forums. They chose, of their own volition, to stop participating in the community. I've never seen any evidence of bullying for political views or otherwise; maybe there was some, but if so it was never reported, and it would have been a clear and actionable community standards violation. Whatever their reasons, this person decided they wanted to leave, which is unfortunate—we don't want anyone to feel unwelcome—but it's their prerogative.

That would have been fine, if unfortunate, but they also wanted to "take their work with them" in the sense of archiving their registered open source package repos, preventing any further maintenance or development. This was not about shedding the maintenance burden—they were not willing to grant ownership of the repos to other maintainers. In short, the original author wanted to force all development of the packages by anyone to stop. Of course, that would have left all the people who had come to depend on those packages high and dry, since the code they'd come to depend on would get no bug fixes, security patches, etc., despite the fact that there were active contributors to that code who were happy to take over maintenance.

Imagine if Linus Torvalds got mad one day and decided to insist that no one could do any further development of the Linux kernel. No bug fixes, no security patches, no new features. Linus out. That was the situation here. Fortunately this is not how open source works: open source licenses are not revocable, and the ability to fork a project is baked into each license for this exact reason—so that a disgruntled author cannot screw over an entire community of people who have come to depend on their work. They don't have to keep doing work, but they also can't take away what they've done. If Linus threw a tantrum and refused to allow any more work on Linux, the rest of the community could take over and continue maintaining the kernel—fixing bugs, patching security flaws, even adding new features. Linus could close down his git repo and never touch the kernel again, but other maintainers could continue to develop Linux and support the vast community of users who have come to depend on it.

Similarly, it would have been perfectly legal to fork LightGraphs and continue development in a new repo with the same name. Out of respect for the original author's wishes, however, the LightGraphs package was allowed to be "frozen" with no further development. But it would have been deeply irresponsible to cease all maintenance and leave all the people who use and depend on LightGraphs hanging, especially given that there were willing maintainers. So LightGraphs was forked and renamed to "Graphs"; the old repo has been allowed to remain frozen, while maintenance and development have continued under the Graphs name in a new repo. The author of LightGraphs got their wish for work on the thing called "LightGraphs" to cease. The users of the package didn't get screwed over, since they can do a simple search and replace and keep using a maintained graphs package. Personally, I think the community handled it with responsibility and grace.


There's another issue. In addition to the open source license and what it promises, when you accept contributions from others it isn't just your work anymore. LightGraphs had 100 contributors; what about their efforts? Not to mention the additional work that others have done on top of that in other libraries.

Who would contribute to a software library if they knew that the main dev could just mothball their efforts at any moment?

If you have donated a ball, you can no longer just pick it up and go home. If you don't want to donate work, don't do open source and invite others to join in.


I don't know anyone else who got kicked out (that takes a lot). But I know of a situation where someone walked out due to a non-political disagreement. Perhaps that's the one.

I follow the community pretty closely, posters being banned (except pure spam accounts) is something I think I'd notice.

Besides, what can you do if someone walks away from an important package? Should everyone, including collaborators on that package, just start from scratch? What?


Yeah, but keep in mind that anytime you get a bug your session goes away, so you can end up eating hours from a day precompiling. Not worth it with heavy packages, imo. Wish they had incremental compilation like Rust, because most compiled languages are noticeably faster than Julia precompilation, ime.


> anytime you get a bug your session goes away

This only happens for segmentation-fault-type crashes, which kill the entire Julia process and shouldn't be common at all. Can you describe when you experience these issues?


It's super easy to get OOMs and segfaults in Julia. For a while there this past year you couldn't even Ctrl-C to stop the Julia process. It's rough for real work, imo. Fine for research.


I don't remember Ctrl-C being broken in the past year. Could it have just been a specific program you were running with a tight loop that didn't have any yield points? If so, that's not really unique to Julia.

I'd also be interested in your workload that was generating lots of segfaults (OOM makes some sense if you're working with large data, since Julia's runtime does add an unfortunate amount of memory overhead).


Check the version summaries. Think it was Julia 1.7.1 or something. There are lots of bugs like that that crop up every other release.

Segfaults happen all the time with FFI. But yeah, OOM is a killer. The Julia runtime guzzles RAM, but pkg add any of the SciML stuff and watch your RAM explode. Doesn't take much data to lose half an hour of your life installing a package...


The RAM usage is because precompilation runs in parallel. If you have, say, 16 threads going, then yes, it'll run up to 16 precompile jobs at once. But we have never seen a half-hour package install; can you share the info to reproduce it?


On my decade-old laptop, `using Plots` (precompilation) takes a couple of minutes. Not half an hour, but it may feel like that to someone used to Python (or R), where imports are instantaneous. Though I think GP meant it takes that long due to OOMs, which can result in a frozen or slow system.


A couple of minutes is expected. Even R and Python packages have to run the build processes for the associated C and Fortran codes, and those take similar time (or more for many packages). However, if a Julia package is precompiling for more than half an hour, that's not expected and I'd like to see a reproducer for this so we can fix it.


See my above comment for clarification


It's not a half-hour package install. It's a half hour of loading everything back in to get to where you were because the runtime dropped; sorry for the lack of clarity. But yeah, it's real easy for package installs to OOM people.


Julia devs seem to work on improving precompilation times every version. That said, if a project or workflow revolves around some packages, a way to skip this step is to create a sysimage (essentially a saved Julia session) that includes those packages. PackageCompiler significantly simplifies the process.


It's still rough. They've been at it for years and it's a perpetual bed of sand... A lot of machines will fail to install it because it's so resource-intensive to install... Promising idea overall; maybe in three years or so it'll be worth using for something outside of research.


It's 0.1 seconds now, faster than one can type, as shown by the GIF. At what speed would it stop being rough?


It's a pretty weird critique; LLVM is becoming ubiquitous for these tasks. I can rattle off probably a dozen programming languages using it, and no one is grousing...


Those programming languages are presumably not as minimal, and give features in exchange for this large dependency.

