Here's a concise explanation of SSA. Regular (imperative) code is hard to optimize because, in general, statements are not pure -- if a statement has side effects, then optimizing it might not preserve the program's behavior. Examples of such optimizations:
1. Removing that statement (dead code elimination)
2. Deduplicating that statement (available expressions)
3. Reordering that statement with other statements (hoisting; loop-invariant code motion)
4. Duplicating that statement (can be useful to enable other optimizations)
All of the above optimizations are very important in compilers, and they are much, much easier to implement if you don't have to worry about preserving side effects while manipulating the program.
So the point of SSA is to translate a program into an equivalent program whose statements have as few side effects as possible. The result is often something that looks like a functional program. (See: https://www.cs.princeton.edu/~appel/papers/ssafun.pdf, which is famous in the compilers community.) In fact, if you view each basic block as a function, phi nodes "declare" the arguments of the basic block, and branches correspond to tail-calling the next basic block with the corresponding values. This has motivated basic block arguments in MLIR.
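The "basic blocks as functions" view can be sketched directly in Python. Here the loop `s = 0; for i in range(n): s += i` is written as mutually tail-calling functions whose parameters play the role of phi nodes / block arguments (a hand-written illustration, not compiler output):

```python
def entry(n):
    # branch to the loop header with initial values for i and s
    return header(n, 0, 0)

def header(n, i, s):
    # one parameter per loop-carried variable: these are the "phi nodes"
    if i < n:
        return body(n, i, s)
    return exit_block(s)

def body(n, i, s):
    # "branch with arguments": tail-call header with the updated values
    return header(n, i + 1, s + i)

def exit_block(s):
    return s

entry(5)  # sums 0+1+2+3+4
```

Note that every "variable" is assigned exactly once (as a parameter binding), which is exactly the SSA property.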
The "combinatorial circuit" metaphor is slightly wrong, because most SSA implementations do need to consider state for loads and stores into arbitrary memory, or arbitrary function calls. Also, it's not easy to model a loop of arbitrary length as a (finite) combinatorial circuit. Given that the author works at an AI accelerator company, I can see why he leaned towards that metaphor, though.
About this part: There are a number of other POSIX-ish shells implemented in a non-C/C++ implementation language
OSH is implemented in an unusual style -- we wrote an "executable spec" in typed Python, and then the spec is translated to C++.
That speeds it up anywhere from 2x-50x, so it's faster than bash on many workloads
e.g. a "fibonacci" benchmark is faster than bash, as a test of the interpreter. And OSH makes 5% fewer syscalls than bash or dash on Python's configure (although somehow this doesn't translate into wall time, which I want to figure out)
It's also memory safe, e.g. if there is no free() in your code, then there is no double-free, etc.
---
As mentioned on the OSH landing page, YSH is also part of the Oils project, and you can upgrade with
I've been trying to figure out how to talk to folks on the right, and I keep looking for something, anything, I can say to make them realize the danger we are in. Reading this comment was therapeutic, because I think it's completely on the money. We can't change people's minds in a single argument; we can just try and nudge them in the right direction and hope they join us eventually.
If you find this article fascinating, and are intrigued by the possibility of learning to speak a dead language like Latin, I'm here to tell you that it's probably a lot easier than you think.
To start off, there is a textbook that I think really resonates with hackers. It's called "Lingua Latina Per Se Illustrata" (The Latin Language Illustrated through Itself) and it teaches Latin in a fun and mind-altering way. The entire book is in Latin, but it starts off with very simple sentences that anyone who speaks English or a Romance language can intuit with a bit of effort. There are very clever marginal illustrations that help drive the meaning home. It builds an understanding of Latin brick by brick, and eventually you find yourself understanding complex sentences and ideas. Furthermore, the book is just fun and often funny: it tells the story of a Roman family and strikes an excellent balance between teaching and entertaining. Contrast this approach with dense Latin texts that have a heavy focus on grammar and translation.
So that's one way to learn the language, but what about speaking it? Well, that's where the Legentibus app comes in. It's a Latin language podcast application with a wealth of well-recorded stories in classical Latin at a bunch of different difficulty levels. It also has the Latin text of the stories, highlighted as the audio is read, with optional interlinear English translations. I find these really helpful at first for understanding the content. I turn them off later once I get the gist of what is being said, or just listen without reading. You can also do dictionary lookups of individual words without turning on the translation.
Here are the reasons why I think this is one of the most enjoyable and useful things I do as a newbie Latin language learner:
1) The stories themselves are engaging. Some of my favorites are from "Gesta Romanorum" (Deeds of the Romans), a 13th- or 14th-century collection of stories, often with moral, allegorical themes. These were rewritten in a beginner-friendly style, but use classical Latin idioms, some of which are explicitly pointed out in the text as clickable footnotes.
2) Daniel (the co-founder of the app and Latin scholar) does an excellent job as a reader. I listen to a lot of audio books, and I especially like it when the reader consistently does memorable character voices. Be it an extortionist dog slyly claiming "Omnēs canēs amant" (everyone loves dogs) or Pluto, King of the Underworld, commanding "Eurydicē accēde hūc!" in a booming voice, Daniel nails it.
3) You can listen to these while folding laundry, cooking dinner, or doing whatever. I manage to squeeze in 40 minutes a day or so of these stories, and I'm always happy to do it.
4) Often, when I learn a new bit of grammar or the precise meaning of a word, my mind will replay a phrase (in Daniel's voice) from one of the stories that uses that word or grammatical concept. This happens more than you might expect.
Finally, there is a pretty vibrant online community of Latin language learners out there, from the /r/Latin subreddit, to the LLPSI (Lingua Latina per se Illustrata) Discord (https://discord.gg/uXSwq9r4), to the Latin & Ancient Greek Discord (https://discord.gg/latin), and others.
>I also gave up on progressives and now have two pairs: readers and drivers.
Yes, both progressives and high-index lenses suffer from the same problems: smaller field-of-vision and higher distortion of peripheral vision.
If I sit with my eyes 36 inches away from a 30-inch screen...
With a single-vision lens, without my head moving at all, my eyes can move within their sockets to see the bottom corner of screen showing the date & time, and to the top corner showing the [x] button to close windows. Single-vision lenses have edge-to-edge clarity.
With progressives or high-index lenses, I have to rotate & tilt my head to put those corner locations directly in my central field of vision. Imagine a horse with blinkers[1]. The edges of those lenses are blurrier, so you have to move your head to bring the lens' center spot toward the item of interest for maximum sharpness.
Maximum field-of-vision is optimal ergonomics for looking at multiple windows of text on a 30" monitor. Yes, high-index lenses are thinner and more fashionable, but I don't need that when concentrating on programming code and reading web pages.
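For a sense of scale, here is a back-of-the-envelope sketch (assuming a 16:9 aspect ratio for the 30" diagonal, which the comment doesn't specify): at 36 inches, the screen subtends roughly 40 degrees horizontally, so "edge-to-edge clarity" means the eyes swing about ±20 degrees without the head moving.

```python
import math

diag_in, dist_in = 30.0, 36.0           # 30" diagonal viewed from 36"
width_in = diag_in * 16 / math.hypot(16, 9)   # ~26.1" wide for 16:9
half_angle = math.degrees(math.atan((width_in / 2) / dist_in))
horizontal_fov = 2 * half_angle          # ~40 degrees edge to edge
```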
>You cannot depend on optometrists+opticians to make the right choice for you. You need to educate yourself on the available choices.
That's why I bought my own set of trial lenses[2]. I can methodically optimize my computer distance vision without exhausting the patience of my optometrist repeatedly asking, "which is better? 1? or 2? (again) 1? or 2?". I then go to Zenni and order the exact diopters I need. (I'm not going to buy a glaucoma tester so I'll still go to the eye doctor for that.)
I end up with 3 separate pairs of single-purpose glasses: 1 for reading books ~12 inches, 1 for computer distance ~36 inches, and 1 for driving 20+ feet. Swapping out glasses for each purpose is inconvenient but the larger field-of-vision makes it worth the hassle.
I had to deal with a lot of FFI to enable a Java Constraint Solver (Timefold) to call functions defined in CPython. In my experience, most of the performance problems from FFI come from using proxies to communicate between the host and foreign language.
A direct FFI call using JNI or the new foreign interface is fast, and has roughly the same speed as calling a Java method directly. Alas, the CPython and Java garbage collectors do not play nice, and require black magic in order to keep them in sync.
On the other hand, using proxies (such as in JPype or GraalPy) causes significant performance overhead, since the parameters and return values need to be converted, and may trigger additional FFI calls (in the other direction). The fun thing is: if you pass a CPython object to Java, Java has a proxy to the CPython object. And if you pass that proxy back to CPython, a proxy to that proxy is created instead of unwrapping it. The result: JPype proxies are 1402% slower than calling CPython directly using FFI, and GraalPy proxies are 453% slower than calling CPython directly using FFI.
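The proxy-of-a-proxy failure mode can be sketched in pure Python (a toy model, not JPype/GraalPy internals; all names here are made up):

```python
class Proxy:
    """Stand-in for a cross-language handle; each layer adds a call hop."""
    def __init__(self, target):
        self.target = target
    def __call__(self, *args):
        return self.target(*args)

def cross(obj):
    # naive bridge: wraps unconditionally, even if obj is already a proxy
    return Proxy(obj)

def cross_unwrapping(obj):
    # better bridge: a proxy crossing back is unwrapped to the original
    return obj.target if isinstance(obj, Proxy) else Proxy(obj)

inc = lambda x: x + 1
java_side = cross(inc)                  # "Java" holds a proxy to the function
back_naive = cross(java_side)           # back in "Python": proxy-of-proxy
back_smart = cross_unwrapping(java_side)  # back in "Python": the original
```

With the naive bridge, every round trip adds an indirection layer, and every call pays for all of them; the unwrapping bridge keeps round trips free.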
What I ultimately ended up doing was translating CPython bytecode into Java bytecode and generating Java data structures corresponding to the CPython classes used. As a result, I got a 100x speedup compared to using proxies. (Side note: if you are thinking about translating/reading CPython bytecode, don't; it is highly unstable, poorly documented, and its VM has several quirks that make it hard to map directly to other bytecodes.)
Living in Poland, ruled by trumpists for 8 years, I have had these experiences:
- Get a subscription to a high-quality newspaper or magazine. Professionals work there, so you will get real facts, worthy opinions, and less emotion.
- It is better not to use social media. You never know if you are discussing with a normal person, a political party troll, or a Russian troll.
- It is not worth arguing with "switched-on" people. They are getting high doses of emotional content, they are made to feel like victims, and facts do not matter at all. Political beliefs are intermingled with religious beliefs.
- Emotional content is given higher priority by the brain, so it is better to stay away from it, or it will ruin your evening.
- People get addicted to emotions and victimization, so after the public broadcaster was freed from it, around 5% of people switched to a private TV station to get their daily doses.
- Social media feels like a new kind of virus; we all need to get sick and develop some immunity to it.
- In the end, there are more reasonable people, but democracies need to develop better constitutional/legal systems with very short feedback loops. It is very important to react quickly when the ruling regime breaks the law.
A few books which I enjoyed that are somewhat on the subject of polymathy:
- The Polymath by Waqas Ahmed. This one directly discusses polymathy in history, and how our "natural state" really is of that of the polymath, not the specialist. A nice book that helped me get past some mental blocks and really embrace learning whatever it is I'm interested in, without reservations on what others may think about such "distractions".
- The Creative Act by Rick Rubin. This masterpiece is written by a legendary music producer who has probably worked with one of your favorite Western musicians at some point. His Zen-inspired approach to creativity and acceptance of ideas, wherever they may come from, is an essential tool in any polymath's toolbox.
- How We Got to Now by Steven Johnson. I really enjoyed learning how the world-changing inventions that we take for granted were often invented by creatively gluing together wildly divergent ideas, to ultimately make something that appears deceptively simple.
The key takeaway for me is that anyone and everyone can be a polymath, with the correct attitude. Learning is not a skill reserved only for the most intelligent and capable - everyone is capable of learning anything they want. Some people may be more naturally skilled, but that doesn't preclude the rest of us from participating.
The literature on programming languages contains an abundance of informal claims on the relative expressive power of programming languages, but there is no framework for formalizing such statements nor for deriving interesting consequences. As a first step in this direction, we develop a formal notion of expressiveness and investigate its properties. To validate the theory, we analyze some widely held beliefs about the expressive power of several extensions of functional languages. Based on these results, we believe that our system correctly captures many of the informal ideas on expressiveness, and that it constitutes a foundation for further research in this direction.
What's the best way to get started with Blender these days? I'm mostly interested in making art and possibly even 3d printing some stuff (do these skills overlap at all?).
What a beautiful use of technology to uphold someone's personhood, and let them know they are loved, despite (and with regard to) a profound injury.
This reminds me of a desire I've had for a long time: a simple, wall-mountable eInk device that could be configured with a URL (+wifi creds) and render a markdown file, refreshing once every hour or so. It would be so useful for so many applications – I'm a parish priest and so I could use it to let people know what events are on, if a service is cancelled, the current prayer list, ... the applications would be endless. I'd definitely pay a couple of hundred dollars per device for a solid version of such a thing, if it could be mounted and then recharged every month or two.
Background: Started in industrial automation (lots of Fanuc, Yaskawa, Omron, etc.), built a lot of cool systems with cool people that made things with robots. Pivoted to "general" robotics in grad school. Been spending the last 5+ years making "general" robots.
I think the best thing for learning robotics looks pretty similar to learning a programming language: have a specific task in mind that the robot/programming language will help you solve. Even if it's just a pick-and-place and a camera, or a shaker table with a camera over top, or a garden watering timer/relay combo. Just work on something specific with your toy robot and you'll naturally encounter much of what is difficult about robotics (spatial manipulation, control, timing, perception, drivers (GODDAMN DRIVERS), data, you name it).
Going right to a high-DOF arm or trained LLM is always cool, but the person who hacks together a camera/relay/antenna to automate some gardening task or throws some servos and slides together to make a Foosball robot is doing the most interesting things, in my opinion.
And not everyone has the same abilities. Tayloresque methodologies like Agile are great for getting mediocre results for mediocre projects using mediocre developers (if they are not mediocre to begin with, certainly they will become so) in a repeatable and reportable fashion.
I read through the first 10 chapters of TAPL, and skimmed the rest. The first 10 chapters were good to remind myself of the framing. But as far as I can tell, all the stuff I care about is stuffed into one chapter (chapter 11 I think), and the rest isn't that relevant (type inference stuff that is not mainstream AFAIK)
And yeah some of us had the same conversation on Reddit -- somebody needs to make a Crafting Interpreters for type checking :) Preferably with OOP and functional and nominal/structural systems.
---
Also, it dawned on me that what makes TAPL incredibly difficult to read is that it lacks example PROGRAMS.
It has the type checkers for languages, but no programs that pass and fail the type checker. You are left to imagine what the language looks like from the definition of its type checker! Look at chapter 10, for example.
I mean I get that this is a math book, but there does seem to be a big hole in PL textbooks / literature.
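As a tiny illustration of the kind of thing being asked for (my own sketch of a toy language, not anything from TAPL): a checker for a small expression language, together with a program that passes it and one that fails it.

```python
def typecheck(expr, env=None):
    """Return the type of expr, or raise TypeError if it is ill-typed."""
    env = env or {}
    if isinstance(expr, bool):           # check bool before int: bool is an int subclass
        return "Bool"
    if isinstance(expr, int):
        return "Int"
    if isinstance(expr, str):            # variable reference, looked up in env
        return env[expr]
    op = expr[0]
    if op == "add":
        _, a, b = expr
        if typecheck(a, env) == "Int" and typecheck(b, env) == "Int":
            return "Int"
        raise TypeError("add expects Int operands")
    if op == "if":
        _, c, t, f = expr
        if typecheck(c, env) != "Bool":
            raise TypeError("condition must be Bool")
        lt, rt = typecheck(t, env), typecheck(f, env)
        if lt != rt:
            raise TypeError("branches must have the same type")
        return lt
    raise TypeError(f"unknown form {op!r}")

typecheck(("if", True, ("add", 1, 2), 0))   # passes: "Int"
# typecheck(("add", 1, True))               # fails: TypeError
```

Pairing each rule with a program that exercises it (and one that violates it) is exactly what makes Crafting Interpreters readable, and what a "Crafting Type Checkers" would presumably do.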
Also I was kinda shocked that the Dragon Book doesn't contain a type checker. For some reason I thought it would -- doesn't everyone say it's the authoritative compiler textbook? And IIRC there are like 10 pages on type checking out of ~500 or more.
I started journaling last month. I came to it through pens: I randomly decided to find a good pen after getting interested through some YouTube videos (wow, YT actually working as intended, for once). To anyone interested, the Pentel EnerGel 0.7mm is the best gel pen in the entire world, hands down. For oil-based ink, it would be the Uniball Jetstream, I guess (I am a gel guy).

I always wanted to journal and thought it would be great to be able to read my worldviews, problems, and daily life from years ago. But I just never did. Anyway, second month in, I must say, I love it. Not because of writing about some interesting philosophical topics, but rather because it allows the brain to dump all that chaos out (onto the paper) and allows me to focus more, or just be chill in general. It literally frees your thinking, like clearing up your RAM. I would never have thought this would be the main effect, but it is.

If anyone wants to try, I recommend getting a notebook with 100gsm blank paper. They are hard to get, though; mostly you will find either dotted paper or 70-80gsm. Do not compromise on the density; dots vs. lines vs. grid vs. blank is personal preference. A black pen is preferable over a blue one, but again, personal preference. Blank pages impose no restrictions on you in regards to formatting, though. There are well-priced notebooks on AliExpress; look for dot ding, legendary notebooks, or paperideas. Soft or hard cover is personal preference as well. For format, A5 is the best/most practical. If you go with A6, note that most items are actually closer to A7, so make sure you buy a correctly sized one. And a smooth PU leather cover will catch fingerprints, so go with mesh or a rougher PU surface.

As for handwriting, the slower you write, the prettier it will be. You have to find your own tempo where you are satisfied with both the speed and the look of the letters.
I haven't dealt with this side of Java in a while, but it reflects my experience poking at Java 8 performance. At some (surprisingly early) point you'd hit a performance wall due to saturating the memory bus.
A new GC could alleviate this by either going easier on the memory itself, or by doing allocations in a way that achieves better locality of reference.
The tragedy of modern psychiatry is that it's so close to knowing enough to be helpful to people whose symptoms are behavioral, but the psychiatric standard of care is a straitjacket that prevents progress.
I've commented here about my efforts to extract my friend from her psychiatric misdiagnosis. tl;dr: she has the genetic condition (MTHFR) where she can't turn the folic acid used in food fortification into a methylated form of Vitamin B-9, which results in her being harmed by fortified food. Folate deficiency is known to be behind problematic alcohol consumption.
She told me about how adding L-Methyl-Folate to her routine was like flipping a switch from 'depressed' to 'not-depressed'. But the doctors had already decided her substance-associated psychosis required tranquilizers ('antipsychotics') in perpetuity, and only added the vitamin to their forced prescriptions. The latest news is that she escaped from her court-ordered guardian and involuntary mental health treatments, sometime in February 2023. The antipsychotics have worn off, and she's been able to stay sober. She sounds like she's doing well.
SCOTUS dismissed my latest petition without comment, as if to say it's perfectly fine for the mental health industry to perpetrate fraud on the United States Court. Still thinking about how to proceed.
A book - Brain Energy by Chris Palmer - was published in 2022. All the old approaches to forced psychiatric drugging have been obsolete for decades, now they're indefensible. There's nothing new in the book, Dr. Palmer just compiled 50+ years of research into his book.
This is true in principle and it is good to call it out, but in practice I've never seen a mutex-based data structure beat an equivalent lock-free data structure, even at low contention, unless the latter is extremely contrived.
A mutex transaction generally requires 2 fences, one on lock and one on unlock. The one on unlock would not be strictly necessary in principle (on x86 archs the implicit acquire-release semantics would be enough) but you generally do a CAS anyway to atomically check whether there are any waiters that need a wake-up, which implies a fence.
Good lock-free data structures OTOH require just one CAS (or other fenced RMW) on the shared state.
Besides, at large scale, no matter how small your critical section is, it will be preempted every once in a while, and when you care about tail latency that is visible. Lock-free data structures have more predictable latency characteristics (even better if wait-free).