Yes. The Autodesk Fusion course on Udemy that I learned 3D-printing design from had a bunch of instructions for UI elements that had since moved in the application.
It wasn’t a big deal, but I still would have appreciated it if the author had inserted some newly recorded segments or re-recorded some content to make up for it.
> Does it though? I mean I'm still teaching thread-safety and recursion to my interns... a solid foundation is a solid foundation.
I think you are confusing the durability of some fundamentals with the question of whether things are moving fast. There are languages and frameworks rolling out higher-level support for concurrency and parallelism. While you focus on thread-safety, a framework you already use can, and often does, roll out features that outright eliminate those concerns with idiomatic approaches that are far easier to maintain. Wouldn't you classify that as moving fast?
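To make that sort of thing concrete, here's a minimal Python sketch (the names and the thread-pool choice are just illustrative, not any particular framework): the higher-level primitive owns the threads and the result ordering, so the caller never touches a lock.

    # Instead of hand-managing threads and a mutex around a shared list,
    # let the executor own the threads; results come back in input order,
    # so there is no shared mutable state for the caller to protect.
    from concurrent.futures import ThreadPoolExecutor

    def fetch(url: str) -> int:
        return len(url)  # stand-in for real I/O-bound work

    urls = ["a.example", "b.example", "c.example"]

    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(fetch, urls))

    print(results)  # no locks written anywhere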
But are you teaching the basics of programming with 30-year-old textbooks? Can you learn the principles of web dev by building the way they did 30 years ago? Sure. But it will be a pain in the ass compared to using something that is up to date.
See you in ten years! We're a hop, skip, and a jump from one-click automated conversion of every legacy Java app into web- and Electron-desktop-compatible code, and then we can just retire Java entirely. In 2025, Java is not the most performant. It does not run in the most places. It is not the easiest to write or reason about. Its advantage over anything else is momentum, and it's losing that too.
React is just a formalization of a UI update pattern that exists in every app ever made, except the ones that are bad.
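To be concrete about what that pattern is (a toy sketch in Python, nothing React-specific): derive the entire view from one piece of state, and re-render on every change instead of patching the UI by hand.

    # The whole "view" is a pure function of state; every state change
    # triggers a full redraw rather than manual, targeted UI surgery.
    state = {"count": 0}

    def render(s):
        return f"[ Clicked {s['count']} times ]"

    def set_state(**updates):
        state.update(updates)
        print(render(state))  # redraw everything from current state

    set_state()          # initial render
    set_state(count=1)   # a "click": update state, the view follows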
Source: I've written a lot of Java, and nobody is currently paying enough to make it worth doing again.
I don't know what argument you think you are making. React was open-sourced in 2013, whereas AngularJS was released in 2010, and Angular 2+, what we actually call Angular, was released in 2016.
So your counterexamples of popularity are projects that at best started out around the same time as React but, unlike React, wound down in popularity.
After more than a decade, React is not only the most popular framework by far, but also the foundation for a few of the other top-10 frameworks.
So what point did you think you were making? That React managed to become the dominant framework whereas your examples didn't?
The world of the Digital Humanities is a lot of fun (and one I've been a part of, teaching programming to historians and philosophers of science!). It uses computation to provide new types of evidence for historical or rhetorical arguments and data-driven critiques. There's an art to it as well, such as showing evidence for multiple interpretations of a text through the stochasticity of various text-extraction models.
From the author's about page:
> I discovered digital humanities (“humanities computing,” as it was then called) while I was a graduate student at the University of Virginia in the mid-nineties. I found the whole thing very exciting, but felt that before I could get on to things like computational text analysis and other kinds of humanistic geekery, I needed to work through a set of thorny philosophical problems. Is there such a thing as “algorithmic” literary criticism? Is there a distinct, humanistic form of visualization that differs from its scientific counterpart? What does it mean to “read” a text with a machine? Computational analysis of the human record seems to imply a different conception of hermeneutics, but what is that new conception?
They can all be done through Meetup. I think the point here is that using multiple channels avoids vendor lock-in and increases the likelihood that a user will overlap with at least one of the four communication strategies.
The domain of Artificial Life is closely related and has a long-running conference series and journal; they might be worth mining for more inspiration:
Just getting HTML working would be mind-blowing on a Windows 1.0 machine, assuming you were running the typical hardware of the time.
From what I can tell, the 80286 was still a server-grade CPU in 1985, and most PCs were still running an 8086 or 8088, which maxed out at 1 MB of RAM. Just the HTML for the Wikipedia article on the 8086 [0] is nearly 224 kilobytes, close to a quarter of that entire address space.
I'd think it could do TLS encryption; it just wouldn't do it FAST. It would also probably be limited in the ciphers and key-exchange algorithms it could support.
While you’re technically correct, I don’t think it captures the spirit of my comment.
Comparatively modern CPUs contain instructions to aid with performance, such as the AES extensions. If you're doing everything in software, then you'll be limited to the older ciphers, most of which might not even be supported by modern sites given that TLS 1.3 is the recommendation. For example, OpenSSL no longer supports RC4 by default, and AES benchmarks much slower (compared with RC4) on CPUs without AES instructions and with smaller L1 caches.
Sure, you could work around that by supplementing the cache with writes to persistent storage and just throwing more time at the problem. But then you're looking at days or even weeks just to complete the handshake, let alone pulling any HTML content. By which point the novelty of getting an 8088 online would long since have worn off.
So to say TLS wouldn’t be fast is an understatement. Calling it an understatement is itself an understatement :)
If you remove TLS from the equation, things get dramatically simpler. I've got a 64K 8-bit micro (Amstrad CPC 464) with a WiFi adapter. The adapter handles the wireless protocols, but beyond that I wrote a very simple HTML browser. Most other similar projects I've seen have used a Raspberry Pi hooked up via serial to provide offloading, but that's basically turning your 8-bit micro into a dumb terminal and thus feels somewhat like cheating.
> For example, OpenSSL no longer supports RC4 by default, and AES benchmarks much slower (compared with RC4) on CPUs without AES instructions and with smaller L1 caches.
> Sure, you could work around that by supplementing the cache with writes to persistent storage and just throwing more time at the problem.
I think my question is... if you only supported a single key-exchange algorithm and AES-128, and didn't verify server certificates, how much code would a TLS negotiation take? My understanding is that TLS libraries get large because they support dozens of key exchanges and ciphers, plus previous versions of TLS and a huge suite of features that most people probably don't even use. I would THINK you could achieve a bare minimum that an 8088 could run in under 100K of code.
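For a sense of what that stripped-down negotiation looks like at the API level, here's a sketch using Python's ssl module pinned to a single TLS 1.2 suite with certificate verification disabled (example.com is a placeholder, and whether any given server still accepts AES128-SHA is another question):

    import socket
    import ssl

    # Minimal client in the spirit of the proposal: one cipher suite,
    # one protocol version, no certificate verification.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False                     # skip cert verification,
    ctx.verify_mode = ssl.CERT_NONE                # as proposed above
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2   # set_ciphers() governs <= 1.2
    ctx.set_ciphers("AES128-SHA")                  # offer exactly one suite

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version(), tls.cipher())     # the single negotiated suite

Of course the code-size question is about the implementation underneath, not the API, but it shows how little a client strictly has to offer.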
Damn, now I'm tempted to actually try this. Know any way to emulate an 8088 at an accurate speed? ;-)
The problem isn’t code size for unused ciphers. The very first part of any SSH or TLS handshake is agreeing on which cipher suite to communicate with.
The problem is purely the computational overhead of the encryption itself. This is why modern CPUs have instructions to offload some of that overhead to hardware.
You can think of this as the same kind of problem as MPEG encoding and decoding in software vs hardware: the difference in performance is massive. Then try to do that on a 40-year-old 16-bit CPU with virtually no cache and only 1 MB of RAM to play with.
DCA and IAD have their workload shared due to regulatory action:
> The Perimeter Rule is a federal regulation established in 1966 when jet aircraft began operating at Reagan National. The initial Perimeter Rule limited non-stop service to/from Reagan National to 650 statute miles, with some exceptions for previously existing service. By the mid-1980s, Congress had expanded Reagan National non-stop service to 1,250 statute miles (49 U.S. Code § 49109). Ultimately, Reagan National serves primarily as a "short-haul" airport while Washington Dulles International Airport serves as the region's "long-haul" growth airport.
> Congress must propose and approve federal legislation to allow the U.S. Department of Transportation to issue "beyond-perimeter" exemptions which allows an airline to operate non-stop service to cities outside the perimeter. As a result of recent federal exemptions, non-stop service is now offered between Reagan National and the following cities: Austin, Denver, Las Vegas, Los Angeles, Phoenix, Salt Lake City, San Francisco, San Juan, Seattle and Portland, Ore.
I updated my M2 MacBook Pro, and the USB-C hub on my Dell U3219Q monitor no longer works. It's been a substantial disruption to my workflow with external peripherals, and I don't have a USB-A-to-C converter to even partially get back on my feet with my external mouse and keyboard (that was the monitor hub's job!).
See my comment on the parent for a study I did on these questions regarding reading strategies. Of particular relevance, Darwin kept detailed records of the books he read and wished to read in two notebooks [CUL-DAR119, CUL-DAR128], which spanned from his return aboard the Beagle until a few months after The Origin of Species was published.
This idea is very, very closely related to what I did for my PhD dissertation; our analysis of Darwin's readings was published in Cognition and is available at https://arxiv.org/abs/1509.07175.
Summary: Darwin kept a series of reading notebooks recording everything he read over a very convenient time-span: from his return aboard the Beagle until a few months after The Origin of Species was published. (CUL-DAR119, CUL-DAR128, http://darwin-online.org.uk/EditorialIntroductions/vanWyhe_n...) I used the bibliographic entries to look up 97% of the English, non-fiction works in his library and trained an LDA topic model over that corpus. We observed clear behavioral shifts in his reading patterns over time, which corresponded with other biographical events.
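For anyone curious what that modeling step looks like mechanically, here's a toy version using gensim (the two "documents" are obviously placeholders, not the actual library):

    # Sketch of the pipeline: tokenized texts -> bag-of-words -> LDA topics.
    from gensim import corpora, models

    docs = [["species", "variation", "selection"],
            ["geology", "strata", "fossil"]]
    dictionary = corpora.Dictionary(docs)
    bow = [dictionary.doc2bow(d) for d in docs]

    lda = models.LdaModel(bow, num_topics=2, id2word=dictionary, passes=10)
    print(lda.print_topics())  # each book becomes a mixture over these topics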
We tied this into a theory of "information foraging": as people research, they are either exploiting existing topics or exploring new areas. For example, early in the reading history one reading largely followed the next; there were few jumps in subject between readings. As he was writing the Origin and synthesizing resources, the topic shifts became much more exploratory.
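One simple way to operationalize that exploit/explore distinction (a sketch, not the paper's exact measure) is the distance between the topic mixtures of consecutive readings:

    import numpy as np
    from scipy.spatial.distance import jensenshannon

    rng = np.random.default_rng(0)
    # Placeholder data: 100 books, each a mixture over 10 topics.
    readings = rng.dirichlet(np.ones(10), size=100)

    # Small jumps between consecutive readings ~ exploitation;
    # large jumps ~ exploration of new subject areas.
    jumps = [jensenshannon(readings[i], readings[i + 1])
             for i in range(len(readings) - 1)]
    print(np.mean(jumps))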
This original model had no goal-directed behavior; it was only a description of what he read, but it still found these behavioral shifts at the point where he noted he was beginning the Origin. In a later study, we began to look at the "zeitgeist" question you mention: how Darwin's drafts of the Origin diverged from the culture, as represented by the books he was reading (https://arxiv.org/abs/1802.09944). In this second study, we used the reading model and sampled his writings into that model space.
Both this library and his "to be read" notebooks define a nice "adjacent possible" - could he have read other things that may have been closer to his thoughts in the Origin? We use permutations of what he DID read as a null for studying his reading behavior - could he have read them in a more "optimal" manner? (With a lot of qualification on different "optimal" learning strategies...)
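Mechanically, that null is simple: reshuffle the order of what he actually read many times, recompute the jump statistic, and see where the true sequence falls in the shuffled distribution. A self-contained toy version with placeholder data:

    import numpy as np
    from scipy.spatial.distance import jensenshannon

    rng = np.random.default_rng(0)
    readings = rng.dirichlet(np.ones(10), size=100)  # placeholder topic mixtures

    def mean_jump(seq):
        # Average topic distance between consecutive readings.
        return np.mean([jensenshannon(seq[i], seq[i + 1])
                        for i in range(len(seq) - 1)])

    observed = mean_jump(readings)
    null = [mean_jump(rng.permutation(readings)) for _ in range(1000)]
    # Fraction of shuffled orders at least as "focused" as the real one:
    print(observed, np.mean(np.array(null) <= observed))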
Final note: such long-running record keeping was not rare for the time; many people of letters kept "commonplace books" that journaled these reading histories. There was once a project to collect these reading histories, but I don't have the link at hand now.