At first glance this looks like a return to 1980s research on neural representations; back then this kind of work was known as Parallel Distributed Processing.
They're collecting data on children without their informed consent. Babies today have their information capital compromised from the moment they are born, via their parents' shares, purchases, and activity, without any say of their own.
Because the risks involved with complete knowledge of everyone at every moment are huge. Consider that most developed countries have for decades spent millions on spy agencies to get just enough dirt on people of interest to be able to manipulate them.
Imagine that the same information is now available on all people to many giant companies, some with almost government-like spheres of influence, and you can see the potential for manipulation grow.
What sets Facebook apart in particular is that it sells that leverage to advertisers.
An example of that would be the possibility that Russia influenced the US election. Whether or not it happened, in either direction, the concept and the possibility are something to worry about.
The reason we should still care, even though it seems ubiquitous already, is that we messed up by getting here, but we can still fix it.
I don't have the luxury of paying attention to most media or opinions produced by anyone older than 45. Neuroplasticity loss can be really blatant when paired with decades of alcohol use, and they're all going to be dead or vegetative by the time a crisis comes, whereas in the past wiser and less pampered elders could have led us.
There were only 8 Scanimates produced. I have the first R&D machine and the last one produced. I've heard there may be one in storage in Japan, and two in storage in Luxembourg, but mine are the only two I know of that are plugged in and actually function.
Tim Teitelbaum, one of the first researchers on IDEs, summarized problems like this in one of his papers:
"Programs are not text; they are hierarchical compositions of computational structures and should be edited, executed, and debugged in an environment that consistently acknowledges and reinforces this viewpoint."
It is unfortunate that most major code editors and IDEs today do not store units of code (like Lisp S-expressions) in databases that are updated as you edit, which would make problems like this, and operations like metaprogramming, formal analysis, and documentation, much easier to solve.
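A toy sketch in Python of what "code units as a database" could look like (the `CodeDatabase` name and its queries are invented for illustration, not any real editor's API): parsed definitions become records that are re-indexed on each edit and can be queried directly.

```python
import ast

class CodeDatabase:
    """Toy sketch: store top-level function definitions as queryable
    records instead of opaque text, re-indexed on every edit."""
    def __init__(self):
        self.units = {}  # function name -> AST node

    def update(self, source):
        """Called on each edit: re-parse and re-index the buffer."""
        tree = ast.parse(source)
        self.units = {n.name: n for n in tree.body
                      if isinstance(n, ast.FunctionDef)}

    def arity(self, name):
        """A structural query plain text can't answer without parsing."""
        return len(self.units[name].args.args)

db = CodeDatabase()
db.update("def add(a, b):\n    return a + b\n\ndef neg(a):\n    return -a\n")
print(sorted(db.units))  # ['add', 'neg']
print(db.arity("add"))   # 2
```

Tools for metaprogramming or documentation could then query `db.units` instead of each re-parsing the text themselves.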
There are editors that do this. Notably, JetBrains MPS comes to mind. It looks like a text editor, but once you use it you quickly notice that you're actually editing the abstract syntax tree directly.
It's cool, but it has some major downsides too. For example, MPS stores the source as XML, not text (since it isn't text, it's a tree). This makes lots of basic tools we've taken for granted a lot harder, such as git merging etc. They've had to make a custom mergetool just to make basic collaborative coding feasible.
I bet there are other ways around that; all I'm saying is that text has major, major upsides because of the enormous ecosystem support.
>It's cool, but it has some major downsides too. For example, MPS stores the source as XML, not text (since it isn't text, it's a tree). This makes lots of basic tools we've taken for granted a lot harder, such as git merging etc. They've had to make a custom mergetool just to make basic collaborative coding feasible.
Doesn't solving this just require text-to-AST and AST-to-text input and output steps?
Anywhere outside the editor, the programmer just sees regular text.
The issue with this is that operations on text don't necessarily preserve a valid AST. Doing `git merge` on the plain text of a source file may result in invalid code, at which point you have other annoying questions to answer about how to handle text that doesn't parse into a valid AST.
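A minimal Python sketch of that failure mode (the snippets are made up, standing in for a bad merge hunk): the merged text is still a perfectly fine "sequence of lines" to git, but it no longer corresponds to any AST.

```python
import ast

# Base version, and a naive line-level merge gone wrong: a conflicting
# hunk left a stray ')' behind (a stand-in for what `git merge` can
# produce on plain text).
base = "def f(x):\n    return x + 1\n"
merged = "def f(x):\n    return x * 2)\n"

def parses(src):
    """True if the text still maps to a valid Python AST."""
    try:
        ast.parse(src)
        return True
    except SyntaxError:
        return False

print(parses(base))    # True
print(parses(merged))  # False
```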
Rich structured editors free one from text and can encode information that is not currently recorded in text formats. Directly operating on structures would free languages from parsing; correctness checking and compiling could occur at every semantically correct operation.
With a rich structure editor that can do merges, the undo history of edit and refactor operations could be persisted and merged into the VCS. Currently this isn't possible. Text is a projection for the page and a lowest-common-denominator format.
>Directly operating on structures would free languages from parsing, correctness checking and compiling could occur at every semantically correct operation.
Directly operating on structures would mean that you'd have to write an editor, which would have to enforce correctness as well. And then you'd have to write a generator to save those structures in some format that could be written to a file and passed around, and a parser to read that format. And check for correctness again, since who knows what generated that file.
As for constant compilation, that already exists; many IDEs have it. That's because parsing text is not actually hard; the other stages are.
>With a rich structure editor that can do merges, the undo history of edit and refactor operations could be persisted and merged into the VCS. Currently this isn't possible.
Of course it is: you could write a plugin for any IDE that would record edits and refactor operations and save those in or alongside the text (much like they'd have to be saved alongside the AST). Of course, that doesn't help if the user does a manual refactor, but that's no different from the user choosing a node in the rich editor, deleting it, then manually recreating it in its refactored form.
Instead of retrofitting a structured format on top of the current text-centric world, can we imagine whether a structure-centric world would be better? A large number of tools would exist to operate semantically on the same structured format, including editors, versioning systems, grep, etc. Diffs and merges would work better. Languages would define their syntax in terms of a tree input instead of text input, and so on.
It's not about parsing being difficult or easy (e.g. you would still have to parse an abstract structure into a syntax tree specific to your language's semantics). It's about making a structured form the canonical baseline (instead of the canonical form being a 'sequence of lines', i.e. text).
Consider that every programming language and every config language first invents a new syntax to encode a tree-like structure (typically using a combination of curly braces, other brackets, keywords, indentation, etc.), but the code itself is saved as 'text'. This is a lossy encoding: all a generic reader such as `git` or `grep` can infer is that the file contains a 'sequence of lines', and it can then only offer line-based operations (git diffs are line based, grep searches are line based, etc.), when in fact operations based on the tree structure would be more meaningful.
If a tree-based format were the canonical baseline, diffs could display the location of the node added (e.g. 'Added <Class X> -> <Function Y>') without language-specific parsing knowledge. Similarly, most editors could provide 'tree view', 'jump-next', 'jump-up', etc. based on context, again without knowing language-specific details. Further, many internal representations of programs (e.g. intermediate representations in compilers) also use trees, and could potentially be exported into one of these forms to make the plethora of tools work with them.
(BTW, I'm not saying a tree is the best generic structure to replace text, but just using it as an example to argue for advantages of a generalized extensible structure over plain text.)
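To illustrate the kind of diff a tree-aware tool could report, here is a sketch in Python. It cheats by using the language-specific `ast` module; the argument above is that a canonical tree format would let a generic tool do this without per-language parsers.

```python
import ast

def definition_paths(src):
    """Collect each class/function definition as a path of names."""
    paths = set()
    def walk(node, prefix):
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.ClassDef, ast.FunctionDef)):
                path = prefix + (child.name,)
                paths.add(path)
                walk(child, path)
            else:
                walk(child, prefix)
    walk(ast.parse(src), ())
    return paths

old = "class X:\n    def y(self): pass\n"
new = "class X:\n    def y(self): pass\n    def z(self): pass\n"

# Report additions as node paths rather than changed lines.
for added in sorted(definition_paths(new) - definition_paths(old)):
    print("Added", " -> ".join(added))  # Added X -> z
```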
Given the number of security vulns that boil down to broken parsing, I don't think this is true. Maybe it isn't difficult _for you_. It is still a difficult problem. By moving to structured editors, many more dimensions of data can be encoded into a program than can be cleanly represented by text.
Why do you argue so vehemently against someone pursuing an avenue of research?
I'm not sure why you think they don't. Most IDEs typically build a language-specific search index and keep it up to date as you edit. (That's the main difference between an IDE and a text editor, though the line is blurrier these days.)
Having taken Teitelbaum's compiler class as an undergrad, I was happy to ditch the IDE he inflicted on us and go back to text editors. Structured code editing is a tricky UI problem and I didn't find a really good IDE until many years later.
There's no one way to edit code. Sometimes refactoring tools work well, but typing text can be quite efficient too. Getting locked into a tree editor at the expression level is no fun.
If they are, then either they aren't offering user/programmer access to that database, they aren't doing semantic/type binds, or they aren't advertising those features well.
>I was happy to ditch the IDE he inflicted on us and go back to text editors. Structured code editing is a tricky UI problem
It's reconcilable with normal text editing: just update the structure once it's valid. I agree that things like block/visual programming can be absurd.
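One way to sketch that reconciliation in Python (the `HybridBuffer` name is invented for illustration): edit freely as text, and refresh the tree only when the text parses again.

```python
import ast

class HybridBuffer:
    """Sketch: edit as plain text, but keep a last-known-good AST
    that is refreshed whenever the text becomes valid again."""
    def __init__(self):
        self.text = ""
        self.tree = None  # last known-good structure

    def edit(self, new_text):
        self.text = new_text
        try:
            self.tree = ast.parse(new_text)  # update structure once valid
        except SyntaxError:
            pass  # mid-edit state: keep the previous good tree

buf = HybridBuffer()
buf.edit("def f(")               # invalid mid-edit state: tree unchanged
buf.edit("def f(x): return x")   # valid again: tree refreshed
print(type(buf.tree).__name__)   # Module
```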
I'm not sure what you mean by "semantic/type binds", but if you're writing a plugin, IDEs like Eclipse and IDEA do give you access to program syntax via a Java API, and for many languages there is also type-aware indexing. Typically this is exposed to the user as specific queries (such as "go to definition" or "find all usages") and updates ("rename method"). From a UI perspective, more features can be added by writing more plugins and/or improving them.
But this indexing is only on one user's workstation and tends not to scale up well. Updating dependencies or switching to a different branch means rebuilding large parts of the index.
Also, part of the problem is that there is little standardization. Many ecosystems are language, platform, build tool, and/or editor-specific. When you do something new you end up reinventing the wheel.
Especially in statically typed languages you sometimes want to "break" your program for refactoring purposes. Usually I look at a function or class, create new functions/classes, rename the old one, and then fix all the errors one by one by using the new code.
I think Microsoft did that in their Roslyn .NET compiler. There is a server that is fed source code changes, and it updates the AST internally and incrementally. The text editor can then make queries against that AST. I believe Microsoft also took a portion of this and added it to Visual Studio Code.
What are some examples of open source Lisp projects and codebases whose features and elegance could have only been executed as well as they are in the language, or are just great codebases in general to study?
GNU Guix. It uses code staging a lot, which works best in a language in which you can trivially pass around code as data.
It also demonstrates that Scheme is flexible enough to easily implement features that the language designers did not, such as monads and laziness. While both can be done in almost any language, I think that Scheme's macros allow for exceptionally seamless implementations.
In Guix not only package definitions but also build phases are written in Scheme. These build phases are evaluated at a different time, in the context of the build daemon. This means that there are two major strata, both of which are written in Scheme. Expressions that are evaluated in the build context look no different from other expressions.
That's not the only instance of "staged" execution. Guix introduces G-expressions: quoted expressions in a build context in which unquoted package values are replaced with absolute directory names that are not known until execution time.
Quoting and unquoting code and dealing with different strata of evaluation come naturally in Scheme.
Another simpler example of staged execution might be the remote code execution feature of Guile-SSH.
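The two-strata idea can be mimicked crudely in Python (the store path below is made up, and real G-expressions are far richer and hygiene-aware): the host stage builds code as data with a value spliced in, and a separate step evaluates it later.

```python
# Host stage: build code as data, splicing in a value (a crude analogue
# of unquoting inside a G-expression); nothing runs yet.
out_dir = "/gnu/store/abc123-hello"  # hypothetical store path
staged = f"result = 'installed into ' + {out_dir!r}"

# Build stage: the quoted code is evaluated later, possibly by another
# process (Guix hands it to the build daemon; here we just exec it).
ns = {}
exec(compile(staged, "<staged>", "exec"), ns)
print(ns["result"])  # installed into /gnu/store/abc123-hello
```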
Axiom, Maxima (DOE), and Macsyma (Symbolics) are large systems.
Axiom is a FOSS computer algebra system implemented in about 1.2 million lines of Common Lisp.
Axiom relies heavily on the ability to dynamically define and re-define functions. It relies heavily on the macro facility. It presents a domain specific language (called SPAD) which implements mathematically friendly syntax and semantics.
(https://github.com/daly/axiom)
https://mitpress.mit.edu/books/parallel-distributed-processi...