You need tools adequate to the task, but the task isn't necessarily what other people think it is. In this case, I'd bet the tasks are to learn what's going on inside a database and become better at programming in Python, not to write a high-performance production-ready database implementation.
Almost all quoting for non-trivial cases comes down to the [list] command. It does the Right Thing, especially in the critical cases where you're generating a command to call later. Pretty much everything else is either so much simpler that it needs no thought at all, or complex enough that you're best off writing a command to do it properly (e.g., because you're generating JSON or CSV or some other language like that), as you'll want to test it thoroughly too.
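To make the "generating a command to call later" case concrete, here's a small sketch (the filename value is made up to be deliberately nasty):

```tcl
# A value that would break a string-built command:
set filename {a file with spaces; $vars [brackets]}

# Wrong: building the command by string concatenation means it gets
# re-parsed at call time; the ';', '$' and '[' are live syntax again.
set bad "string length $filename"
puts [catch {eval $bad}]        ;# 1 — it errors out

# Right: [list] quotes each word, so evaluation sees the original
# value as exactly one argument, whatever characters it contains.
set good [list string length $filename]
puts [eval $good]               ;# 36 — length of the literal string
```

The point is that a list built by [list] is always a well-formed command prefix; no escaping logic of your own is needed.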
It's hard to be much more specific than that without talking through detailed examples.
The main issue with supporting those IDEs is that the Language Server Protocol is vast and really quite complicated. It's more than a weekend's work to get to doing the interesting bits, so life gets in the way...
The IDE tools for Tcl have tended to be commercial and not to see much general adoption. I've never understood why.
Python has nothing at all like Tcl's safe interpreters (I've looked). You just can't prevent things from leaking; the language isn't designed for that at all. You can take a half-hearted approach by specifying the globals dictionary to eval(), but you can't really count on safety, because the builtins do the leaking and you can't stop anyone from finding a way back to them. The language is just too deeply interlinked for a proper security boundary to be erected anywhere inside it.
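For contrast, the Tcl facility being referred to looks like this (a minimal sketch):

```tcl
# A safe interpreter is a child interpreter created with the
# dangerous commands (open, exec, socket, ...) hidden from it.
set child [interp create -safe]

# Pure computation inside the sandbox still works...
puts [$child eval {expr {6 * 7}}]       ;# 42

# ...but filesystem access simply doesn't exist in there.
if {[catch {$child eval {open /etc/passwd}} err]} {
    puts "blocked: $err"                ;# invalid command name "open"
}
interp delete $child
```

The boundary holds because the child is a genuinely separate evaluation context; there's no shared builtins table to climb back through.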
Well, there are classes and objects in there now. They're designed for modelling pretty heavyweight entities like widgets instead of lightweight things like linked lists.
It was a fun OO system to write. It's much more dynamic than it appears to be at first glance; at first glance it looks much like many other languages' object systems, with a few slight oddities (no new operator, but rather classes have a new method).
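A small sketch of what that looks like in practice (the Point class is just an illustration):

```tcl
package require TclOO

oo::class create Point {
    variable x y
    constructor {px py} { set x $px; set y $py }
    method distance {} { expr {sqrt($x*$x + $y*$y)} }
}

# No 'new' operator: the class is itself an object, and instances
# come from calling its 'new' (or 'create') method.
set p [Point new 3 4]
puts [$p distance]              ;# 5.0

# The dynamism: classes can be reshaped at runtime, and existing
# instances pick up the change immediately.
oo::define Point method scaled {k} {
    list [expr {$x*$k}] [expr {$y*$k}]
}
puts [$p scaled 2]              ;# 6 8
```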
Technically, all values are considered subtypes of strings, but the notion of types is quite different to those of many other languages. In particular, Tcl's types do not describe the memory storage model of their values. (They're implemented with 64-bit words and buffers and arrays and so on, but that's not what the value model describes.)
It works well as long as your goal isn't to totally eliminate boxing of values.
Tcl uses a memory model internally where each piece of memory belongs strictly to the thread that allocated it, with this enforced in lots of places. There are a few loopholes past it (for process-wide concepts such as the current working directory cache, or for inter-thread messaging) but by and large you write what appears to be single-threaded code.
There's support for deep coroutines and non-blocking I/O so it isn't limiting. You only really need multiple threads when dealing with compute-heavy code or a particularly crufty API.
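The coroutine support mentioned above looks like this in its simplest form (a toy generator; real use pairs [yield] with non-blocking channel events):

```tcl
# A coroutine body is ordinary sequential code that suspends at
# each [yield] and resumes where it left off when called again.
proc counter {} {
    set i 0
    while true {
        yield [incr i]
    }
}

# Creating the coroutine runs the body up to its first [yield];
# after that, 'next' is a command that resumes it.
puts [coroutine next counter]   ;# 1
puts [next]                     ;# 2
puts [next]                     ;# 3
```

Because the suspension is "deep" (it can happen anywhere down the call stack, not just in the top frame), blocking-looking I/O code can be written over the event loop without callback spaghetti.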
Technically, Tcl's internal type system is that all other value types are subtypes of string: every value is serializable to a string and will correctly round-trip through that string. But it also means that you can type-pun stuff if you want; it just costs time.
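A quick illustration of that punning; each re-interpretation triggers a reparse internally (which is the time cost):

```tcl
# One value, read three ways.
set xs {1 2 3}
puts [string length $xs]        ;# 5 — read as a string
puts [lindex $xs 1]             ;# 2 — read as a list

# A dict and an even-length list are the same value too:
set d [dict create a 1 b 2]
puts [llength $d]               ;# 4 — the dict, read as a list
puts [dict get $d b]            ;# 2 — and back as a dict
```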
FWIW, the best handling I know of JSON in Tcl is the rl_json package (https://github.com/RubyLane/rl_json) which essentially makes JSON into what works like a native Tcl value type.
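Roughly what using it looks like, assuming the package is installed (API as I recall it from the rl_json README; check there for the details):

```tcl
package require rl_json
namespace import rl_json::json

# A JSON document is an ordinary Tcl string value; rl_json gives it
# an internal representation so repeated access doesn't re-parse.
set doc {{"user": {"name": "bob", "ids": [1, 2, 3]}}}

puts [json get $doc user name]      ;# bob
puts [json get $doc user ids 1]     ;# 2 — array elements by index
```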
It really depends on whether you include the memory in that count. Memory uses masses of transistors without being very interesting, since most of those transistors spend their time just sitting there in a stable state. It's the transistor count (more properly, the gate count; a gate can be thought of as multiple transistors fused together, yet it's truly a single thing in terms of manufacturing and layout) in the computational parts of the processor that is really interesting.
SpiNNaker is built using old ARM968 cores on an ancient process (because that was cheap, for various reasons). The SpiNNaker2 hardware (under design; I can't remember whether it's next year or the year after that it gets finalized) will be on a modern process that will let us pack ten times as many cores onto each chip, with those cores being quite a lot more powerful. Which isn't bad; we're not a commercial outfit here…
There isn't a plan to do the whole human brain; doing so would require at least two further generations of hardware, and likely a new facility to deploy it in. If someone's got a spare billion, and a decade or so to work on it, we could give it a go, but it's a lot to spend with no actual certainty that we'd succeed.
We do plan to simulate the mouse brain, but our interests are more in understanding network-level mechanisms that are difficult to study at the neuron or whole-brain levels. The meso-scale stuff is where understanding is critical and tricky.
When you say "simulate the mouse brain", what do you mean exactly? Given that even C. elegans, with its roughly 300 neurons, can't be simulated well enough to actually be a living digital organism.