> you must surely be guilty of something. Otherwise, all offers would be meaningless and worth nothing.
You don't have to be "guilty" of anything to be liable in civil law (which contract law is a part of). "Guilt" is a concept from criminal law. It isn't required for contracts to be enforceable.
In general (there are exceptions) offers alone aren't enforceable and don't result in a contract. You need other elements (agreement by the parties, plus something done in return for what's offered) for a contract to be formed - and then it's enforceable.
> For years, despite functional evidence and scientific hints accumulating, certain AI researchers continued to claim LLMs were stochastic parrots: probabilistic machines that would: 1. NOT have any representation about the meaning of the prompt. 2. NOT have any representation about what they were going to say.
But did any AI researchers actually claim there was no representation of meaning? I thought that, generally, the criticism of LLMs was that while they do abstract from their corpus - i.e., you can regard them as having a representation of "meaning" - it's tightly and inextricably tied to the surface-level representation, it isn't grounded in models of the external world, and LLMs have only a poor ability to transfer that knowledge to other surface encodings.
I don't know who the "certain AI researchers" are supposed to be. But the "stochastic parrot" paper by Bender et al [1] says:
> Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind.
That's a very different objection to the one antirez describes - I think he's erecting a straw man. But I'd be happy to be corrected by anyone more familiar with the research.
> Text generated by an LM is not grounded in communicative intent
This means precisely that no representation of what the model wants to say should exist in the activation states, and that only single-token probabilistic inference should be at play.
Their model also requires the complementary claim: that the model does not know, semantically, what the query really means.
"Stochastic parrot" has a scientific meaning, and just from observing how the models behave it is quite evident that they were very wrong. But now we also have strong evidence (via probing) that the sentence you quoted is not correct: the model knows, in general terms, the idea it is about to express, and features about things it is going to say much later activate many tokens earlier, including conceptual features that only become relevant later in the sentence / concept being expressed.
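To be concrete about what I mean by probing, here is a rough sketch of the technique in general - not any specific paper's setup; the model name, prompts, and labels below are placeholders:

```python
# Rough sketch of a linear probing experiment on a causal LM's hidden states.
# The model ("gpt2"), prompts, and labels are placeholders for illustration.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def last_token_state(prompt, layer=-1):
    """Hidden state of the final prompt token at a given layer."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer][0, -1].numpy()

# Toy labelled prompts; a real probe uses a large, controlled dataset and
# held-out evaluation to show the feature is linearly decodable.
prompts = ["The capital of France is", "The capital of Italy is"]
labels = [0, 1]  # e.g. which city the model is about to name
X = [last_token_state(p) for p in prompts]
probe = LogisticRegression(max_iter=1000).fit(X, labels)
```

If a simple linear classifier can read a "what comes later" feature out of the activations well before the relevant tokens are produced, that's the kind of evidence I'm referring to.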
You are making the mistake, common in this context, of stretching the stochastic parrot into a model that is not scientifically pinned down, one that can be made large enough to accommodate any evidence arriving from new generations of models. The stochastic parrot does not understand the query, nor is it trying to reply to you in any way; it just exploits a probabilistic link between the context window and the next word. That link can be more complex than a Markov chain, but it must be of the same kind: lacking any understanding whatsoever and any communicative intent (no representation of the concepts / sentences that are required to reply correctly). How is it possible to believe this today? And check for yourself what the top AI scientists today believe about the correctness of the stochastic parrot hypothesis.
> > Text generated by an LM is not grounded in communicative intent
> This means precisely that no representation of what the model wants to say should exist in the activation states, and that only single-token probabilistic inference should be at play.
That's not correct. It's clear from the surrounding paragraphs what Bender et al mean by this phrase. They mean that LLMs lack the capacity to form intentions.
> You are making the mistake, common in this context, of stretching the stochastic parrot into a model that is not scientifically pinned down, one that can be made large enough to accommodate any evidence arriving from new generations of models.
No, I'm not. I haven't, in fact, made any claims about the "stochastic parrot". Rather, I've asked whether your characterisation of AI researchers' views is accurate, and suggested some reasons why it may not be.
gtk-vector-screenshot (<https://github.com/nomeata/gtk-vector-screenshot>) will do this, but for GTK apps only. It relies on a custom protocol layered on top of X Window, and I think traverses the tree of GTK widgets to create a vector representation. For a general screenshot program to work, I imagine it would need some sort of hook into every GUI framework used on your system.
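As a rough sketch of the core idea - this is only an illustration of "render the widget tree to a vector surface" from inside a GTK 3 app, not gtk-vector-screenshot's actual code or protocol:

```python
# Sketch: a GTK 3 app drawing its own widget hierarchy into an SVG file
# via cairo. Illustration only; the real tool triggers this over X.
import cairo
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

def save_widget_as_svg(widget, path):
    """Render a realized widget (e.g. a top-level window) into an SVG file."""
    alloc = widget.get_allocation()
    surface = cairo.SVGSurface(path, alloc.width, alloc.height)
    cr = cairo.Context(surface)
    widget.draw(cr)      # GTK 3 renders the whole widget tree onto the context
    surface.finish()

# Hypothetical usage from within the app: save_widget_as_svg(window, "shot.svg")
```

That's also why it only works for GTK apps: each toolkit would need its own equivalent of this hook.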
The whole point of corporations is that they can sue and be sued like a natural person can - they have legal personhood, and can pursue and defend actions in their own name.
I assume the portion of the first link you're referring to is the section that starts
> Courts have also split on whether corporations may be held liable under the ATS.
This is a question about the ATS and its scope specifically; the source is not discussing the nature of corporations generally.
It sounds like the scope of the ATS is fairly ill-defined, and that at various points courts have looked for whatever reasons they could to limit its scope, and whether a corporation was involved has just been one of those reasons.
Romans didn't use chariots, except for chariot racing. For military purposes, they used cavalry and foot soldiers. For domestic transport, they used wagons. And as far as I'm aware, they never constructed "tunnels" for transport - are you referring to mine tunnels?
GP is probably referring to the standard railroad gauge, which is alleged [1] to have been derived from the width of Roman chariot wheels, and which would have driven train tunnel widths in modern times.
What? No, it's a Latin word, and not a distortion of anything. It's a conjugation of the verb 'gero', which has several meanings - but when put next to 'bene', most likely means 'to behave, conduct oneself, comport oneself'. You can see the conjugation table here: https://en.wiktionary.org/wiki/gero#Latin. And it forms part of the legal Latin phrase 'Quamdiu se bene gesserit', or 'So long as he shall behave himself properly'. (https://www.oxfordreference.com/display/10.1093/acref/978019...).
We don't know what Herbert actually meant by "Bene Gesserit", or even whether it has strictly one meaning, since he never elaborated on that. Given how syncretic everything else is, it is entirely plausible that there are multiple references encoded in the name. The Latin one is fairly obvious given that they refer to each other as "Reverend Mothers".
But then note that Bene Tleilaxu is clearly not Latin, and makes more sense if you interpret the word "bene" ("bani") in its Arabic meaning - "descendants of", in practice often referring to tribes and similar groupings (as in Bani Quraish or Bani Isra'il) - Tleilax being their home planet. If we apply the same to "Bene Gesserit", it would mean "Daughters of ???". One possibility then would be Arabic "Jazirah", "peninsula".
An even more interesting theory along these lines is that "Gesserit" refers to the mythical demon-slaying hero-king Gesser (aka Geser aka Gesar) of Mongolian and Tibetan folklore. In Mongolian, "... of Gesar" is "Geseriyn". Gesar is the Chosen Son of the Sky God (head of the pantheon), the first man who descended from Heaven to purge the world of evil demons that menace humanity - it sure does make for some interesting parallels with Kwisatz Haderach...
Again, to re-iterate, it's entirely possible that all of these are simultaneously true. Herbert liked referencing obscure (to his culture) folklore, so a play on words like this could well be intentional.
Interestingly enough, that ticket is about preserving and/or following - I would be happy with ignoring, so it doesn't end up in an infinite loop. As it stands, a self-referencing symlink with files in the same directory will result in either your disk filling up, your inodes filling up, or a maximum nested-paths error (assuming your FS has such a thing); a minimal reproduction of the loop is sketched below.
Edit: and until the ticket you just linked, I assumed scp and cp were intended to be close equivalents, given the whole ending in ‘cp’ thing.
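To make the failure mode concrete, here's the loop itself - my assumption about what a symlink-following recursive copy runs into, shown with a plain directory walk rather than scp:

```python
# A self-referencing symlink, and why a traversal that follows symlinks
# without tracking visited directories never terminates on its own.
import os

os.makedirs("demo", exist_ok=True)
if not os.path.islink("demo/loop"):
    os.symlink(".", "demo/loop")      # demo/loop resolves back to demo/

# With followlinks=True, os.walk happily descends demo/loop/loop/loop/...
# until path-length limits (or disk/inode exhaustion, if copying) kick in.
for root, dirs, files in os.walk("demo", followlinks=True):
    print(root)
    if root.count("loop") > 5:        # cut the demo short
        break
```

A copy that follows symlinks hits the same unbounded recursion, which is why I'd be content with "ignore" as the behaviour.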
The sunk-cost fallacy is when you weight the value of something disproportionately because of the effort or expense you've already put into it.
But if you've got an OS that's certified for the work you're doing, and it's not costing you extra to work on that OS, then there's no fallacy - you're getting more value out of the cost of certification you've incurred, and shifting to some other OS would presumably require you to incur the expense of certification again.
That said, the skills needed to work with a legacy OS will tend to become rarer, so you ought to factor that into your calculations.