This is also the case with Google Fuchsia, just replace 9P with FIDL. I'm really hoping Fuchsia doesn't end up just being vaporware since it has made some very interesting technical decisions (often borrowing from Plan 9, NixOS, and others.)
> One way of thinking about a blockchain is to think of it as a shared datastructure to keep databases in sync. Any time you want to distribute your database over more than just a single central place, in a cryptographically secure way, you're probably going to re-invent a blockchain to do it.
Even more specifically, a blockchain is for when you want Byzantine fault tolerance, i.e. you don't trust one or more of the actors involved. This is the main distinguishing feature of blockchains IMO, the reason we have proof of work, proof of stake, etc. It's also the main thing I saw people getting wrong when using blockchains during the earlier waves of cryptocurrency fever; most proposals for blockchains did make sense as distributed public ledgers, but didn't really need the extra computational overhead because only trusted parties were adding blocks to begin with.
> Even more specifically, a blockchain is for when you want Byzantine fault tolerance, i.e. you don't trust one or more of the actors involved.
Often yes. But blockchains can also be useful simply for backups and scaling: by cryptographically linking every bit of data together, you can be confident that you actually have a complete copy without any errors.
Git is basically a blockchain for this exact reason: starting from a git commit hash, git works backwards, checking that every byte of data is what it should be. Similarly, modern filesystems like btrfs use strong (if not cryptographically strong) hashes for this same reason.
Though in a sense, you're still correct: the "actor" you aren't trusting here is your own computer hardware.
I think you're technically correct here: if you just have a bunch of Merkle trees where each one tracks the hash of the previous block, it would be accurate to refer to it as a blockchain even if you're not bothering to implement any of the distributed consensus algorithms that cryptocurrencies are actually known for. It's probably not the first thing that would come to mind, but it is a correct way to use that word.
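A minimal sketch of such an integrity-only hash chain (no consensus, just each block committing to the hash of the previous one; the names are illustrative) could look like this in Python:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block's canonical JSON serialization."""
    data = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()

def append_block(chain: list, payload: str) -> None:
    """Link a new block to the hash of the previous one."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload})

def verify(chain: list) -> bool:
    """Walk the chain and check every link, git-style."""
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, "first")
append_block(chain, "second")
assert verify(chain)

chain[0]["payload"] = "tampered"  # corrupt an early block...
assert not verify(chain)          # ...and the next link's hash no longer matches
```

This is exactly the property git relies on: any change to an earlier object changes its hash, which breaks every later reference to it.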
I think both you and GP are correct, but in different ways.
It's true that the English language has a very large number of phonemes... but accents tend to regularize/restrict these phonemes. For example, a typical bilingual speaker of Indian English and Hindi will replace instances of the /æ/ phoneme (as in "blast" or "fast") with another phoneme like /a:/ (as in "father"). Which isn't that unusual since /æ/ is pretty uncommon among languages.
Other rare English phonemes include the dental fricatives, i.e. the "th" sounds in "ether" (voiceless) and "either" (voiced). Speakers of Indian English often replace these with dental stops: a "t" sound (voiceless) or a "d" sound (voiced). (Note that Devanagari has a _lot_ of stops, so this is one place where it cannot be cleanly encoded into the Latin alphabet without diacritics.)
So overall: while I think Devanagari can't encode e.g. American English, it can actually do a pretty solid job of encoding Indian English, but not the other way around.
Sounds like a reasonable theory but do you have an actual example? The one you gave:
> For example, a typical bilingual speaker of Indian English and Hindi will replace instances of the /æ/ phoneme (as in "blast" or "fast") with another phoneme like /a:/ (as in "father"). Which isn't that unusual since /æ/ is pretty uncommon among languages.
does not apply to Indian languages because most of them have daily-use-words with the /æ/ sound.
For prices specifically I think it's fair to say that inflation only goes in one direction, but for larger market trends, IMO the key here is _habit building_.
Many things were technically feasible pre-pandemic but not done habitually: remote work, streaming movies instead of going to the theater, ordering delivery instead of dining out, and so on. The pandemic forced many people to change their habits and get over any initial inertia (e.g. investing in a WFH setup or home theater). The result is that when the world returned to normal, the markets didn't: consumer habits had already moved on.
But those training the LLMs are still using the works, and not just to discuss them, which I think is the point of fair use doctrine. I guess I fail to see how it's any different from me using it in some other way? If I wanted to write a play very loosely inspired by Blood Meridian, it might be transformative, but that doesn't justify me pirating the book.
I tend to think copyright should be extremely limited compared to what it is now, but to me the logic of this ruling amounts to "it's ok for a corporation to use lots of works without permission, but not for an individual to use a single work without permission." Maybe if they suddenly loosened copyright enforcement for everyone I might feel differently.
"Kill one man, and you are a murderer. Kill millions of men, and you are a conqueror." (An admittedly hyperbolic comparison, but similar idea.)
>If I wanted to write a play very loosely inspired by Blood Meridian, it might be transformative, but that doesn't justify me pirating the book.
I think that's the conclusion of the judge. If Anthropic were to buy the books and train on them, without extra permission from the authors, it would be fair use, much like if you were to be inspired by it (though in that case, it may not even count as a derivative work at all, if the relationship is sufficiently loose). But that doesn't mean they are free to pirate it either, so they are likely to be liable for that (exactly how that interpretation works with copyright law I'm not entirely sure: I know in some places that downloading stuff is less of a problem than distributing it to others because the latter is the main thing that copyright is concerned with. And AFAIK most companies doing large model training are maintaining that fair use also extends to them gathering the data in the first place).
(Fair use isn't just for discussion. It covers a broad range of potential use cases, and they're not enumerated precisely in copyright law AFAIK, there's a complicated range of case law that forms the guidelines for it)
I think the issue is that it's actually quite difficult to "unlearn" something once you've seen it. I'm speaking more from human learning than AI learning, but since AI is inspired by our view of nature, it will have similar qualities. If I see something that inspires me, regardless of whether I paid for it, I may not even know what specifically inspired me. If I sit on a park bench and an idea comes to me, it could come from any number of things: the bench, the park, the weather, what movie I watched last night, stuff on the wall of a restaurant while I was eating there, etc.
While humans don't have encyclopedic memories, our brains connect a few dots to make a thought. If I say "Luke, I am your father", it doesn't matter that that isn't even the actual line; anyone who's seen Star Wars knows what I'm quoting. I may not be profiting from using that line, but that doesn't stop Star Wars from inspiring other elements of my life.
I do agree that copyright law is complicated and AI is going to create even more complexity as we navigate this growth. I don't have a solution on that front, just a recognition that AI is doing what humans do, only more precisely.
AFAIK (IANAL), copyright and exhaustion rights are completely different. Under copyright, once a book is purchased, that's it. Reselling the same, or a transformed (e.g. highlighted), 'used' work is 100% legal, as is consuming it at your discretion (in your mind {a billion times}, in a fire, or (yes, even) in what amounts to a fancy calculator).
(that's all to say copyright is dated and needs an overhaul)
But that's taking a viewpoint of 'training a personal AI in your home', which isn't something that actually happens... The issue has never been the training data itself. Training an AI and 'looking at data and optimizing a (human understanding/AI understanding) function over it' are categorically the same, even if mechanically/biologically they are very different.
> I tend to think copyright should be extremely limited compared to what it is now, but to me the logic of this ruling is illogical other than "it's ok for a corporation to use lots of works without permission but not for an individual to use a single work without permission."
That's not what the ruling says.
It says that training a generative AI system on one or more works is fair use, so long as the system is not designed primarily as a direct replacement for those works, and that print-to-digital destructive scanning for storage and searchability is fair use.
These are both independent of whether one person or a giant company or something in between is doing it, and independent of the number of works involved (there's maybe a weak practical relationship to the number of works involved, since a gen AI tool that is trained on exactly one work is probably somewhat less likely to have a real use beyond a replacement for that work.)
But if you did pirate the book, and let's say it cost $50, and then you used it to write a play based on that book and made $1 million selling that, only the $50 loss to the publisher would be relevant to the lawsuit. The fact that you wrote a non-infringing play based on it and made $1 million would be irrelevant to the case. The publisher would have no claim to it.
The judge actually agreed with your first paragraph:
> This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use. There is no decision holding or requiring that pirating a book that could have been bought at a bookstore was reasonably necessary to writing a book review, conducting research on facts in the book, or creating an LLM. Such piracy of otherwise available copies is inherently, irredeemably infringing even if the pirated copies are immediately used for the transformative use and immediately discarded.
(But the judge continued that "this order need not decide this case on that rule": instead he made a more targeted ruling that Anthropic's specific conduct with respect to pirated copies wasn't fair use.)
The analogy to training is not writing a play based on the work. It's more like reading (experiencing) the work and forming memories in your brain, which you can access later.
I'm allowed to hear a copyrighted tune, and even whistle it later for my own enjoyment, but I can't perform it for others without license.
It is easy to dismiss, but the burden of proof would be on the plaintiff to prove that training a model is substantially different than the human mind. Good luck with that.
That makes no sense as a default assumption. It's like saying FSD is like a human driver. If it's a person, why doesn't it represent itself in court? What wages is it being paid? What are the labor rights of AI? How is it that the AI is only human-like when it's legally convenient?
What makes far more sense is saying that someone, a human being, took copyrighted data and fed it into a program that produces variations of the data it was fed. This is no different from a photoshop filter, and nobody would ever need to argue in court that a photoshop filter is not a human being.
If I buy a book, and use it to prop up the table on which I build a door, I don't owe the author any additional money over what I paid for it.
If I buy a book, then as long as the product the book teaches me to build isn't a competing book, the original author should have no avenue for complaint.
People are really getting hung up on the computer reading the data and computing other data with it. It shouldn't even need to get to fair use. It's so obviously none of the author's business well before fair use.
> But those training the LLMs are still using the works, and not just to discuss them, which I think is the point of fair use doctrine.
Worse, they’re using it for massive commercial gain, without paying a dime upstream to the supply chain that made it possible. If there is any purpose of copyright at all, it’s to prevent making money from someone else’s intellectual work. The entire thing is based on economic pragmatism: mere copying obviously does not deprive the creator of the work itself, so the only justification in the first place is to protect those who seek to sell immaterial goods, by allowing them to decide how their work can be used.
Coming to the conclusion that you can "fair use" yourself out of paying for the most critical part of your supply chain makes me upset for the victims of the biggest heist of the century. But in the long term it can have devastating chilling effects, where information silos become the norm and various forms of DRM grow even more draconian.
Plus, fair use bypasses any licensing, no? Meaning even if today you clearly specify in the license that your work cannot be used in training commercial AI, it isn’t legally enforceable?
> Worse, they’re using it for massive commercial gain, without paying a dime upstream to the supply chain that made it possible. If there is any purpose of copyright at all, it’s to prevent making money from someone’s else’s intellectual work.
This makes no sense. If I buy and read a book on software engineering, and then use that knowledge to start a career, do I owe the author a percentage of my lifetime earnings?
Of course not. And yet I've made money with the help of someone else's intellectual work.
Copyright is actually pretty narrowly defined for _very good reason_.
> If I buy and read a book on software engineering
You're comparing that you as an individual purchase one copy of a book to a multi-billion dollar company systematically ingesting them for profit without any compensation, let alone proportional?
> do I owe the author a percentage of my lifetime earnings?
No, but you are a human being. You have a completely different set of rights from a corporation, or a machine. For very good reason.
If you pirate a book on software engineering and then use that knowledge to start a career, do you owe the author the royalties they would be paid had you bought the book?
If the career you start isn't software engineering directly but instead re-teaching the information you learned from that book to millions of paying students, is the regular royalty payment for the book still fair?
Definitely seems reasonable to say "you can train on this data but you have to have a legal copy"
Personally I like to frame most AI problems by substituting a human (or humans) for the AI. Works pretty well most of the time.
In this case, if you hired a bunch of artists/writers who had somehow never seen a Disney movie, and to train them to make crappy Disney clones you made them watch all the movies, it certainly would be legal to do so, but only if they had legit copies in the training room. Pirating the movies would be illegal.
Though the downside is that it does create a training moat. If you want to create the super-brain AI that's conversant in the corpus of copyrighted human literature, you're going to need a training library worth millions.
> Personally I like to frame most AI problems by substituting a human (or humans) for the AI. Works pretty well most of the time.
Human time is inherently valuable, computer time is not.
The issue with LLMs is that they allow doing things at a massive scale which would previously be prohibitively time consuming. (Though you could then ask: how much electricity is one human life worth?)
If I "write" a book by taking another and replacing every word with a synonym, that's obviously plagiarism and obviously copyright infringement. How about also changing the word order? How about rewording individual paragraphs while keeping the general structure? It's all still derivative work but as you make it less detectable, the time and effort required is growing to become uneconomical. An LLM can do it cheaply. It can mix and match parts of many works but it's all still a derivative of those works combined. After all, if it wasn't, it would produce equally good output with a tiny fraction of the training data.
The outcome is that a small group of people (those making LLMs and selling access to their output) get to make huge amounts of money off of the work of a group that is several orders of magnitude larger (essentially everyone who has written something on the internet) without compensating the larger group.
That is fundamentally exploitative, whether the current laws accounted for that situation or not.
That's part of the issue. I'm not sure if this has happened in visual arts, but there is in fact precedent against hiring a sound-alike of the person you want to sound like. You can't be in talks with Scarlett Johansson, reject her, and then hire a sound-alike and say "talk like Scarlett". It's pretty clear at that point what you want, but you didn't want to pay talent for it.
I see elements of that here: buying copyrighted works not to be exposed to and inspired by them, nor to utilize the author's talents, but to fuel a commercialization of sound-alikes.
> but there is in fact precedent against hiring a sound-alike of the person you want to sound like. You can't be in talks with Scarlett Johansson, reject her, and then hire a sound-alike and say "talk like Scarlett". It's pretty clear at that point what you want, but you didn't want to pay talent for it.
You're referencing Midler v Ford Motor Co in the 9th circuit. This case largely applies to California, not the whole nation. Even then, it would take one Supreme Court case to overturn it.
In the human-training case, a store-bought DVD would probably still run afoul of that licensing issue. That's a broader topic of audience, and I didn't want to muddy the analogy with that detail.
It changes the definition of what a "legal copy" is but the general idea that the copy must be legal still stands.
> It's not adding to the cultural expression like a parody would.
Says who?
> Is AI contributing to education and/or culture _right now_, or is it trying to make money?
How on earth are those things mutually exclusive? Also, whether or not it's being used to make money is completely irrelevant to whether or not it is copyright infringement.
I can't find anything in there or its linked articles about culture. I do find quite a bit about synthetic performers and digital replicas and making sure that people who do voice acting don't have their performance used to generate material that is done at a discounted rate and doesn't reimburse the performer.
> Protective A.I. guardrails for actors who work in video games remain a point of contention in the Interactive Media Agreement negotiations which have been ongoing from October 2022 until last month’s strike. Other A.I.-related panels Crabtree-Ireland participated in included a U.S. Department of Justice and Stanford University co-hosted event about promoting competition in A.I., as well as a Vanderbilt University summit on music law and generative A.I. SAG-AFTRA Executive Vice President Linda Powell discussed the interactive negotiations and A.I.’s many implications for creatives during her keynote speech at an Art in the Age of A.I. symposium put on by Villa Albertine at the French Embassy.
> She said A.I. represents “a turning point in our culture,” adding, “I think it’s important that we be participants in it and not passengers in it ... We need to make our voices known to the handful of people who are building and profiting off of this brave new world.”
This doesn't indicate that it's good or bad, but rather that they want to make sure that people are in control of it and are compensated for the works created from their performances.
Agreed. If I memorize a book and am deployed into the world to talk about what I memorized, that is not a violation of copyright. Which is reasonable logically, because this is essentially what an LLM is doing.
But a commercial product is reaching parity with human capability.
Let's be real: humans have special treatment (more special than animals, since we can eat and slaughter animals but not other humans) because WE created the law to serve humans.
So in terms of being fair across the board LLMs are no different. But there's no harm in giving ourselves special treatment.
Generative AIs are very different from humans because they can be copied losslessly and scaled tremendously, and also have no individual liability, nor awareness of how similar their output is to something in their training material. They are very different in constraints and capabilities from humans in all sorts of ways. For one, a human will likely never reproduce a book they read without being aware that that’s what they are doing.
>So in terms of being fair across the board LLMs are no different
Why should "fair" factor into it? The LLMs are not humans, thus they have no rights, and treating them fairly shouldn't come into it. Stop anthropomorphizing linear algebra ffs.
Except you can't do it at a massive scale. LLMs both memorize at a scale bigger than thousands, probably millions of humans AND reproduce at an essentially unlimited scale.
You can talk about it, but you can't sell tickets to an event where you recite from memory all the poems written by someone else without their permission.
LLMs may sometimes reproduce exact copies of chunks of text, but I would say it also matters that this is a marginal use case: it's not the main value proposition that drives LLM company revenues, it's not the use case that's marketed, and it's not what people use LLMs for in real life.
I wouldn't call it that. Goldsmith took a photograph of Prince, which Warhol used as a reference to create an illustration. Vanity Fair then chose to license Warhol's print instead of Goldsmith's photograph.
So, despite the artwork being visually transformative (silkscreen vs. photograph), the actual use was not transformed.
The nature of how they store data makes it not okay in my book. Massage the data enough and you can generate something that seems infringement-worthy.
For closed models the storage problem isn't really a problem, they can be judged by what they produce not how they store it as you don't have access to the actual data. That said, open weight LLMs are probably screwed, if enough of the work remains in the weights such that they can be extracted (even if it's without even talking to the LLM) then the weight file itself represents a copy of the work that's being distributed. So enjoy these competent run-at-home models while you can, they're on track for extinction.
Why doesn’t this apply to humans? If I memorize something such that it can be extracted did I violate the law? It’s only if I choose to allow such extraction to occur then I’m in violation of the law right?
So if I or an LLM simply doesn’t allow said extraction to occur, memorization and copying is not against the law.
I think an important distinction here is distribution... did you tell someone else what you memorized? Is downloading a model akin to distributing that same information?
What if I don't download the model and I just communicate with it. Sort of like chatting with another human. That's not a copyright issue right? I mean that's how most LLMs are deployed today.
My understanding is that it depends on a judge/jury's subjective opinion on how similar the output is to something copyrightable. Perhaps intent may play a role as well.
You don't need a license for most of what people do with traditional, physical copyrighted copies of works: read them, play a DVD at home, etc. Those things are outside the scope of copyright. But you do need a license to make copies, and ebooks generally come with licensing agreements, again because to read an ebook, you must first make a brand new copy of it. Anyway as a result physical books just don't have "licenses" to begin with and if they tried they'd be unenforceable, since you don't need to "agree" to any "terms" to read a book.
> If a publisher adds a "no AI training" clause to their contracts?
This ruling doesn't say anything about the enforceability of a "don't train AI on this" contract, so even if the logic of this ruling became binding precedent (trial court rulings aren't), such clauses would be as valid after as they are today. But contracts only affect people who are parties to the contract.
Also, the damages calculations for breach of contract are different than for copyright infringement; infringement allows actual damages and infringer's profits (or statutory damages, if greater than the provable amount of the others), but breach of contract would usually be limited to actual damages ("disgorgement" is possible, but unlike with infringer's profits in copyright, requires showing special circumstances.)
Fair Use and similar protections are there to protect the end user from predatory IP holders.
First, I don't think publishers of physical books in the US get the right to establish a contract. The book can be resold, for instance, and that right cannot be diminished. And second, adding more cruft to the distribution of something that the end user has a right to transform isn't going to diminish that right.
Fair use "overrides" licensing in the sense that one doesn't need a copyright license if fair use applies. But fair use itself isn't a shield against breach of contract. If you sign a license contract saying you won't train on the thing you've licensed, the licensor still has remedies for breach of contract, just not remedies for copyright infringement (assuming the act is fair use).
I am not going to sign a contract at the bookstore. Anyone who tries to get me to sign a contract at the bookstore is just going to lose book sales. IIRC the case involved Anthropic literally feeding physical books into scanners. Your proposed solution sounds like it's just going to make books worse, not AI better.
I'm not proposing any kind of solution, just stating what the law currently is. A book purchased at a store is a purchase; content obtained from online services like Bloomberg or LexisNexis is typically licensed; more and more of these license contracts include AI-focused restrictions.
I suspect IP like text is going to follow the college virtual textbook model where DRMed software is needed to access it and physical copies won't exist. Maybe some HDCP-like protection to stop screen scraping.
To access them, institutions do have to sign contracts, along with abiding by licensing terms.
I know, but the article mentions that a separate ruling will be made about that pirating.
quote: “We will have a trial on the pirated copies used to create Anthropic’s central library and the resulting damages,” Judge Alsup wrote in the decision. “That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for theft but it may affect the extent of statutory damages.”
This tells me Anthropic acquired these books legally afterwards. I was asking whether, during that purchase, the seller could add a no-training clause to the sales contract.
> The doctrine was first recognized by the Supreme Court of the United States in 1908 (see Bobbs-Merrill Co. v. Straus) and subsequently codified in the Copyright Act of 1909. In the Bobbs-Merrill case, the publisher, Bobbs-Merrill, had inserted a notice in its books that any retail sale at a price under $1.00 would constitute an infringement of its copyright. The defendants, who owned Macy's department store, disregarded the notice and sold the books at a lower price without Bobbs-Merrill's consent. The Supreme Court held that the exclusive statutory right to "vend" applied only to the first sale of the copyrighted work.
> Today, this rule of law is codified in 17 U.S.C. § 109(a), which provides:
> Notwithstanding the provisions of section 106 (3), the owner of a particular copy or phonorecord lawfully made under this title, or any person authorized by such owner, is entitled, without the authority of the copyright owner, to sell or otherwise dispose of the possession of that copy or phonorecord.
---
If I buy a copy of a book, you can't limit what I can do with the book beyond what copyright restricts me.
See, but if you ask a copyright attorney: Google lost. This is what I mean by aspirational. They won something, in very similar circumstances to Anthropic ("fair use"), but everything else that made what they were doing a practical reality instead of purely theoretical required negotiation with the Authors Guild, and indeed, they are not doing what they wanted to do, right? Anthropic still has to go to trial, they had to pirate the books to train, and they will not win on their right to commercialize the results of training, because neither did Google. So what good is the fair use ruling, besides allowing OpenAI v. NYTimes to proceed a little longer?
> Anthropic has to go to trial still, they had to pirate the books to train
They did not have to, they had an alternate means available (and used it for many of the books), buying physical copies and destructively scanning them.
> and they will not win on their right to commercialize the results of training
That seems an unwarranted conclusion, at best.
> so what good is the Fair Use ruling
If nothing else, assuming the logic of the ruling is followed by the inevitable appeals court decision and becomes binding precedent, it provides a clear road to legally training LLMs on books without copyright issues (combination of "training is fair use" and "destructive scanning for storage and searchability is fair use"), even if the pirating of a subset of the source material in this case were to make Anthropic's existing products prohibited (which I think you are wrong to think is the likely outcome.)
If you do something else, the result may be something else. The line is drawn by the application of subjective common sense by the judge, just as it is every time.
Even if LLMs were actual human-level AI (they are not - by far), a small bunch of rich people could use them to make enormous amounts of money without putting in the enormous amounts of work humans would have to.
All the while "training" (= precomputing transformations which among other things make plagiarism detection difficult) on work which took enormous amounts of human labor without compensating those workers.
BRB, I'm going to download all the TV shows and movies to train my vision model. Just to be sure it's working properly, I have to watch some for debugging purposes.
Indeed, I foresee a "training dataset consortium" arising out of this, whereby a bunch of companies team up to buy one copy of everything and then share it for training amongst themselves (e.g. by reselling the entire library to each other for $1).
> But fish is too much like bash in syntax, meaning that I just think of it like bash until I have to type "(foo)" instead of "$(foo)", or "end" instead of "fi"
Note that fish does also support bash's "$(foo)" syntax and has for a few years now.
supporting more and more bashisms is what makes fish less attractive for me. i used fish for years. $(foo) in bash forks a subshell. in fish it doesn't. i am not a fan of supporting different syntaxes to do the same thing. if they had implemented $() to fork a subshell, that might have made some sense, but otherwise it is just redundant. learning to use () instead of $() or `` really isn't hard. so why?
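The subshell point is easy to demonstrate on the bash side (a small illustrative sketch): assignments made inside `$()` happen in the subshell and never reach the parent.

```shell
# In bash, $() runs its command list in a subshell: variable
# assignments made inside it do not leak back to the parent shell.
x=1
y=$(x=2; echo "$x")   # the inner x=2 happens only in the subshell
echo "$x $y"          # prints "1 2" -- the parent's x is unchanged
```

If fish's `$()` doesn't fork, then the same construct there would have different scoping semantics, which is exactly the kind of subtle divergence the comment above objects to.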
Fair question. For me, it's extra friction whenever I copy a shell snippet that includes these non-fishisms, or when I'm running things between my workstation and the nearly 200 machines I manage, and I don't want to force my coworkers to have fish as the default root shell, or have to remember to "sudo --shell" or set up aliases. Well, plus, I'm still not entirely sold on fish, so I haven't wanted to set it up on my whole fleet.
I just recently switched my cordless tool ecosystem at home for DIY work. There's something about having tools that I'll reach for because they're a joy to work with, rather than avoiding picking them up because of rough edges.
Nit: that's not what negative reinforcement means. Negative reinforcement is about removing a negative stimulus, like inducing someone to go to a desirable website by improving their initially bad text contrast whenever they go there.
In this case, jumpscaring yourself would just be considered punishment (or "positive punishment").
* Positive reinforcement: [Adding] something so that entity does the action [more]
* Negative reinforcement: [Removing] something so that entity does the action [more]
* Positive punishment: [Adding] something so that entity does the action [less]
* Negative punishment: [Removing] something so that entity does the action [less]
P.S.: Note that this intentionally avoids diving into exactly how the Entity judges the Something. It's not always clear, even if in many cases you can guess.
P.S.: Sharing a book-quote that seems apropos, particularly the final two lines.
> People came, and tormented a nameless thing without boundaries, and went away again. He met them variously. His emerging aspects became personas, and eventually, he named them, as well as he could identify them. There was Gorge, and Grunt, and Howl, and another, quiet one that lurked on the fringes, waiting.
> [...] Howl handled the rest. He began to suspect Howl had been obscurely responsible for delivering them all to [the torturer] in the first place. Finally, he'd come to a place where he could be punished enough. Never give aversion therapy to a masochist. The results are unpredictable.
Our mission is to develop the best education in the world and make it universally accessible. Our culture is eng-driven, friendly, and very data-driven thanks to our large userbase. Check out our blog to see what engineering at Duolingo can look like: https://blog.duolingo.com/hub/engineering/
Tech stack: (frontends) Swift/Kotlin/TypeScript, (backend) Python/Kotlin/Postgres/Dynamo, (infra) AWS/k8s/Terraform.
We're always hiring for engineers. A few roles below:
Senior Engineering Manager, Chess (NY) https://careers.duolingo.com/jobs/8385137002
Senior Android Engineer (NY) https://careers.duolingo.com/jobs/8217266002
Senior iOS Software Engineer (NY) https://careers.duolingo.com/jobs/8318257002
Engineering Director (London) https://careers.duolingo.com/jobs/8444624002
Senior Gameplay Programmer (London) https://careers.duolingo.com/jobs/8424809002
Base salary: $177K-$240K for the NY-based roles; unsure about the London-based roles.
Perks: breakfast and lunch served in office, 2-week winter break + 20 days of flexible time off.