Reminds me of this interactive demo (created by the lead of Dead Cells) where you can adjust the juice amount in the menu: https://deepnight.net/games/game-feel/
Are you sure this isn't just an example of the Gell-Mann amnesia effect?
> Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.
> In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
> Complex training courses, meditation and cultural exposure were most effective (gs = 0.66), while the use of cognitive manipulation drugs was least and also non-effective, g = 0.10. The type of training material was also important. For instance, figural methods were more effective in enhancing creativity, and enhancing converging thinking was more effective than enhancing divergent thinking.
When I read that text, my first thought was making some videos of my mom, who passed away, since so few videos of her exist and pictures don't capture her personality.
The fact that your first thought was how you could use this amazing tech to remember a lost family member who you love, and OP's first thought was that it could be used for evil so it shouldn't exist says a ton about each of you.
If you put a piece of technology into the world, you should spend more time on the consequences it has for the living in the future, not the dead.
As someone who has worked on payments infrastructure before, it's probably nice if your first thought is what great things an aunt can buy for her niece, but you're better off asking what bad actors can do with your software, or you're in for a bad surprise.
I wonder if one possible solution is making things more "the Unix way" or like microservices. Then instead of depending on some super specific inputs to reach deep into some code branch, you can just send input directly to that piece and fuzz it. Even if fuzzers only catch shallow bugs, if everything is spread out enough then each part will be simple and shallow.
Fuzzers can already do this. When you set up a fuzzer, you define which functions it will call and how it should generate inputs for them. So you can fuzz the X.509 parsing code and hope it hits punycode parsing paths, but you can also fuzz the punycode parsing routines directly.
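For concreteness, here's what such a direct harness can look like with Go's built-in fuzzing (Go 1.18+), using golang.org/x/net/idna as a stand-in for the punycode routine in question; the seeds and target are my own choices, not anything from the thread:

```go
package idnafuzz

import (
	"testing"

	"golang.org/x/net/idna"
)

// FuzzToUnicode targets the IDNA/punycode decoding path directly,
// instead of hoping an X.509 fuzzer happens to reach it.
func FuzzToUnicode(f *testing.F) {
	f.Add("xn--nxasmq6b") // seed: valid punycode
	f.Add("xn--")         // seed: degenerate input
	f.Fuzz(func(t *testing.T, s string) {
		// The decoder may return an error, but it must not
		// panic or hang on any input.
		_, _ = idna.ToUnicode(s)
	})
}
```

Run with `go test -fuzz=FuzzToUnicode`; the fuzzer mutates the seeds and anything it finds in the corpus.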
This is the flip side of fuzzing, an approach called property testing. It's legit, but it involves unit-test-style manual creation of lots of tests for the various components of the system, plus a lot of specifying the contracts between components and aligning the property tests to those contracts.
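As a tiny illustration of the style, using Go's stdlib testing/quick (the sort contract here is just an example component, not anything specific to the thread):

```go
package prop

import (
	"sort"
	"testing"
	"testing/quick"
)

// A toy contract for one small component: sorting must preserve
// length and leave the slice ordered, on any input.
func TestSortContract(t *testing.T) {
	property := func(xs []int) bool {
		ys := append([]int(nil), xs...)
		sort.Ints(ys)
		return len(ys) == len(xs) && sort.IntsAreSorted(ys)
	}
	if err := quick.Check(property, nil); err != nil {
		t.Error(err)
	}
}
```

The point being: you write one property like this per contract, per component, which is exactly the manual effort described above.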
> blows GT right out of the water with its accuracy.
For general vocabulary, very likely. I believe Google Translate's competitive edge comes in the form of figuring out translations for jargon, slang, etc. from equivalent-corpus context.
I like to think of Google Translate as a keyword extractor on steroids. It doesn't necessarily give you the right prose, but it does better than anything I know of at giving you the right bag-of-words for indexing a foreign-language document in English.
(I hypothesize this to be Google Translate's real driving purpose, and the reason it still sees regular updates after all these years: it's used to index foreign-language web pages, books, and videos so that there can be a single TF-IDF token in Google's backend for each language-neutral conceptual category rather than distinct token for each language-specific word.)
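A toy sketch of that hypothesis (everything here, including the word-level dictionary, is made up for illustration): documents in several languages get indexed under one shared English token per concept.

```go
package main

import "fmt"

// translateWord is a hypothetical stand-in for a word-level
// translation lookup; the mapping here is hard-coded.
func translateWord(w string) string {
	dict := map[string]string{
		"Hund":  "dog", // German
		"chien": "dog", // French
	}
	if en, ok := dict[w]; ok {
		return en
	}
	return w
}

func main() {
	// Index documents in different languages under one shared
	// English token per concept, as hypothesized above.
	index := map[string][]string{}
	docs := map[string][]string{
		"doc-de": {"Hund"},
		"doc-fr": {"chien"},
		"doc-en": {"dog"},
	}
	for id, words := range docs {
		for _, w := range words {
			tok := translateWord(w)
			index[tok] = append(index[tok], id)
		}
	}
	fmt.Println(index["dog"]) // all three documents share the "dog" token (order varies)
}
```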
Don't know about slang, but I recently translated some technical documentation from German to English. I was surprised how many specific terms DeepL knew, and even when it failed at that, the grammar surrounding the offending term was still mostly correct. It was night and day compared to Google. "Bag of words" describes it well.
I just pasted a few conversations from WhatsApp in Portuguese, and at least for that language Google Translate was 10x more accurate on the meaning of the very colloquial words used, mixed with English, etc.
Edit: thanks for the link though; I'm saving it and will try it from time to time. We use translation a ton in my household (everyone is learning English after moving abroad).
I immediately checked it. It has good translations for some languages (specifically German), but it definitely does not blow GT out of the water; it is on par at best for a few languages in translation quality. Besides:
- It has only a handful of languages.
- No OCR / photo support.
- No speech / conversation support.
IMO, as is, it is way inferior to GT. But it is good to have competition in this area as there is still a lot of room for improvement.
Being in a relationship with an English/French language divide, DeepL was a total game-changer. I can't attest to its abilities in other languages, but it is obviously superior to Google Translate in correctness and "natural" translations for our use case.
For Japanese, DeepL leaves GT in the dust, and I've found myself using it more over the last few months.
I've often seen GT get the basics dead wrong: for example, formal greetings Japanese people have been using in formal correspondence for hundreds of years. And when it gets things right, it's often fleeting; a week later it gives you something different.
I'm curious to see how many of my friends will stop using Google Translate on their phones and just go with Apple's new translation app coming in iOS 14.
It'll probably be like Maps, where the first year or two was just a joke, and then Maps became faster, easier to use, and just as good as Google Maps.
I was recently translating a French tweet to English that happened to mention the title of a French book. I was very surprised to find that DeepL kept the book title in its original French instead of translating it. That suggests to me that DeepL has more "understanding" of the text, if deep learning algorithms can be said to have any understanding at all.
Translation to a single language back and forth seems to be really really good.
Although quite impressive, it still suffers from the same problem that most other translation services have if you keep translating the same text between random languages.
I translated the following text from English through six other languages (without going back to English) and then back to English.
The original text is:
I wonder if the test you gave me was biased. My belief is that because I'm an elf, some questions are inherently biased. Water dwarfs would not have a problem answering the question: Are unicorns wet? But elves do.
The final English text is:
I wonder if the test they gave me was biased? I guess being an elf, there is a bias in the question. I'm sure the water gnomes would have no problem answering the question of whether the unicorn is wet. ? But the elves.
Notice how dwarfs became gnomes. The last sentence is not a sentence. There are various other problems, like a "?" by itself, etc.
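For anyone who wants to reproduce this kind of experiment, here's a rough sketch of the loop; translate() is a hypothetical stand-in, not DeepL's or Google's actual API, and the language chain is arbitrary since I'm not claiming which six were used:

```go
package main

import "fmt"

// translate is a hypothetical stand-in for a real translation API
// client; it is not DeepL's or Google's actual interface.
func translate(text, from, to string) string {
	// In a real experiment this would call a translation service.
	return text
}

// roundTrip pushes text through a chain of languages (never
// returning to English mid-chain) and then back to English,
// mirroring the experiment described above.
func roundTrip(text string, chain []string) string {
	from := "en"
	for _, to := range chain {
		text = translate(text, from, to)
		from = to
	}
	return translate(text, from, "en")
}

func main() {
	chain := []string{"de", "fr", "ja", "ru", "pt", "fi"} // any six languages
	fmt.Println(roundTrip("Are unicorns wet?", chain))
}
```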
This could be considered a "feature" if used for comedic effect. Anyone who has young children that are into Disney movies should check out this video, where a woman runs "Let It Go" from the movie Frozen through Google Translate a bunch of times and then sings the result: https://www.youtube.com/watch?v=2bVAoVlFYf0
You might also be testing the inherent difficulty of lossless translation between many languages.
It would be interesting to see the same task done by professional translators. I'm guessing some of the errors might still be there at the end, like the gnome one, whereas the last sentence would probably be folded into the previous sentences.
Do you remember which 6 languages you went through before going back to English?
Since these are hyperparameters, some of which are annealed over the entire training period, and given that training required ungodly amounts of computing time, I think it was just impractical for them to fully check whether they were set optimally. They probably went with what seemed good and trusted deep networks to pick up the slack. (This is total speculation on my part.)
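To make "annealed over the entire training period" concrete, here's a minimal sketch of a linear schedule; the choice of coefficient and the numbers are assumptions on my part, not theirs:

```go
package main

import "fmt"

// linearAnneal interpolates a hyperparameter from start to end over
// totalSteps training steps, clamping once training passes totalSteps.
func linearAnneal(start, end float64, step, totalSteps int) float64 {
	frac := float64(step) / float64(totalSteps)
	if frac > 1 {
		frac = 1
	}
	return start + (end-start)*frac
}

func main() {
	// e.g. an entropy/exploration coefficient decaying over a run
	for _, s := range []int{0, 500000, 1000000} {
		fmt.Println(linearAnneal(0.01, 0.001, s, 1000000))
	}
}
```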
I do think that if they'd used some more sophisticated RL algorithms, perhaps with intrinsic curiosity or some kind of hierarchical task learning, they might have been able to reduce their training time, and maybe been able to tune their hyperparameters a bit more.