This particular blog post and plan concerns removing support for old TLS versions and options on the client, which means that all server administrators need to do is enable TLS 1.2 and modern options.
Old clients (such as Android) will still be fine, if the servers don't also turn off older versions/options.
The problem with this fix is that, as long as you have the fallback, the user gains none of the security properties of TLS 1.3 (since an attacker can always force a downgrade by sending junk to the client during the handshake) and pays the additional cost of a second TLS negotiation.
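The downgrade problem described above can be sketched in a few lines. This is a toy model, not Chrome's actual code; `handshake` stands in for a real TLS negotiation attempt:

```python
def connect_with_fallback(handshake, versions=("1.3", "1.2")):
    """Try each version in order, retrying on any handshake failure.

    This mirrors an insecure-fallback design: because *any* failure
    triggers the retry, an on-path attacker who injects junk into the
    TLS 1.3 handshake silently forces the client down to 1.2, and the
    client also pays for an extra full negotiation.
    """
    for version in versions:
        try:
            return handshake(version)
        except OSError:
            continue  # indistinguishable from a genuine interop failure
    raise ConnectionError("all offered versions failed")


# An attacker who corrupts every 1.3 attempt downgrades the client:
def attacked(version):
    if version == "1.3":
        raise OSError("attacker-injected junk")
    return version
```

Nothing in the fallback loop can tell attacker-injected garbage apart from a genuinely broken middlebox, which is exactly why the fallback negates TLS 1.3's guarantees.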
While Chrome previously implemented this "TLS fallback" to work around buggy endpoints*, that was a much larger problem and difficult to fix. These middlebox issues affect a much smaller portion of users, and we're hopeful that the middlebox vendors with issues can fix their software in a more timely manner.
* TLS 1.3 moves version negotiation into an extension, which means that old buggy servers will only ever see TLS 1.2 and below for negotiation purposes and won't break in a new way with TLS 1.3.
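A rough sketch of why old servers never see 1.3: the ClientHello's legacy version field stays pinned at 1.2, and 1.3 is only advertised inside the supported_versions extension (RFC 8446). The dict below is illustrative structure, not the actual wire encoding:

```python
# TLS version code points as they appear on the wire.
TLS_1_2 = 0x0303
TLS_1_3 = 0x0304

# Illustrative ClientHello fields (not the real wire format).
client_hello = {
    "legacy_version": TLS_1_2,  # the only version field old servers parse
    "extensions": {
        "supported_versions": [TLS_1_3, TLS_1_2],
    },
}


def negotiate(server_max_version, knows_supported_versions):
    """Pick a version the way a new vs. old server would."""
    if knows_supported_versions:
        # A TLS 1.3 server reads the extension and takes the highest
        # version it also supports.
        offered = client_hello["extensions"]["supported_versions"]
        return max(v for v in offered if v <= server_max_version)
    # An old server ignores unknown extensions entirely, so from its
    # point of view the client only ever offered TLS 1.2.
    return min(client_hello["legacy_version"], server_max_version)
```

The old server's code path is byte-for-byte the same one it runs today, which is why it can't break in a new way.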
Am I not correct that 1.3 got backed out of chrome for the current issue? So 1.3 isn't even there now... Which breaks anything that explicitly requires 1.3. My fix would support all cases and not break anything. Unless I missed something?
Nothing can require 1.3, since 1.3 isn't finished yet. They were doing interoperability testing with a draft version of TLS 1.3, and nobody should require a draft version of TLS 1.3 without having a fallback to TLS 1.2.
While there was a secondary deployment issue involving unofficial builds/derivatives, the field trial was rolled back primarily because of the number of customers affected by middlebox issues in their enterprise/edu networks.
This at least means that the developer is aware that these parts of the spec are extensible, and by explicitly ignoring the GREASE values is knowingly choosing to risk a broken application in the future. This is a different class of problem from developers who weren't aware certain fields were extensible.
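For context, GREASE (RFC 8701) reserves 16-bit values of the form 0xNANA (0x0A0A, 0x1A1A, ..., 0xFAFA) that clients sprinkle into cipher suite and extension lists. A minimal sketch of the check a developer would use to recognize and skip them (function names are mine, not from any particular library):

```python
def is_grease(value):
    """Return True for the GREASE-reserved 16-bit values (RFC 8701).

    GREASE values all have the form 0xNANA: both bytes are equal and
    each low nibble is 0xA (0x0A0A, 0x1A1A, ..., 0xFAFA).
    """
    return (value & 0x0F0F) == 0x0A0A and (value >> 8) == (value & 0xFF)


def real_cipher_suites(offered):
    """Filter GREASE code points out of an offered cipher suite list.

    A developer who writes this filter at least knows these code points
    are reserved placeholders; silently choking on unknown values is
    the failure mode GREASE is designed to flush out.
    """
    return [suite for suite in offered if not is_grease(suite)]
```

The point of the comment above is that special-casing GREASE is a deliberate bet, whereas a parser that rejects any unknown value breaks by accident.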
Since computation is measured by bytecode cost of instructions, supporting other languages would require a great deal of effort to do similar bytecode counting and to make sure that the counting across languages is fair and balanced. They've considered doing other forms of computation measurement, but none of the alternatives are very good or deterministic (time, lower-level instruction counting, etc).
We don't explicitly require teams to open source their bot after the tournament, since sometimes teams have special restrictions on publishing their code. But some of the top teams have published their code or framework online in the past.
We're not happy about the restriction either, and hopefully we'll someday run a Battlecode without a programming language restriction, but we haven't found a machine-independent way of limiting instruction-count execution that we're happy with, and counting Java bytecode has worked for us so far.
If there are other JVM languages people want supported, we'd be happy to take pull requests on Github once we make the new gameplay public tomorrow.
Can you explain in more detail what you mean by "limited instruction count execution that is machine independent"? For example:
- Is it measuring the count and making sure it doesn't exceed some threshold, or is the client API designed to actually give each client a specific number of instructions and terminate if that is exceeded?
- Does "machine independent" mean it needs to run outside of x86/amd64?
- Would it offend your sensibilities if CPU cycles used by C programs counted the same as those used by JVM programs?
In the past we've had it where each bot is allowed a certain amount of "computation", with different upgrades giving you more "computation" each turn.
Machine-independent isn't really the right word; it's more that we want the same two bots fighting on the same maps to be deterministic regardless of the machine it's being run on.
For mechanics like this, we've found that bytecode instruction counts are the best metric we currently have. Anything time-based could produce different results depending on how the CPU schedules the bots, and using PIN or another system to count assembly instructions turns it into a competition of who can write the most optimized assembly code.
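The budget mechanic being described can be modeled in a few lines. This is a toy sketch only: the real Battlecode engine instruments JVM bytecode and can pause a bot mid-method, while this model simply charges a fixed cost per simulated instruction and raises when the per-turn budget is exhausted:

```python
class BytecodeBudgetExceeded(Exception):
    """Raised when a bot burns through its per-turn instruction budget."""


class BotRunner:
    """Toy model of a deterministic per-turn bytecode budget.

    Because cost is counted per instruction rather than per wall-clock
    second, the same bot on the same map is cut off at exactly the same
    instruction on any machine, regardless of CPU speed or scheduling.
    """

    def __init__(self, budget_per_turn):
        self.budget = budget_per_turn
        self.used = 0

    def charge(self, cost=1):
        """Charge the bot for executing one (or more) instructions."""
        self.used += cost
        if self.used > self.budget:
            raise BytecodeBudgetExceeded(
                f"used {self.used} of {self.budget} bytecodes"
            )

    def start_turn(self):
        """Reset the budget at the start of a new turn."""
        self.used = 0
```

The design point in the comment above falls out of this model: swap `charge` for a wall-clock timer and the cutoff point would depend on the host machine, breaking determinism.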
Why the JVM? Why not run your own bytecode, or even use Redcode straight up?
edit: Perhaps I should explain. I dabbled in rec.games.corewars a few times, and what I found most rewarding was all the tricks you could play with the bytecode. I've never found the Java variants as fun, as one fun part of the game seems lost. I feel like I'm missing something.
While we do allow bytecode tricks, we want the focus of the competition to be on developing high level macro and bot level micro strategy, instead of spending all the time working on optimizing at the bytecode level. Especially since the main competition period is less than a month, we don't want to encourage time being spent on that.
A majority of them; however, since we take source and compile it on the tournament servers, we'd need to get the compile chain and server working with those languages.