> The volume of cargo carried by sailing vessels in the old days was orders of magnitude lower.
Surprisingly, no, it wasn't. I'll slightly fudge the numbers and talk in terms of proportion of world trade that was carried by ocean-going vessels (because if you double the population then it's reasonable to talk about doubling the number of ships).
The world economy was very globalised in 1913. That level of globalisation in trade wasn't matched again until the 1990s.
We're only a little more global now than we were in the age of sail.
The British navy and merchant fleet were a wonder of their era.
Show your work. Without numbers, those are all just assertions. And the assertion that the world's economies were more globalized before WW1 than after the Cold War is particularly dubious.
Writing a course for a customer on how to use Claude Code well, especially around brownfield development (working on existing code bases, not so much around vibe-coding something new).
If the "outcompeting" is possible because of Chinese government subsidies, then it's important to protect local industry from unfair competition.
It's similar to the logic behind anti-trust actions against monopolists. If the playing field isn't level, then the US government steps in to level it.
(Whether BYD is subsidised or not is another question, but the above is the logic of protecting local industry.)
> If the playing field isn't level, then the US government steps in to level it.
More recently though, it kind of seems like if the playing field isn't tipped strongly towards the US, then the US government will step in to tip it their way.
Not sure why this is downvoted. The Chinese government has been quite transparent about its aim of globally dominating several industries, including EVs, through heavy government support.
It would make no sense to destroy your own industry because it can’t compete with a heavily subsidized foreign industry.
Indeed. The obvious counter-example to the claim is "rainbows" which were definitely the topic of heated scientific argument for hundreds of years (and non-scientific ones before that).
I think of it as trying to encourage the LLM to want to give answers from a particular part of the phase space. You can do it by fine tuning it to be more likely to return values from there, or you can prompt it to get into that part of the phase space. Either works, but fiddling around with prompts doesn't require all that much MLops or compute power.
That said, fine-tuning small models because you have to power through vast amounts of data where a larger model would be cost-ineffective -- that's completely sensible, and not really mentioned in the article.
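On the prompting side, the steering can be as cheap as a system prompt. A minimal sketch, assuming the openai Python client; the model name and prompts are purely illustrative:

    # Prompting the model into the desired part of the space,
    # rather than fine-tuning it to live there.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Answer only in terse, formal legal prose."},
            {"role": "user",
             "content": "Can my landlord raise the rent mid-lease?"},
        ],
    )
    print(resp.choices[0].message.content)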
My understanding of model distillation is quite different: it trains another (typically smaller) model using the error between the new model's output and that of the existing one - effectively capturing the existing model's embedded knowledge and encoding it (ideally more densely) into the new model.
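Schematically, that flavour of distillation looks something like this. A minimal sketch assuming PyTorch; the temperature and loss weighting are illustrative, not from any particular paper:

    # Knowledge distillation: train the student on the divergence
    # between its output distribution and the teacher's.
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both distributions with a temperature, then penalise
        # the KL divergence of the student from the teacher.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_student = F.log_softmax(student_logits / temperature, dim=-1)
        # Scale by T^2 so gradient magnitudes stay comparable
        # across temperature settings.
        return (F.kl_div(log_student, soft_teacher, reduction="batchmean")
                * temperature ** 2)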
What I was referring to is similar in concept, but I've seen both described in papers as distillation. What I meant was: you take the output of a large model like GPT-4 and use it as training data to fine-tune a smaller model.
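Concretely, the pipeline is roughly this. A sketch assuming the openai Python client and the chat-format JSONL that fine-tuning endpoints expect; the prompts and file name are placeholders:

    # Harvest a large "teacher" model's answers as fine-tuning
    # data for a smaller model.
    import json
    from openai import OpenAI

    client = OpenAI()
    prompts = ["Summarise: ...", "Classify: ..."]  # your real task inputs

    with open("distilled.jsonl", "w") as f:
        for prompt in prompts:
            resp = client.chat.completions.create(
                model="gpt-4",  # the large teacher model
                messages=[{"role": "user", "content": prompt}],
            )
            answer = resp.choices[0].message.content
            # One chat-format training example per line.
            f.write(json.dumps({"messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": answer},
            ]}) + "\n")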
Yes, that does sound very similar. To my knowledge, isn’t that (effectively) how the latest DeepSeek breakthroughs were made? (i.e. by leveraging ChatGPT outputs to provide feedback for training the likes of R1)
> That said, fine-tuning small models because you have to power through vast amounts of data where a larger model would be cost-ineffective -- that's completely sensible, and not really mentioned in the article.
...which I thought was arguably the most popular use case for fine-tuning these days.
Not sure I agree that it's either/or. In-person assessments are still pretty robust. I think an ideal university will teach both, with a clear division between them (e.g. whether a particular assessment or module allows AI). What I'm currently struggling with is how to design an assessment in which the student is allowed to use AI: how do I actually assess it? Where should the bar actually be? Can it be relative to peers? Does this reward students willing to pay for more advanced AI?
I'm not sure what's wrong with me, but I just wasted several hours wrestling codex to make it behave.
Here's my workflow that keeps failing:
- it writes some code. It looks good at first glance
- I push it to github
- automated tests on github show that there's a problem
- go back to codex and ask it to fix it
- it does stuff. It looks good again.
Now what do I do? If I ask it to push again to github, it will often create a pull request that doesn't include the changes from the first pull request: rather than stacking on top of the previous pull request, it stacks on top of main.
When asked to write something that called out to gpt-4.1-mini, it used openai.ChatCompletion.create (!?!!?)
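For reference, that's the pre-1.0 interface that was removed from the openai package; current versions expect something more like this (a minimal sketch):

    # Current (openai >= 1.0) client API; openai.ChatCompletion.create
    # was the old module-level call and no longer exists.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)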
I just found myself using claude to fix codex's mistakes.
I upgraded to Pro just because of Codex and I am really not impressed. Granted, I am using Rust, so that may be the issue (or a skill issue on my end).
One of the things I am constantly struggling with is that the containers they use have trouble fetching anything from the internet:
    error: failed to get `anyhow` as a dependency of package `yawl-core v0.1.0 (/workspace/yawl/core)`

    Caused by:
      download of config.json failed

    Caused by:
      failed to download from `https://index.crates.io/config.json`

    Caused by:
      [7] Could not connect to server (Failed to connect to proxy port 8080 after 3065 ms: Could not connect to server)
Hopefully they fix this and it gets better with time, but I am not going to renew past this month otherwise.
You can specify a startup script for your environment in the Edit -> Advanced section. The code placed there runs before they cut off internet access. Also worth noting that it uses a proxy stored in $http_proxy.
Took me a few hours today to figure out how to install Maven and have it download all the dependencies. Spent an hour trying to figure out why sudo apt-get update was failing; it was because I was using sudo!
I have this issue with Devin. Given my limited knowledge of how these work, I believe there is simply too much context for it to take a holistic view of the task and finish accordingly.
If both OpenAI and Devin are falling into the same pattern then that’s a good indication there’s a fundamental problem to be solved here.
I think you need to run the tests locally before you push the PR. I actually think you need to (somehow?) make this part of the generation process before Codex proposes the changes.
At the very time when they need to be making connections and securing patronage, you want PhD students to be exposing integrity and honesty problems and asking awkward questions about the powerful people in the community?
Of course that's what should be happening, but the incentives aren't pushing in the right direction for it currently.