Maybe she just needed the money. Paying later always carries the risk that it never happens. And she helped you; did she get anything in return?
I think customers will say: "I don't want to try to come up with (correct) requirements. I'd rather hire this SW firm that specializes in that skill."
What is often overlooked is that we are not trying to just produce programs, we are trying to produce "systems", which means systems where computers and humans interact beneficially.
In the early days of computers, "System Analyst" was a very cool job-description. I think it will make a comeback with AI.
> What is often overlooked is that we are not trying to just produce programs, we are trying to produce "systems", which means systems where computers and humans interact beneficially
People do overlook that. Software is written to solve a problem. And it interacts with other software and data from the real world. Everything is in flux. Which is why you need both technical expertise (how things work) and domain knowledge (the purpose of things) to react properly to changes.
Creating such systems is costly (even in the AI age), which is why businesses delegate parts that are not their core business to someone else.
Interesting observation. There is a difference, however. Pre-AI, each human programmer generally understood the code they wrote, so there were many humans who each understood some part of the code. Post-AI there will presumably be no humans who understand any of the code. Sure, we will understand its syntax, but the overall architecture of applications may be so complicated that no human can understand it in practice.
Currently, the US copyright application process has an AI disclosure requirement, used to determine whether submitted works qualify for protection under US copyright law.
The copyright office still holds that human authorship is a core tenet of copyrightability. However, whether a submission's AI-generated material stays within the "de minimis" threshold needed to uphold a copyright claim is still being decided and refined by the courts. At the moment the distinction appears to fall on whether the AI was used "as a tool" or was "an author itself": the former is covered in certain cases, the latter is not.
The registration process makes it clear that failure to disclose that a submission was in large part authored by a contractor or AI can result in rejection of the copyright claim, either now or retroactively upon discovery.
You do not apply for copyright. In the US you can, optionally, register a copyright. You do not have to, but it can increase how much you get if you go to court.
I do not know whether any other country even has copyright registration.
Your main point, that this is something the courts (or new legislation) will decide, is of course correct. I am inclined to think this is only a problem for people who are vibe coding. The moment a human contributes to the code, that bit is definitely covered by copyright, and unless you can clearly separate the human-contributed and AI-contributed bits, declaring the AI-written bits uncovered is not going to make a practical difference.
My (limited) understanding was that without formal registration you cannot file an infringement suit over works covered by said copyright. Then what's the point of the copyright, other than getting to use that fancy 'c' superscript?
That comment is spot on. Claude adding itself as a co-author to a commit documents a clear line between code you wrote and code Claude generated, which does not qualify for copyright protection.
The damning thing about this leak is the inclusion of undercover.ts. That means Anthropic has now been caught red handed distributing a tool designed to circumvent copyright law.
The binary should be considered a "derived work". Only the original copyright owner has the exclusive right to create or authorize derivative works. That means you are not allowed to compile code unless you have a license to do so. Right?
Yes, so is LLM generated code a derivative work of the prompts? Does it matter how detailed the prompts are? How much the code conforms to what is already written (e.g. writing tests)?
It looks like it will be decided on a case by case basis.
It will also differ between countries, so if you are distributing software internationally, that will be a constraint on treating the code as not copyrightable.
> is LLM generated code a derivative work of the prompts?
Very good question. I would think it is. You are just using a mechanical system to transform your prompt into something else, right?
But a distinguishing factor may be that:
1. The output of the LLM for the same prompt can vary.
2. So you don't really have "control" over what the AI produces.
3. Therefore you should not get a copyright to the output of the LLM, because you had very little say in how that transformation (from prompt to code) was made.
"...accidentally shipping your source map to npm is the kind of mistake that sounds impossible until you remember that a significant portion of the codebase was probably written by the AI you are shipping."
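For what it's worth, one common guard against publishing artifacts you didn't mean to ship is an explicit `files` allowlist in `package.json` (a standard npm field), rather than relying on ignore rules. A minimal sketch, with `my-package` as a placeholder name:

```json
{
  "name": "my-package",
  "version": "1.0.0",
  "main": "dist/index.js",
  "files": [
    "dist/**/*.js"
  ]
}
```

With an allowlist like this, `dist/*.js.map` files stay out of the tarball unless explicitly listed, and running `npm pack --dry-run` before publishing prints exactly which files would be included.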
I think writing is writing to an audience which includes yourself.
When you're thinking, you are speaking in your mind, which means you cannot really listen to yourself at the same time. You don't hear yourself from yourself. You are too busy talking (in your head, to yourself) to really think about what you just said to yourself. You are producing language, not consuming it.
But when you read what you have written, you can pause reading and do some thinking about what you just read. That makes it easier to understand what you are saying, and to more easily see logical errors or omissions in it.
I think this is correct. I told a coworker that when I edit my email drafts they get shorter. He was surprised and said that his get longer. I trim and refine. Sure, I add details that I missed at first. But I also create better structure and remove ambiguity or unnecessary words.
Yesterday, I was working on an email for someone who I was trying very hard not to overwhelm with technical details. I cut it roughly in half in terms of words, but I also turned paragraphs into single lines of sequenced steps or concise statements, without decorating the text with unneeded aphorisms / commentary.
I was pretty pleased with the end result. This is only possible because of careful rereading and reflection (including knowing my intended audience). I imagine an LLM can approximate this, but I don't trust one to craft with the same level of care. Then again, we all think we're better than the robots at the things we care about most.
I understand the urge to throw mechanical writing at the bots. But a human will grasp the need to add a detail explaining the why of something when (the current) bots gloss over it. There's still nuance worth preserving.
So there's a direct monetary cost to this extra verbiage:
"Great question! I can see you're working with a loop. Let me take a look at that. That's a thoughtful piece of code! However,"
And they are charging for every word! However, there's also another cost: the cognitive load. I have to read through all of the above before I actually get to the information I was asking for. Sure, many people appreciate the sycophancy; it makes us all feel good. But for me, sycophantic responses reduce the credibility of the answers. It feels like Claude just wants me to feel good, whether I or it is right or wrong.