You mean toggling the data setting? It's on the program to make the implications visible; that's a big part of designing for usability. It's possible ChatGPT did that and the user was unexpectedly dense, but it's more likely the implications were not properly explained or shown. That's also why you add undo functionality, which the user even tried to find. Here, given the legal component, an undo available for a short time frame seems like a good fit.
But your comment could equally be about the fact of using ChatGPT for the job in the first place, which I wouldn't try to justify at all.
Usability, UI, that's not the point. My question is simply: how is it possible that an esteemed academic professional doesn't understand that touching anything that deals with "data" on a service like ChatGPT could have consequences? And how is it possible that we have started to justify every careless and sloppy behaviour? Better not to justify sloppiness.
That's absolutely usability: showing what will happen if something like "data" on a service like ChatGPT is touched, preventing outcomes the user did not want, preventing accidental data loss through guidance and safeguards. When users run into such situations it's usually not sloppiness but bad program design. Maybe here too: according to a German article about this (https://www.notebookcheck.com/ChatGPT-Professor-verliert-zwe...), ChatGPT now does not remove old interactions when the data consent setting is changed, and shows prominent warnings before removing interaction threads, which is a separate option. It is likely this was changed in the meantime to improve usability.
Or the user really was surprised that deleting all interactions meant deleting all interactions. Then your position is a bit more understandable, but even then, mistakes happen and an undo would still be good.
The issue is not backups; the issue is that he is publicly and nonchalantly admitting that most of his work over the past years was AI-based, which may or may not constitute fraud given his professional position. Imagine being a student paying thousands upon thousands expecting expert, human-led instruction, only to get this. Imagine being a fellow researcher, suddenly unable to trust this guy's current and past work.
The worst thing is all the people treating this behaviour as normal and totally acceptable; this is where AI sloppiness is taking us, guys. I hope it's just the AI bros talking in the comments, otherwise we are screwed.
The shame is not that he was such an imbecile as to not have appropriate backups; it is that he is basically defrauding his students, his colleagues, and the academic community by nonchalantly admitting that a big portion of his work was AI-based. Did his students consent to having their homework and exams fed to AI? Are his colleagues happy to learn that most of the data in their co-authored studies was probably spat out by AI? Do you people understand the situation?
It's not that I don't see, or even agree with, the concerns around the misuse and defrauding angle of this; it's that it's blatantly clear to me that's not why the many snarky comments are so snarky. It's also not as if I were magically immune to such behavioural reflexes either; it's really just regrettable.
Though I will say, it's also pretty clear to me that many taking issue with the misuse angle do not seem to think any amount or manner of AI use can be responsible or acceptable, rendering all use of it misuse; that is not something I agree with.
It seems you are desperately trying to build a strawman without any sensible argument. I don't personally think it is "snarky" to call things as they are, plain and simple: you, as a supposed expert and professional academic, post a blog on Nature crying that "AI stole my homework", so it's only natural you get the ridicule you deserve. That's the bare minimum; he should be investigated by the institution he works for.
A reasonable amount of AI use is certainly acceptable, where "reasonable" depends on the situation. For any academia-related job that amount should be close to zero, and no material produced by any student/grad/researcher/professor should be fed to third-party LLMs without explicit consent. Otherwise, what even is the point? Regurgitating slop is not academic work.
Sorry to hear that's how my comments seem to you; I can assure you I put plenty of sense into them, although I cannot find that sense on your behalf.
If you think that considering others desperate, senseless, and reasoning erroneously without any good reason improves your understanding of them, and that snarky commentary magically stops being snark, or becomes fine, because it describes something you consider a great truth, that's on you. Gonna have to agree to disagree on that one.
The author is an absolute and utter embarrassment to all the good academic professionals out there. He is also literally admitting to defrauding his students of their precious money, which they thought was going towards human-led instruction, and he has put all of his colleagues in a very dodgy position right now. It is preposterous that we are even arguing about it; it is a sign of how much AI sloppiness is permeating our lives. It is crazy to think that you can feel entitled to hand years of work to a chatbot without even caring, and then write an article like this: "uh oh, AI ate my homework".
It is not the students' money: academic education is basically free in Germany. But they are still defrauded of the valuable time and effort spent following classes they thought were worth it.
Rent, university fees, and all the other taxes are still due. I'm from the EU too, and education is definitely not "free"; freer than elsewhere, for sure.
It's really not easy to measure the output of any employee, even with years of data, and it's way harder for a software engineer or any other job with that many facets. If you've found a proven and reliable way to evaluate someone in the first two weeks, you've just solved one of the biggest HR problems ever.
What if, and hear me out, we asked the people a new employee has been onboarding with? I know, trusting people to make a fair judgment lacks the ass-covering desired by most legal departments, but actually listening to the people who have to work with a new hire is an idea so crazy it might just work.
The Brits don't produce anything anymore except money laundering in the City of London (where I work, btw) and some cattle, so you should expect that kind of biased narrative from them. We in Italy are among the best and most profitable in high-tech mechanical manufacturing, but we also have the worst-paid engineers and technicians in the Western world. The Italian Miracle.
How far are we willing to go to justify every possible wrong behaviour?