Suffering from executive dysfunction makes it obvious that there is no free will, and it's as if good health is simply having that illusion work out harmoniously.
I think you make a good point that much of what we call thinking is really discourse either with another ^[0], with media, or with one's own self. These are largely mediated by language, but still there are other forms of communicative _art_ which externalize thought.
The other thoughts here largely provide within-individual examples: others noted Helen Keller and that some folks do not experience an internal monologue. These tell us about the sort of thinking that does happen within a person, but I think that there are many forms of communication which are not linguistic, and therefore there is also external thinking which is non-linguistic.
The observation that not all thought utilizes linguistic representations (see particularly the annotated references in the bibliography) tells us something about the representations that may be useful for reasoning, thought, etc.: though language _can_ represent the world, it is neither the only way to do so nor the only way used by biological beings.
Agreed. I have never heard a scientist use this phrase, but I've heard plenty of left-leaning folks use it. As left-wing politics absorbs science as its cause - and not a cause that everyone is for - I fear it actively turns right-leaning folks against science solely because of its mindless support by progressive folk.
Without having read into this deeper, it sounds like someone could take an original video which has this code embedded as small fluctuations in luminance over time and edit it or produce a new video, simply applying the same luminance changes to the edited areas/generated video, no? It seems for a system like this every pixel would need to be digitally signed by the producer for it to be non-repudiable.
Exactly, that is my question too. If you can detect the lighting variations to read and verify the code, then you can also extract them, remove them, reapply to the edited version or the AI version... varying the level of global illumination in a video is like the easiest thing to manipulate.
Although there's a whole other problem with this, which is that it's not going to survive consumer compression codecs. Because the changes are too small to be easily perceptible, codecs will simply strip them out. The whole point of video compression is to remove perceptually insignificant differences.
As I understand it, the brilliant idea is that the small variations in brightness of the pixels look just like standard noise. Distinguishing the actual noise from the algorithm's added noise is not possible, but it is still possible to verify that the 'noise' has the correct pattern.
Take a computer screen with a full wash of R, G, or B. Sync the RGB display with your 2FA token, but run it at 15FPS instead of one code per minute.
Point the monitor at the wall, or desk, or whatever. Notice the radiosity and diffuse light scattering on the wall (and on the desk, and on the reflection on the pen cap, and on their pupils).
Now you can take a video that was purported to be taken at 1:23pm at $LOCATION and validate/reconstruct the expected "excess" RGB data and then compare to the observed excess RGB data.
What they say they've done as well is to not just embed a "trace" of expected RGB values at a time but also a data stream (e.g. a 1 FPS PNG) which kind of self-authenticates the previous second of video.
Obviously it's not RGB, but "noise" in the white channels, and not a PNG, but whatever other image compression they've figured out works well for the purpose.
In the R, G, B case you can imagine that it's resistant to (or durable through) most edits (e.g. cuts, reordering), and it's interesting they're talking about detecting if someone has photoshopped a vase full of flowers into the video (because they're also encoding a reference video/image in the "noise stream").
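To make that 2FA analogy concrete, here's a minimal sketch of the correlation idea (entirely my own illustration, not the paper's actual scheme; the shared secret, the 15 FPS rate, and the one-offset-per-frame encoding are all assumptions): a secret plus a frame index drives an HMAC that picks a tiny expected brightness offset for each frame, and verification correlates those expected offsets against the observed per-frame brightness residuals.

```python
import hashlib
import hmac

import numpy as np

# Assumptions for this sketch: a shared secret known to the verifier,
# a 15 FPS code rate, and one scalar brightness offset per frame.
SECRET = b"hypothetical-shared-secret"
FPS = 15

def expected_offset(frame_index: int) -> float:
    """Derive a pseudo-random +/-1 brightness offset for one frame from the secret."""
    msg = frame_index.to_bytes(8, "big")
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return 1.0 if digest[0] & 1 else -1.0

def code_correlation(observed_residuals: np.ndarray, start_frame: int = 0) -> float:
    """Correlate observed per-frame brightness residuals with the expected code.
    A clearly positive value means the expected 'noise' pattern is present."""
    expected = np.array([expected_offset(start_frame + i)
                         for i in range(len(observed_residuals))])
    return float(np.corrcoef(expected, observed_residuals)[0, 1])

# Toy demo: a "genuine" 20-second clip carries the code buried under camera noise;
# a "fake" clip of the same length does not.
rng = np.random.default_rng(0)
n_frames = 20 * FPS
code = np.array([expected_offset(i) for i in range(n_frames)])
genuine = 0.2 * code + rng.normal(0.0, 1.0, n_frames)   # code well below the noise
fake = rng.normal(0.0, 1.0, n_frames)

print("genuine clip correlation:", round(code_correlation(genuine), 2))  # noticeably > 0
print("fake clip correlation:   ", round(code_correlation(fake), 2))     # ~ 0
```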
The code could be cryptographically derived from the content of the video. For simplicity, imagine there are subtitles baked into the video and the code is cryptographically derived from those.
The general idea is for the signature to be random each time, but verifiable. There are a bajillion approaches to this, but a simple starting point is to generate a random nonce, encrypt it with your private key, then publish it along with the public key. Only you know the private key, so only you could have produced the resulting random string that decodes into the matching nonce with the public key. Also, critically, every signature is different. (that's what the nonce is for.) If two videos appear to have the same signature, even if that signature is valid, one of them must be a replay and is therefore almost certainly fake.
(Practical systems often include a generational index or a timestamp, which further helps to detect replay attacks.)
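A minimal sketch of that sign-a-fresh-nonce-plus-timestamp idea (my illustration only, using Ed25519 from Python's `cryptography` package as a stand-in; in practice you sign with the private key rather than "encrypt" with it, and a real system may work quite differently):

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # only the producer/camera holds this
public_key = private_key.public_key()        # published so anyone can verify

def make_tag(timestamp: int) -> tuple[bytes, bytes, int]:
    """Produce a (nonce, signature, timestamp) tag to embed alongside the video."""
    nonce = os.urandom(16)                   # fresh randomness => every tag differs
    signature = private_key.sign(nonce + timestamp.to_bytes(8, "big"))
    return nonce, signature, timestamp

def check_tag(nonce: bytes, signature: bytes, timestamp: int, seen_nonces: set) -> bool:
    """Verify the signature and reject replays of an already-seen nonce."""
    try:
        public_key.verify(signature, nonce + timestamp.to_bytes(8, "big"))
    except InvalidSignature:
        return False
    if nonce in seen_nonces:                 # same tag on two videos => replay
        return False
    seen_nonces.add(nonce)
    return True

seen: set = set()
tag = make_tag(timestamp=1_700_000_000)
print(check_tag(*tag, seen))   # True: valid and fresh
print(check_tag(*tag, seen))   # False: identical tag again, almost certainly a replay
```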
I think for the approach discussed in the paper, bandwidth is the key limiting factor, especially as video compression mangles the result, and ordinary news reporters edit the footage for pacing reasons. You want short clips to still be verifiable, so you can ask questions like "where is the rest of this footage" or "why is this played out of order" rather than just going, "there isn't enough signature left, I must assume this is entirely fake."
But the point is that you'd be extracting the nonce from someone else's existing video of the same event.
If a celebrity says something and person A films a true video, and person B films a video and then manipulates it, you'd be able to see that B's light code is different. But if B simply takes A's lighting data and applies it to their own video, now you can't tell which is real.
I am not defending the proposed method, but your criticism is not where it fails:
Let's assume the pixels have an 8-bit luminance depth, and let's say the 7 most significant bits are kept and the signature is coded in the last bit of the pixels in a frame. A hash of the full 7-bit image frame could be cryptographically signed; while you could copy the 8th bit plane to a fake video, the same signature will not check out in a verifying media player, since the fake video's leading 7 bit planes won't hash to the value that was signed.
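To spell this out as a rough sketch (illustrative only; the key handling and frame shape are assumptions), hashing the top 7 bit planes and signing that hash means a transplanted 8th bit plane buys the attacker nothing:

```python
import hashlib

import numpy as np
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()      # e.g. held inside the image sensor
verify_key = camera_key.public_key()

def sign_frame(frame: np.ndarray) -> bytes:
    """Hash the 7 most significant bit planes and sign that hash.
    The 8th (least significant) plane is left free to carry the embedded code."""
    top7 = (frame & 0xFE).tobytes()
    return camera_key.sign(hashlib.sha256(top7).digest())

def verify_frame(frame: np.ndarray, signature: bytes) -> bool:
    top7 = (frame & 0xFE).tobytes()
    try:
        verify_key.verify(signature, hashlib.sha256(top7).digest())
        return True
    except InvalidSignature:
        return False

real = np.random.default_rng(1).integers(0, 256, (480, 640), dtype=np.uint8)
sig = sign_frame(real)

fake = real.copy()
fake[100:200, 100:200] = 255                   # "photoshop in a vase of flowers"
fake = (fake & 0xFE) | (real & 0x01)           # transplant the real LSB plane anyway

print(verify_frame(real, sig))   # True
print(verify_frame(fake, sig))   # False: the top 7 bit planes no longer hash the same
```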
What does this change compared to the status quo? Nothing: you can already hash and sign a full 8-bit video, and Serious-Oath that it depicts Real imagery. Your signature would also not be transplantable to someone else's video, so others can't put fake video in your mouth.
The only difference: if the signature is generated by the image sensor, and end users are unable to extract the private key, then it decreases the number of people / entities able to credibly fake a video, but it provides great power to the manufacturers to sign fake videos while the masses are unable to (unless they play a fake video on a high-quality screen being imaged by a manufacturer-private-key-containing image sensor).
The bandwidth of the encoding is too low for playing cryptographic games. This doesn't preclude faking a video by introducing the code into your faked video--it's just that that is much, much more difficult than stringing pieces together in an incorrect fashion.
This is more akin to spread spectrum approaches--you can perfectly well know the signal is there and yet finding it without knowing the key is difficult. That's why old GPS receivers took a long time to lock on--all the satellites are transmitting on top of each other, just with different keys and the signal is way below the noise floor. You apply the key for each satellite and see if you can decode something. These days it's much faster because it's done in parallel.
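A toy illustration of that correlation trick (nothing like the real GPS C/A codes or the paper's encoding, just the principle): the signal sits far below the noise floor, yet correlating against the right key pulls it out, while wrong keys see only noise.

```python
import numpy as np

CHIPS = 16_384                                  # spreading chips per data bit
rng = np.random.default_rng(42)

def prn(key: int) -> np.ndarray:
    """Pseudo-random +/-1 spreading code derived from a per-satellite key (toy version)."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=CHIPS)

# "Satellite" with key 7 transmits one data bit (+1), spread by its code,
# at a power roughly 26 dB below the noise floor.
received = 0.05 * prn(7) + rng.normal(0.0, 1.0, CHIPS)

def despread(signal: np.ndarray, key: int) -> float:
    """Correlate against a candidate key; a clear spike means that signal is present."""
    return float(np.dot(signal, prn(key)) / CHIPS)

for key in range(1, 10):
    print(key, round(despread(received, key), 3))
# Only key 7 stands well above the ~1/sqrt(CHIPS) correlation noise of the wrong keys.
```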
Given how profitable it is, I doubt it’ll be changed.
That said, I very much like Codeweavers’ approach [0], which IMO is the modern equivalent to purchasing software on a physical medium: you buy it, you can re-download it as many times as you’d like, install it on as many machines as you’d like (single-user usage only), and you get 1 year of updates and support. After that, you can still keep using it indefinitely, but you don’t get updates or paid support. You get a discount if you renew before expiry. They also have a lifetime option which, so far, they’ve not indicated they’re going to change.
I have no affiliation with them, I just think it’s a good product, and a good licensing / sales model.
It's not really about the culture anymore. Software that requires maintenance — and most does — has a continuous development cost. As such, subscription is the most natural way to cover it.
On the other hand, we have software which has low maintenance cost, but sold for peanuts ($0-$10) in small quantities, so authors try to introduce alternative revenue streams.
As in, it's fair to pay continuously (subscription) for continuous work (maintenance), so I don't expect that to go away. Ads, though, yuck...
Software sold today does not require maintenance. Software that has to keep working in the future requires maintenance. I am not buying future software. I am buying today's software.
This is a good argument in favor of subscriptions not being mandatory, but not in favor of the abolishment of subscriptions overall, which is what they were talking about.
That is the old way. You bought some application and it came with upgrades until next major version release or similar. Then when that release came out you could decide to pay again or just keep using the old (now unsupported) version you already paid for.
That solved all the issues with paying for maintenance, but sadly someone must have figured out a mandatory subscription was a better way to make more money.
It's not only a way to make more money, but it also matches better to modern development approaches.
Major versions come from a time where one had to produce physical media. Thus one could do a major release only every few years. Back then features had to be grouped together in a big bang release.
Nowadays one can ship features as they are being developed, with many small feature changes all the time.
That was probably true a long time ago, but I bought software using that model that did not have any physical releases and at least one had frequent minor releases adding new features.
It seems to me like the "subscription model" is exactly the same, except for the use of DRM and cloud dependencies to force users to pay for new versions. The only thing that changed was that the option to remain on an old version was taken away from users.
Even ignoring security, bug fixes, new features, etc., it is also not fair that you get value from the app every month but the developer doesn't get to capture a reward for any of that value. Having people pay monthly for value they get monthly seems reasonable.
Also leasing cars isn't usually (ever?) from the manufacturer of the car.
Houses, not sure how it's done in more populous areas, but around here you don't ever rent from the builder. You rent from someone who bought the house from a builder (or bought from someone who did, etc etc).
I disagree. You can read a book or listen to a record, watch a dvd, unlimited times, having fairly paid upfront a price for the item. A computer is general purpose and lets you check your email every day, hell even lets you create new value in the form of new software, without the manufacturer receiving a royalty.
The idea of capturing reward post-receipt is feudalistic.
The existence of products in competitive markets is not a counter example to what my point was. I recommend looking at the terms bottom up pricing and top down pricing. The former is about creating a price based off of how much it costs to do business and then adding a profit margin. The latter is creating price in line with how much value it offers customers. The existence of products using bottom up pricing doesn't mean top down pricing does not exist.
That's not how markets work (and I disagree that it would be reasonable).
Price is usually established based on how much something cost to make (materials, effort, profit), combined with market conditions (abundance/shortage of products, surplus cash/tough economy...).
If you want to continuously extract profit from consistent use of a hammer or vacuum cleaner, somebody else will trivially make a competing product at a lower price with no subscription.
>somebody else will trivially make a competing product at a lower price with no subscription.
And software like Photoshop is not trivial to copy, so it can survive being priced based off of the value provided. There exist competitors that don't have a subscription, but they are not good enough to kill it.
I mean, are the phone makers themselves really making money off ads, or are those totally separate companies? I don't disagree that this brings in business, but I don't agree that this is a significant motivator in terms of phone sizes.
The number one factor for me when buying a phone is how long it is going to last. I mean durability, camera quality, OS updates. Will I still want to use/be using it in 5 years?
I cannot justify $700 as much as I _really want a smaller phone_. But _maybe if it was built to last_ I would be the customer and I would tell all my friends.
Currently use a Pixel 7a because it was cheap and OK. I was debating the iPhone 12 mini but it was already a little old, and I prefer Android.
I suspect, if others are like me, that those who want small phones also just want something that works and is a little minimal - not necessarily all the power, best camera, etc. To be clear, I _don't_ want one of those minimalist dumbphones, I want _a smartphone_ that's small. Do y'all feel the same?
Propose a $500 small phone that's OK on specs but LASTS.
Yes. The iPhone SE was basically the exact thing I wanted (and previously the Moto G series). It doesn't seem like Apple wants to continue to offer the product line.
Right, and even the 2nd/3rd gen SE got much bigger than the original. I basically want the original SE (which I believe is the same as the iPhone 5s), with no bezels.
Actually, looking at that handy link you provided, it appears the 13 mini is basically that. Discontinued in 2023 : /
I think my ideal would be a folding phone that, when folded, is the size of the 13 mini. Not sure if Apple will venture into folding phones anytime soon, though.
This brings back personal nostalgia for when I was very young and made an "OS" in PowerPoint using links between slides, animations, and the embedded Internet Explorer object. Similarly, I'm not sure I see any practical use in this. Still, it's a really fascinating conceptual demonstration of networks understanding intent in the complex state machine that is a graphical user interface.
Certainly other fields are competitive, but the current AI boom has been ridiculous for a while now. As an outside observer, the competition seems to be for the final money, prestige, or whatever the top papers win, rather than competition at the level of paper acceptance...
The competition racket and inflation keeps turning. It used to be publications. Then it was top conference publications. Now it's going viral on social media, being popularized by big AI aggregators like AK.
It's crazy, most Master's students applying for a PhD position already come with multiple top conference papers, which a few years ago would get you like 2/3 of the way to the PhD, and now it just gets you a foot in the door in applying to start a PhD. And then already Bachelor students are expected to publish to get a good spot in a lab to do their Master thesis or internship. And NeurIPS has a track for high school students to write papers, which - I assume - will boost their applications to start university. This type of hustle has been common in many East Asian countries and is getting globalized.
That whole thing feels like a crypto coin, as in, it’s a currency that’s worth something only to that particular group. The industry obviously doesn’t care about all these papers, so the question is: what is the social structure in which these papers provide status and respect (who values their currency)?
Science is prestigious, and quick, quantifiable ways to measure it are used as heuristic proxies. There are many angles from which to answer your question. Are you interested in the industry connection, how it translates to money, or the political aspects, etc.? People generally have little time for evaluation, there is an oversupply of applicants, and being able to point to metrics can cover your ass against accusations of bias. It offloads the quality assurance to the peer review system. This person's work has been assessed by expert peers in 5 instances and passed to acceptance in a 20% acceptance rate venue where the top experts regularly publish. It's a real signal. They can persist through projects, communicate and defend the work against reviewers, have presented it to crowds, etc.
It's a prestige economy. There are other things too, like having worked with someone famous or having interned at a top company.
Prestige economy is what I suspected. I recently read an AI paper whose idea I had mostly come up with on a random walk, only to find a Stanford student had already turned it into a research paper (not exactly, but more or less). In terms of “true” signal, if that student gets reviewed as credible, that signals we’re in bad shape, because I can promise you I came up with the exact same thesis and implementation and it was truly just common-sense stuff - not research-worthy.
Makes me wonder, have I turned brilliant or is it quite unimpressive out there?
I’m inclined to even suggest that the prestige economy started with truly prestigious research work, of which the institutions then “ordered” as many more as they could, hence the industrial levels of output. Not unlike VCs funding anything and everything for the possibility of the few being true businesses.
The reality is that innovation is hard to plan. It's like outperforming the market. Scientific breakthroughs are about figuring out where there are gaps in our knowledge that are fruitful when filled, or where our current understanding is wrong. But if we already knew what we believe wrongly, then we already wouldn't believe it. You can't produce breakthroughs like clockwork, and the more thorough work you do, the less opportunity there is to find out later that you were wrong!
The problem is that of course everyone wants the glory of finding some new groundbreaking, innovative, disruptive scientific discovery. And so excellence is equated with such discoveries, so everything has to be marketed as such. Nobody wants to accept that science is mostly boring: it keeps the flame alive and passes the torch to the next generation, but there's far less new disruption than is pretended. But again, a funding agency wants sexy new findings that look flashy in the press, and bonus points if they support its political agenda. The more careful and humble an individual scientist is, the less successful they will seem. Constantly second-guessing your own hypotheses, playing devil's advocate in earnest, doing double and triple checks, more detailed experiments, etc. takes more time and has a better chance of discovering that the sexy effect doesn't really exist.
> Makes me wonder, have I turned brilliant or is it quite unimpressive out there?
Obviously, it's impossible to say without seeing their work and your work. But for context, there are on the order of tens of thousands of top-tier AI-related papers appearing each year. The majority of these are not super impressive.
But I also have to say, what may seem "just common sense" may only look like that in hindsight, or you may overlook something if you don't know the related history of methods, or maybe you're glossing over something that someone more experienced in the field would highlight as the main "selling point" of that paper. Also, if common sense works well but nobody did it before, it's still obviously important to know quantitatively how well it works, including a detailed analysis.