Log is the "Pro" in iPhone 15 Pro (prolost.com)
1163 points by robenkleene on Oct 11, 2023 | hide | past | favorite | 409 comments


I've never owned an Apple device. I don't take photographs or video with my phone very often. But this video presentation was captivating. It was clear, concise, without any nonsense, and thoroughly interesting.


The guy in the video is Stu. Not only does he have an impressive resume (https://www.imdb.com/name/nm0556179/ known for originality, e.g. he did Sin City's look), he is also the original author of MagicBullet, one of the most widely used pieces of software in the industry for easy color work. If there's one person who knows about color work, LUTs in creative work, color encoding systems, etc., it's him, so naturally he knows how to present relevant subject matter without nonsense.


> If there's one person who knows about [thing] it's him so naturally he knows how to present relevant subject matter

One lesson (no pun intended) of the academic environment is that no, it doesn’t work that way. Some people are subject-matter experts, some are brilliant expositors, but while you can’t be a good expositor without a decent knowledge of the subject matter, you absolutely can be a world-leading scientist and at the same time completely rubbish at explaining your ideas. (Some areas are better at making use of such people than others.)

Good exposition deserves additional credit on top of subject expertise, is what I’m trying to say.


Hacker News. Why would you agree when you can argue? <-- official subtitle.


I mean, you’re not wrong :P even if I’d probably put this as: Why not stay silent if you don’t have anything to add?

This particular topic, though, I do feel more strongly about than most I comment on here.

Technical exposition is an underappreciated art, and in many places an unrewarded one. How many star technical writers can you name, compared to star programmers? How much effort does writing a good review paper take, compared to an incremental research advance, and how much harder is it to publish one?

Halmos, one of the best expositors of (non-popular!) mathematics in at least a generation, quotes an anonymous mathematician in How to write mathematics: “All of us, I think, feel secretly that if we but bothered we could be really first-rate expositors”; the mathematician’s point being that nothing could be further from the truth. (Incidentally, it’s a bit startling how well Halmos’s principles map to the ones in Vonnegut’s How to write with style.)

I’m sensitive about this in part because I myself want to be good at exposition and never feel that I actually am. Still, personal hangups aside, if the guy is an expert and can communicate, with no tradeoff on either side, that’s rare enough—and must have needed work enough—that it should be celebrated.


I'm pretty sure that's not the official subtitle


Can you please provide three sources supporting your counter argument?


And I thought it was hit-on-the-head lessons here. And I don't want to argue about that.


I agree, it is a different skillset.

This also means that you will often find excellent easily digestible expositions of ideas from people who don't have a good grasp of the subject matter, which is problematic.


Yeah, the 'good at explaining / doesn't understand' quadrant of the 2x2 is an entire genre on twitter.


oh thx for sharing his background. this highlights something that i repeatedly find so frustrating.

this guy knows what he’s talking about. he has authority in his field. he has proven both theoretically and materially that his wisdom is high quality.

yet, videos from people with far less expertise on the subject matter will likely drown his out. over and over again i’ve seen the top videos on so many subjects repeating just plain wrong information but the “creator” says it with confidence so people just eat it up. even worse, so many times i’ve seen people in the comments try to nicely point out where the “creator” went wrong, and people go after the commenter for daring to imply this person on the magic screen might be misinformed.

it’s such a shame that being higher in the search says almost nothing about your knowledge on a subject and only means you’re better at manipulating the algorithm rather than having actual higher quality information.

i really wish that our searches would prioritize quality of information. at this point, years later, i think it’s pretty clear that the “wisdom of the crowd will shoot the highest quality to the top” has been proven to not be the case.

also, i don’t want to imply that amateurs can’t have valid and interesting perspectives, sometimes they might, just that being higher in search means quite literally nothing on wisdom of a subject—especially if a grifter can monetize off a trendy nuanced subject.

anyway, this video was incredible information. thx for sharing his background.


>> i really wish that our searches would prioritize quality of information

Use a free service, and the person providing it doesn't have an incentive to give you what you want.

Which is how we got current Google.


Except much of the alt media ecosystem of grifters and influencers is precisely providing people what they want to hear, and raising massive amounts of money doing so. They often don't think of themselves as grifters because they feel they are responding to a "market need", so they are providing a service even if they might not ethically agree with the output. This is how we wind up with so many conspiracy-theorist nutjobs dominating recommendation algorithms again and again across many different platforms.

People are highly, highly fallible, and what they want may be, in fact, some of the worst things for them. As such I'm convinced that most adults are adults only physically and legally, given how incredibly naive and immature people appear to be as a baseline regardless of cultural or even educational background.


yeah, i mean, this is hn, we’ve been collectively dissecting the “you’re not the customer” issue for years upon years.

it doesn’t make it less of a shame. and it certainly shouldn’t stop us from wondering.


IMHO, it's the monopoly aspect that really jumped the shark.

Google as one among competitors (read: pre-Android/~2005) still had to be a useful enough search engine to attract users.

Once their search share attained hegemony, user satisfaction deprioritized (relatively) and revenue was allowed to dominate.

And thus, we now get a Google who has little interest in weeding out SEO'd spam. (And not as in 'tweaking their algorithm' -- as in fundamentally detecting and delisting all recipe-story porn and answer-mills)


Google had the same issue pre-monopoly. Since its early history it’s been trying to show users what other users like. “Correctness” was never part of the equation because search engines that paid deference to authority all sucked.

You can easily start a search engine using a different model. You’re going to struggle to compete with anything that prioritizes what users find popular.


My memory may be faulty, but ~2000 I remember results being more objectively useful from a user perspective. Versus competitors.

Granted, it was a simpler problem then, given early web.

But the slide since (in the face of revenue counter-incentives) feels at best like ambivalence about objective value to user.


Circa 2000 the Web was a lot less commercial and filled with ads (and trackers).


The problem isn't that it is free. The problem is that it is funded by ads. Ads are a disincentive to providing useful information. That is fundamentally what an ad is. Someone paying to spread disinformation.


> That is fundamentally what an ad is. Someone paying to spread disinformation.

So when I see an ad for, say, Kenwood's latest radios in QST magazine, what would be the disinformation being spread? Or an ad for House of Staunton chess sets in Chess Life magazine?


They are presenting a perspective that is designed to not only present the product in the best light, but also to convince you that you should purchase it (whether that is true or not).

I think disinformation probably implies a level of malice that is not present (or rather, that is present in consumerist systems by default, and not specific to advertisers), but obviously trying to convince someone to purchase something which it might actually not be good for them to buy is misinforming them.

If a friend told you to buy a product and you did and it sucked, you'd be upset at them. But when a company does it with an ad, no one gets upset, because we understand that ads are lying to us.

We don't expect that they're being honest.

Or put another way, we expect that they are misinforming us about our need for, and appreciation of, the product.


Honestly I disagree with this pretty strongly.

First, I (and many people that I know) do get upset when an ad misrepresents a product and we get burned. It happens all the time, and people complain about it constantly. You might argue this is wasted breath, but it still happens.

I would also say that while everyone accepts that ads are at a high risk for dishonesty, the vast majority of people live their lives assuming that ads convey some useful information, even if it's just the vague intent and target audience of the product. It's easy to pick out specific genres of ads that don't seem to do this, and ad rhetoric can often be complicated and interesting, but generally speaking they do try to convey some sort of genuine information about the product.

Saying that ads are fundamentally misinforming people because they are "obviously trying to convince someone to purchase something which might actually not be good for them to buy" seems like a reasonable criticism at first glance, but actually I think that's kind of a ridiculous standard. No one except my closest friends and family has any idea what might be "good for me to buy", and I am not so against the idea of basic commerce that I think it's useful to say that someone hawking their wares is "misinforming" the public simply because they're trying to sell to strangers.

This is not to say that the ad industry isn't infested with slimy people with perverse incentives. Of course it is. But advertising itself is a basic extension of human communication, and I think it's awfully cynical to say that any attempt to show someone why they might want to buy your product is spreading misinformation. I do believe that it's okay to advocate for something, and then let people decide for themselves if they agree. If it's not, then nearly all human discourse is an exercise in misinformation.

(Inb4 "actually, nearly all human discourse is misinformation")


There is a massive gap between a vendor hawking their wares on the street, and modern ad campaigns and techniques.

To conflate those 2 as being remotely similar in levels of influence (even at an individual level) is insane.

Also, I'm not sure why you jumped to assuming that I apparently believe advertising shouldn't be allowed, just because I can acknowledge that it is based entirely on selfish intent by the ad creators?

All I said was that ads are a form of misinformation. You're the one drawing conclusions about what responses that must entail.

If you prefer to pretend it's not misinformation just because you apparently cannot reconcile that stance, and allowing ads to continue to exist, that's a "you" problem.


You've jumped to quite a few conclusions here.

You said that ads are misinformation because they try to convince you to buy something that may or may not be in your best interest to buy. My response to this is that's a ridiculous definition of "misinformation," and I disagree with it strongly.

You said nothing about levels of influence, or about modern ad campaigns versus word-of-mouth advertising, or anything of the like. I couldn't have responded to these things even if I wanted to, because you didn't say them. All you said was that selfish motivation == misinformation, and that's wrong, and I disagree with it.

Of course I suspected that you wrote your comment because of angst against the manipulations of "modern ad campaigns," and not because you believe that's actually a useful definition of misinformation, and you just proved it for me. You're trying to redefine words so you can use those words to give your point more oomph.

I also said nothing about you believing advertising shouldn't be allowed. Why do you think I said this? I didn't even imply it. I only said I thought you were being cynical.

It's true that there's a massive gap between a vendor hawking their wares on the street and "modern ad campaigns and techniques." There is also an entire gradient between these two things, and you see examples all along this gradient if you actually pay attention to advertising. The existence of this gradient is why I take issue with your original comment.


> All you said was that selfish motivation == misinformation

No I didn't, even according to you:

> You said that ads are misinformation because they try to convince you to buy something that may or may not be in your best interest to buy

Correct. Which is true if the thing they say you should buy, you should not in fact buy.

> that's a ridiculous definition of "misinformation," and I disagree with it strongly

You don't like that definition because you don't think that pressuring someone to buy something, when you have a vested interest in that thing selling, is inherently wrong. Not everyone has to have the same opinions about business rights and ethics as you.

> You said nothing about levels of influence, or about modern ad campaigns versus word-of-mouth advertising, or anything of the like.

No, you brought that into the conversation:

> I am not so against the idea of basic commerce that I think it's useful to say that someone hawking their wares is "misinforming"

Before that, we were clearly talking about modern ads, not "someone hawking their wares" as part of "basic commerce", which is obviously far more expansive a discussion, and includes said individual sellers.

> not because you believe that's actually a useful definition of misinformation

I am not limiting "misinformation" to only that, I am including that within the umbrella of misinformation, because it is.

Misinformation is simply, "Untrue or incorrect information." You are the one trying to redefine it to exclude common forms of misinformation we're used to navigating.

> I also said nothing about you believing advertising shouldn't be allowed.

Not explicitly. What you said was

> "I do believe that it's okay to advocate for something, and then let people decide for themselves if they agree. If it's not..."

Which is an inherent implication that I do not think it's okay to do so. I do think it's okay, I just can acknowledge that it's usually misinformation. As I said, the issue here is you being unable to reconcile something being misinformation, and that thing still being allowed.


I have a very strong distaste for our ad-based future, but saying ads are "paying to spread disinformation" is like saying joining the army is only for people who want to murder other humans.

There’s an element of truth, but it is way too reductive. Are you of the mind that word of mouth is the only moral way to grow a business?


> Someone paying to spread disinformation.

They prefer you to use the form "their take on reality". Misinformation is bad! Point of view is good!


Yeah, this frustrates me too. I also see this when, say, HN or Reddit is talking about a topic I know very deeply. People often say incorrect stuff in those situations, and it makes me think that in most situations we are spreading incorrect information.


If I'm talking to an audience that isn't educated in the subject matter, I'll often aim for "correct enough", and people saying "well actually" is super annoying and confusing to everyone involved. There is always more to learn, and you can always go deeper into a subject, but that doesn't mean everyone around you wants to go down that rabbit hole too; like 'cool flex bro.'


As a counterpoint, there's the number of folks teaching 'accessible' information that sounds good, and may even be plausibly kinda correct, but is actually harmful to understanding what is really happening. Or ancient aliens (as a 'theory'). Which is rife.


Is that a thing? I kinda want to search for it ... but if I'm not seeing any ancient alien shenanigans, I want to keep it that way.


The AI-generated content on YouTube is often 80% right, as is the human-generated content on some channels, and sounds very plausible (and is sometimes very hard to identify!) - but it presents hallucinated facts as reality for the other 20%. Quite convincingly too.

It’s in almost all topics now, from the Ukraine War to cooking.

And ancient aliens type stuff is all over the place - UFOs, pyramids, gov’t conspiracies of various forms, etc.

Not just YouTube of course, but very visible there.


Agreed, though I will say on HN there are usually (nearly always) people on there correcting the misinformation. Sometimes their voices are drowned out by downvoters who vote on emotion, or by a sea of spouters, but at least there is good discussion. Compared to Reddit and other places, it's amazing.


Frankly, the writing is bad. Overly verbose, poorly structured, heavily editorialized section titles.


Fun Sin City story...

https://forums.pelicanparts.com/off-topic-discussions/301497...

I'm the (formerly) young guy who rented the 550 to the production team for a few days. Pretty fun story to be tied to the movie.


Thanks for pointing this out, I probably would never have dug further into his blog otherwise, and there's a ton of other fascinating posts.


I'm glad to know that the person behind this post has real expertise, because at first glance the headline is a bit clickbait-like. (The content, though, makes excellent points.)


Also the author of the excellent DV Rebel's Guide book, which was an indie filmmaker's bible for a long time. Obviously the title gives away its age though.


Well, there was one bit of nonsense and I thoroughly enjoyed it. I'm referring to the Ren & Stimpy "Log Song" soundtrack to the video of the woman walking up the stairs:

https://duckduckgo.com/?t=ffab&q=ren+and+stimpy+log+song&atb...


I don’t know if it’s the same thing, but capture on my Nikon D7100 always felt more “manipulable” than capture on an iPhone or the like, I suspected as a downstream effect of using RAW format with a larger image sensor. Interpreting log through this understanding, it felt pretty intuitive reading through this post. I don’t know if it’s accurate, but it feels accurate…


One reason is also that phone cameras have many limitations, so to get good images they have to "cheat" to work around those limitations. Additionally they often apply filters to the images so it looks good "out of the box", like contrast, smoothing, sharpening. Those choices done for you mean you lose information to do better yourself.


Which is great if you consider that 99.98% of iPhone owners won't know how to do it better themselves.

It will be fascinating for anthropologists a few hundred years from now to see the increase in quality in «everyday photography» that came with the increasing quality of smartphone camera software.


I actually think what's happening is that it's averaging out photography, with such amazing tools in everyone's hands; however, we are also seeing an outlier explosion of creative photography that we haven't seen before.

That said, the dopamine hit comes from taking your own shots and seeing the visual perfectness Apple creates; rarely if ever will those shots give the same effect to a viewer. So they don't need to be photography-perfect, they just need to appeal enough to our monkey brains and monkey eyes to deliver that shot of dopamine and make us want to take more pictures, therefore using the phone and the resulting cloud storage more.


The magic of phone cameras disappears in a moment when you get hold of a mirrorless for 5 minutes. Even a bottom-end one is orders of magnitude better than the best phone camera, even if it's got a lot fewer megapixels.


I love my mirrorless, but it certainly wasn't 5 minutes. The first two lenses I tried (cheap ones obviously) were pretty underwhelming. Once I got a large aperture lens, I started to really get it. Even then, so many of my photos came out dark or blurry because I hadn't learned how to pick settings or focus for different lighting conditions and subject movement speeds. Autofocus on consumer cameras is pretty trash compared to iPhone/Pixel. EyeAF my ass.

These camera companies need to invest more in their software. Superzoom, night sight, subject tracking and smart autofocus should be table stakes. Auto mode on my mirrorless should at least be on par out of the box with my phone. It's sad that the pixel phones with very old Sony sensors can take better 10x pictures than mirrorless out of the box. They need to worry less about better lenses and sensors, and worry more about better onboard compute capabilities.


Lenses make SUCH a difference. A family member asked me to shoot their wedding (no pressure, right?) and since without me it wouldn’t have happened, I agreed. I also rented some L glass for my SLRs, and holy shit was that eye-opening. Turns out that a $2000 lens is objectively better than a $300 lens, who knew?

The clarity, the sharpness, the pop - everything was improved. Good glass is a bigger difference than the body.


Actually this is potentially wrong.

A $300 lens is objectively better than a $5 smartphone lens.

A $2000 lens may be objectively better than a $300 one, but it depends on what you're standing in front of, and on you.

The Nikkor Z 28mm f/2.8 is my favourite so far and it wasn't exactly expensive.

The priority order for things is of course:

1. What the photographer is standing in front of

2. The photographer

3. The lens

4. The camera


This also introduces the difference between zoom lenses and prime lenses. You can get a good 28mm lens for much less cost than a good 24-70mm zoom lens. Most novices in photography don't start nowadays with good prime lenses, but with cheap zoom lenses.


The 16-50 that came with my Z50 is really good too.


Great! Inexpensive zoom lenses are getting better all the time. And manufacturing processes are likely also improving. The gap is narrowing.

But, at least today, you still get enhanced features on the more expensive zoom lens, such as a wider aperture and a constant maximum aperture across the entire zoom range. Neither of those things necessarily yields a superior photograph -- you don't need f2.8 across the whole zoom range if you're taking pictures at f6 -- but they can be very helpful. Whether they're worth paying for depends on one's personal needs, desires, and budget.


>A $300 lens is objectively better than a $5 smartphone lens.

Not sure where you are getting the $5 figure from. In any case, smartphone lenses are manufactured in vastly higher quantities than lenses for interchangeable lens cameras, so it doesn't make sense to compare the per unit cost. Modern smartphone lenses are miracles of optical engineering. See e.g. https://news.ycombinator.com/item?id=30557578 The cost of the R&D that's gone into enabling their design and manufacture probably couldn't have been recuperated if they were being used only in cameras.


Fair enough; perhaps it's fair to say that given a specific application or lens type, a more expensive one will generally be better than a cheaper one. For example, you can get any prime or zoom you want from Canon as a normal or L variety. The latter will cost about 10x as much, and will be better. Whether it's 10x better is subjective.

On the flip side, my favorite macro was a Sigma 105mm prime. Tack-sharp, and cost well under $1000. Of course, I’ve never shot with the equivalent Canon L (which isn’t quite the same at 100mm, but close).


And the light. Good lighting can compensate for a not-so-great sensor.


Can you say a bit more what you think was the factor?

-- more control of depth of field / shallower DoF ability?

-- faster shutter speeds?

-- less chromatic aberrations?

It doesn't intuitively feel like sharpness should be a factor -- even cheap kit lenses usually get that right.


* wider f-stop

* less chromatic aberration

* less distortion generally

* smaller circle of confusion

The chromatic aberration is an important but subtle effect. Remember that lenses are multiple pieces of glass, and every interface refracts the different wavelengths of light by different amounts, like a prism. One of the considerations in lens design is converging all those different wavelengths of light in the same place. Not just at one point, but at every point across the image plane.

Poor lenses might do this well in one area. Good lenses do it everywhere.


Sorry, should have clarified. The lens in particular that made me rethink everything else I had was a 70-200mm f/2.8L. Zooms in particular often suffer from sharpness and chromatic aberration issues compared to a prime due to the larger number of optics. This lens did not. I’m sure a comparable prime stuck next to it would still show it up, but coming from kit zoom lenses, it was quite a shocking difference.

The static aperture also helps tremendously of course, yes - nice bokeh with a tight zoom means you can easily get candid portraits that look great from anywhere in the room.


70-200 f2.8 L IS III is the Bentley of lenses, the Aston Martin, the Maybach, etc. you got the best hardware possible for the job. for the price it better be amazing! even the older ones without IS are excellent.

L glass is also a very interesting used market - those things basically don't lose value IME.


It was the IS II at the time, but yes - an absolutely spectacular piece of kit. I think it was about $100 to rent for the weekend? Very reasonable IMO, and made me realize that one could quite easily bootstrap a wedding photography business without actually owning gear.

Other than the actual business side of things, pesky details like getting clients. And the massive stress of shooting a wedding. I was happy to do it gratis for family, but I don’t think I’d want to deal with paying clients.


When I was still shooting Canon, I used a 70-200mm f/4L which I picked up for a song (C$~600 sixteen years ago?). Not the beauty of a 2.8, but having a consistent 4 made for some beautiful shots on Cape Breton.


Lenses affect color contrast too. I don't fully grasp it, but it's something like internal reflections adding a neutral white bias, or correction tradeoffs between geometry and color. Aperture can be widened as much as the lens barrel allows, so that isn't it.


(Guessing the faster glass.)


I feel the same comparing my iPhone and mirrorless. It's obvious the software is years behind in almost every aspect; even relatively easy fixes like the horribly designed and unintuitive UI choices, where the same mistakes are made year after year despite complaints...ugh! The last thing I need when taking pictures is fumbling around in 5 layers of menus to change important settings while my subject moves on and the moment has passed. It almost feels on purpose, as if the thought is that added complexity is some proxy for being "professional".

If processing power is one of the bottlenecks to getting some of the features phones can do, it would be great if there was a universal hotshoe-like way to mount phones to camera bodies to use the screen, touch capabilities, and offload processing power; maybe with all phones now having USB-C it's more of a possibility. If the camera makers don't do it I wouldn't be surprised if Apple/Google eventually do and eat their lunch.


> relatively easy fixes like the horrible designed and unintuitive UI choices where the same mistakes are made year after year despite complaints...ugh!

This is exactly the complaint I have about car manufacturers not having enough / the “right” digital UX experience.


My mirrorless definitely beats my iPhone. But I have to put in more time, it’s not with me at all times and I need to transfer the images.

In the end, the best camera is the one I have with me. And if I can take a pretty good portrait shot of my kid while I’m at a diner then I’m happy.

To me, the memory/moment is more important than the “quality”.


The micro 4/3 M.Zuiko 45mm 1.8 is the bees knees - but the process of getting a useful shot into my family icloud album is so much work I rarely pull it out. Mirrorless really should be bodies with sensors and a thunderbolt connection to a smartphone.


Affinity Photo will let you export an image into the Photo library directly from the editor. I use it to do minimal edits (a bit of crop, exposure, maybe highlights/shadows) and then go File > Share > Add to Photos. It's a great workflow for a hobby photographer like me and I like that it is a perpetual license. Adobe products will let a pro fly through hundreds of images a day, but this is more than enough for quick edits and dumping the files into iCloud.

They offer a free trial, try it out! (I am not affiliated, just a happy user)

https://store.serif.com/en-gb/checkout/?basket=ed0b917180520...


Nikon's SnapBridge does that very well. Mine is tethered to my phone via WiFi when I'm out. Straight into my photo stream.


I bought a $1200 mirrorless which was supposedly the best in class a couple of years ago. All my photos look like they were shot on a potato compared to my iphone.

Not to mention that I don't walk around with that mirrorless camera in my pocket at almost all times.


I spent that and mine don't look like it's shot on a potato. Are you sure you know what you are doing?

My Z50 and kit lens fits in my pocket fine.


That's the whole point, I don't know what I am doing.


Fix that. Then you have a right to complain :)


I once asked someone with a nice piece of kit that wasn't too far from mine in cost how they chose it. They said they sorted top to bottom on the DxOMark list and bought one at the top, and they didn't even know what a prime is. But that approach seemed to work.


Put aperture, ISO, focus, and shutter speed on auto (if it's not a manual lens) and you will get good pictures.


I spent about that on a mirrorless and my photos blow smartphones out of the water.

It takes 10x longer to setup, take, and post-process and is a hassle for many reasons, but the photo quality is extremely noticeably better.


How much did you spend on the lens that went on it?

Camera bodies will keep being updated. Glass is updated, but a lot more stable.

If I have a friend who wants to invest, say, $3,000 in a camera setup, I'd tell them to get a $1,000 body and $2,000 of lenses. A couple have thought they'd buy a $2,500 body and $500 lens, and I explained why they might be disappointed with that investment.


I agree, but for the average user who takes sunset pics, kid pics or pet pics and then views those pictures on the same device they were shot on, Apple's incentive isn't so much about competing with full-frame mirrorless cameras, but instead to make the pictures shot on the iPhone look as good as possible on the iPhone. That way, the shooter gets the dopamine hit when they shoot something that triggers our visual sense in a positive way.

My Sony a7iv gives me the same dopamine hit, maybe more so, as there's no better feeling than getting home, loading your footage into DaVinci and seeing that your exposure, focus and colours are nailed (on the other side of the coin, it's a huge punch in the gut to get home and see that your focus is a little off, giving the opposite effect of a dopamine hit). But it's more of a process to get there, and the average user needs a faster feedback loop from shot to hit.


Ehh, no.

My typical test is to take a photo of the full moon. It works acceptably well on an iPhone (or Android). My recent Pixel phone even adjusts the brightness automatically. Sure, the lens is pretty wide-angle, so the pictures don't have many details.

I had to fiddle around for 20 minutes with settings on my Sony Alpha camera, eventually using manual focus and manual exposure. The pictures are, of course, better because of the lens and the full-frame sensor.

But the user experience is just sad. So I often just don't bother to take my camera with me on my trips anymore.

Also, a note to camera makers: USE ANDROID INSTEAD OF YOUR CRAPPY HOME-GROWN SHITWARE. Add 5G, normal WiFi, GPS, Play Store, a good touchscreen. You'll have an instant hit.


No. That's a hill I'll die on.

Ass end Nikon Z50, 250mm kit lens, hand held, no setup really other than shutter priority ... https://imgur.com/edCyNjV (very heavy crop!)

And a Pixel 6a mutilating a shot: https://imgur.com/290gXkU

I do not want Android on a camera. I don't want to update or reboot it. I want to turn it on, use it, and turn it off again. And I don't want someone substituting stock imagery for my pictures of the moon (hey Samsung!)


Wow, that Pixel 6 shot is awesome in all the wrong ways. I have no idea how it could have happened.

Of course, cameras with large sensors and lenses are going to be better than small phone sensors. Physics is physics. It's just that it doesn't matter that much for most people (me included).

> I do not want android on a phone. I don't want to update or reboot it.

I used a Galaxy Camera back in 2014. It was awesome. I could take pictures and automatically upload them to Picasa (RIP) or share them with people. The UI was also pretty good, but it was clearly a V1 without too much polish.

> I want to turn it on, use it and turn it off again.

I have an Onyx book reader that runs Android. It works just like this. I pick it up, press a button, and it shows the book I've been reading within a second. So it's clearly possible.


Great example of the primary difference here. I've said that the photos out of my mirrorless (also Z50, great camera) are true photographs in the sense that they capture light and show it to me.

My smartphone however does not create photos, it creates digital art based on the scene. Your Pixel image is a perfect example of how algorithms (now called "AI") re-paint a scene in a way that resembles reality when zoomed out.

Comparing smartphone and camera is really apples to oranges at this point, as smartphones aren't even capturing photos, they're entirely repainting scenes.


> Comparing smartphone and camera is really apples to oranges at this point, as smartphones aren't even capturing photos, they're entirely repainting scenes.

Calm down, it's not that bad. Take for example night sight or astrophotography; it's using ML to intelligently stitch together light across time because available light in one moment is not enough to capture anything intelligible. Your end result is an accurate representation of what your eyes see (e.g. my own face in a nighttime selfie) and what is sitting there in the sky (the stars). You can call that repainting, but I disagree, it's more information aggregation over the temporal dimension.

Super resolution is similar, using shakes in your hand to gather higher resolution than you can accomplish with a single frame of data from your low res sensor grid. 2-3x digital zoom with super res technology is actually getting more information and more like optical zoom. It's not just cropping+interpolating.

Now...portrait mode. That's clearly just post-processing. But also...does blurring the background using lens focus have any additional merit vs doing it in post (besides your "purity"-driven feelings about it)?

At the end of the day, I want my mirrorless to do more than be a dumb light capture machine. I spent $X thousand+ for a great lens and sensor, so I want to maximize. It should do more to compensate automatically for bad lighting, motion blur, etc. It should try harder to understand what I want to focus on. As a photographer, I should get to think more about what photo I want taken and think less about what steps I need to take to accomplish that. My iPhone typically does a better job of this than my $X000 mirrorless. So I use my iPhone more.


> Take for example night sight or astrophotography

Oh, speaking of astrophotography. It occurred to me that all those pretty images of remote planets and nebulas have been doctored to hell and back.

What I don't know is where I can find space images that show the visible spectrum - i.e. what I'd see if I managed to travel there and look out the window.

Is there such a thing?


Well, you're of course using the best example on the one side and the worst on the other side, so that's not really a fair comparison.

Apart from that: the phones generally try to compensate for their tiny sensors with highly complex software algorithms, creating something that sometimes only has a broad similarity with the original scene. The cameras, on the other hand, usually have crappy software and rely on their great sensors (and other hardware). So in an ideal world, you'd have a proper camera with good software. That software then doesn't have to do all the (good or bad) stuff which is only there to try to make the best out of less-than-ideal image input; instead it can provide more user-friendly features which allow making quick and easy photos without having to study tutorials for a week (yes, now I am exaggerating a little on purpose :)).

This software doesn't have to do all the crap that in any way reduces the image quality in the end.

Please don't just think in the extremes, but look for the healthy middle way that would provide the best out of both worlds.

It is not Android that does the image processing itself btw., but special software that the phone manufacturers add on top of Android. So this part would be the responsibility of the camera's manufacturer again, but this time they could focus more on their central use case (help making good pictures) instead of writing everything (like the user interface) themselves. And they could even provide their users with more options to extend their software for even better photos.


Please, no. A camera needs to be ready to shoot the moment I flip it on.

I also don't want it to have reduced battery life just so that I can use the god-awful playstore on it.


My eBook reader uses Android, and it's instant-on, after a week on my couch.

Instant-on Android devices are a solved problem.


I bet it's not coming from a cold start every time.


Of course. It does an equivalent of suspend-to-RAM after a few minutes of inactivity. It then can stay in this state for at least several weeks. I'm not sure for how long, I have never left my book reader for more than two weeks.

Cold reboot after updates takes about 20 seconds, like my phone.

This strategy can work for cameras.


Except for when it's fully off and you want to take a picture.


So turn your camera on once when you pack your things for the flight. It can stay in sleep mode for at least a couple of weeks without draining the battery.

Honestly, I don't see a problem.


Does the phone really take a sharp picture of the moon, though, or does it just add detail it knows is there?


At this point I would've imagined Apple would have a moon detection feature and just replace it with a stock image cutout when detected in the field of view.



Samsung actually do that.


I love my d7100 and z5. There are some pictures only they can take over a phone, but it would take a user much longer than 5 minutes to beat their iPhone. I’ve been carrying the current gen iPhone and my larger camera for years and I use the iPhone more and more. The shots are good and easy to set up. I mostly keep a zoom on my larger camera now to give me reach and often use the iPhone otherwise.


The magic of phone cameras lies in their convenience.


Only applies to the small portion of the population that enjoys the process. I could never appreciate digital cameras. Take a bunch of shots, then go home and filter those shots for the good ones, then adjust the color of those shots. No thanks, not my cup of tea.

Funny enough, I enjoy shooting film over digital. A lot less work and fewer decisions to be made.


I shoot film as well. It's definitely not less work but it is enjoyable.

As for convenience, the DSLRs can be tethered to your phone now. I shoot on mine and they go to Apple Photos.


For myself at least there is less mental load shooting film compared to digital. I am generally not taking multiple shots of the same thing, and I don't develop the film myself. Historically the two things I did not like about digital were too many photos to review and having to work on each photo at home. There is something about not having the choice of which shot to pick and how to adjust the colors that is nice.

I have been interested in some of the micro 4/3 cameras that have prebuilt filters in them but I think film for me is king if I have a camera.


You can run a digital camera like that too. I am more interested in composition than camera set up and spend most of my time shooting in aperture priority. At best I'll tweak the white balance but the camera mostly just deals with that for me. I take few photos. I spent 16 days on a trek recently and took about 50 photos in total.


Phone cameras are digital cameras too.


Ok let me clarify for you. In this context I am talking about point-and-shoot cameras, bridge cameras, DSLRs and mirrorless. Everything but a phone camera.


I’ve gotten far more amazing photographs (including exhibit quality large format ones) since smart phones became a thing than I ever had with an SLR. Because I always have the phone handy.

If you’re always walking around with a camera bag? Sure. If you’re regularly in beautiful situations without one? Eh…


Basically every news event in the last 15 years is caught on phone cameras. That's the magic. A device with which you can start streaming to the world in 30 seconds.


It's mainly zillions of photos of kids and pets and food, so it doesn't matter if other viewers aren't impressed; they're impressive to the person who took them.


Please point to some of your favorite examples of this “creative explosion”.

Here’s one I posted several months ago on intentional camera movement photography:

https://news.ycombinator.com/item?id=34858318


Yeah, from what I can tell, the vast majority of people's photos are only ever viewed on a phone.


I can’t help but nitpick that this isn’t really an anthropologists’ domain.


Why not? Studying a culture from hundreds of years ago and measuring its advances in various ways, technological and societal.. seems fitting for the National Geographic's definition of anthropology as "study of the development of human societies and cultures".


The point is moot. Apple and google cloud won't exist and all the photos will be gone.


You do know that people still print photos, right? Some of them will definitely survive. And unless the human civilization collapses I don't see why some digital media wouldn't survive either.


Speaking of photography and the progression of technology (and not spoiling the amazing final scene of the story) I highly recommend reading this poignant prescient classic:

"The Wedding Album" short story by David Marusek

>"Wait a minute!" shouted Benjamin, waving his arms above his head. "I get it now. we're the sims!" The guests laughed, and he laughed too. "I guess my sims always say that, don't they?" The other Benjamin nodded yes and sipped his champagne. "I just never expected to be a sim," Benjamin went on. This brought another round of laughter, and he said sheepishly, "I guess my sims all say that, too."

https://en.wikipedia.org/wiki/The_Wedding_Album_(short_story...

>"The Wedding Album" is a science fiction short story by David Marusek. It was first published in Asimov's Science Fiction in June 1999.

>Synopsis: After their wedding, Anne and Ben realize that they are merely recordings of the real Anne and Ben, destined to relive the hours surrounding the wedding for all eternity.

https://www.goodreads.com/en/book/show/13576562

>With wedding photos and videos and mementos of all kinds, newlyweds attempt to hold on to their special day and to cherish it forever. Someday technology may enable us to record not only our appearance and voices but everything we know, feel, fear, and love at the moment the shutter clicks. Then our wedding mementos, like Anne and Ben’s in this story, take on a life of their own in a world where love may be eternal, but the world is not. Till deletion do us part . . .


Apple and Google seem dominant now, but how much stuff do you have from Yahoo and Myspace and Flickr?


> but how much stuff do you have from Yahoo and Myspace and Flickr?

A fair amount. Not everything, but a decent chunk. You don't store things in multiple locations?


I don't understand why that's relevant. I never upload my photos anywhere. I'm pretty sure my photos that I have printed and/or stored locally don't go anywhere if Google shuts down tomorrow.


I think just the fact that people started practicing everyday photography and seeing hundreds of shots done by peers every day will be a huge factor in raising quality as well.


Yes, nothing really wrong with this. Just pointing out that what hits the sensor is far off from what's being saved.

However, some phones now even apply AI filters to fill in detail it didn't capture. Like adding craters to the moon.

And the thing about contrast, sharpness etc is that "more always looks better" at a quick glance. So when people are doing comparisons between phones etc, the one destroying the picture the most might be declared the winner.


You remind me of a comment I heard from a user, "I like the magic wand button in Photos. Why doesn't it just automatically magic wand every photo?"

Ha ha, I have no idea. Almost a decade ago I worked on the Photos team for a stint, I should have asked.


Maybe similar to the (alleged) reason cake mixes require you to add an egg: it makes it feel more like you baked it yourself if you have to do more than just add water and put it in the oven.

Maybe users feel like they ‘touched up’ their photo if they tap the magic wand button.


I wonder if there is a big enough market for a middle-ground device. A good SLR-like camera which has a great sensor and a lens but a good control compromise on the software. Not fully managed PaaS but the VM equivalent, if you consider the Nikon D7100 as the bare-metal Linux box of the 90s.


Honestly, the iPhone with an app like Halide is this. In some cases, iPhone-processed photos and videos are almost indistinguishable from those from thousands of dollars' worth of camera, but sometimes they go overboard with the processing. Halide lets you dial that back a bit.


It's not my primary job, but I do some pro and fine art captures as well as video compositing and photo editing– the pro phones definitely have their uses just as my little micro 4/3 and full frame DSLR do, both with still and video work.

With glass and a sensor that small they aren't for everything, but the days of mandatory compression, limited color depth, mandatory "enhancements" and all of that stuff are over. If I'm shooting a handheld gimbal video, of something like a person talking outdoors not in direct sunlight, I'm grabbing that iPhone Pro without thinking twice.


Well, except that log (as well as almost every other format: "gamma" refers to the exponent of a power function!) is the opposite of raw: raw is focused on fidelity to the number of photons, log and others on fidelity to how the (human) eye reacts to these photons:

https://prolost.com/blog/rawvslog/


This is almost entirely due to sensor size. The sensor on the iPhone is smaller than your pinky nail, with pixels between 1µm and 2µm in size (depending on which camera is used); the Nikon, on the other hand, has pixels over twice the largest size on an iPhone.


This video is excellent. About halfway through I was thinking, "Oh so this is like RAW for video" and then seconds later he gets to explaining how it's not exactly RAW.


The concept of Log seems needlessly confusing from a (still) digital image processing perspective, in which I have some experience.

Firstly, the name is "Log" (for logarithmic), but isn't that what gamma has done in color spaces like sRGB since forever? "Normal" video standards like BT.709 also have non-linear transfer functions. I don't get why "log" is stressed here. Maybe it just means a different/higher gamma coefficient (the author didn't talk much about the "log" part in the article).

And the main feature of it, at least according to this article, is that it clips the black and white levels less, leaving more headroom for post-processing.

This is definitely very useful (and is the norm if you want to do something like, say, high quality scanning), but I failed to see how it warrants a new "format". You should be able to do that with any existing video format (given you have enough bit depth, of course).


For some reason you're getting a lot of wrong or just bad replies. But the answer to your question is: yes, both sRGB/gamma 2.2 and log are non-linear, but almost in opposite directions. Gamma 2.2 is a power curve, not logarithmic. As in, it's spending all its bits on the lower half of the brightness range, whereas log is actually spending more bits in the highlights.

It actually looks more like HLG in this way.

https://www.artstation.com/blogs/tiberius-viris/3ZBO/color-s... has some plots of the curves to compare visually
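
If it helps to see it numerically, here's a minimal sketch (pure Python; the sRGB formula is the standard one, but the log curve is a made-up illustrative one, not any vendor's actual S-Log/V-Log/C-Log). The thing to notice is that sRGB clips everything above nominal white, while the log curve keeps assigning distinct code values to several stops of highlight headroom:

    import math

    def srgb_oetf(linear):
        # Standard sRGB encoding: linear light -> stored code value, clips above 1.0
        linear = min(max(linear, 0.0), 1.0)
        if linear <= 0.0031308:
            return 12.92 * linear
        return 1.055 * linear ** (1 / 2.4) - 0.055

    def toy_log_oetf(linear, min_stop=-8, max_stop=4):
        # Illustrative log curve (NOT a real camera curve): spreads 12 stops of scene
        # light, including 4 stops above nominal "white", evenly over the 0..1 range.
        if linear <= 2 ** min_stop:
            return 0.0
        return min(1.0, (math.log2(linear) - min_stop) / (max_stop - min_stop))

    for lin in [0.01, 0.18, 1.0, 2.0, 4.0]:
        print(f"linear {lin:5.2f} -> sRGB {srgb_oetf(lin):.3f}   toy log {toy_log_oetf(lin):.3f}")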


> almost in the opposite direction

I think you're mixing up OOTFs and EOTFs here. sRGB or HLG can refer to the stored (encoding) gamma, but more often means the EOTF, the "reversed" gamma that is used to display an image. When we refer to "log", this almost always means a camera gamma - an OOTF. So the reason it's "in the opposite direction" is that it's designed to efficiently utilize bits for storing image data, whereas the EOTF is designed to reverse this storage gamma for display purposes.

As you can see from the graph in [1], Sony's S-Log does indeed allocate more bits to dark areas than bright areas. (Though the shape of the curve becomes more complicated if you take into account the non-linear behavior of light in human vision.)

[1] https://www.enriquepacheco.com/blog/s-log-tutorial


> When we refer to "log", this is almost always means a camera gamma - an OOTF.

Wouldn't this be the OETF? OOTF would include the EOTF, which is typically applied on the display side (as you noted).


You're right, looks like I got the acronym wrong. I'm referring to OETF.


I've seen that "S-curve" in multiple places, but I don't get it still: how is that a logarithmic curve/graph?


That's neither logarithmic nor what log cameras capture. See the link posted by the sibling comment[1] for the actual curves.

If you see an S-curve, that's usually what you will try to map the captured images to, because it allows for increased detail in both shadows and highlights, while allowing a natural dynamic range in the middle areas. Log capturing allows you to have a much higher dynamic range (with a given number of bits), and thus more easily map to the S-curve that you want.

1: https://www.enriquepacheco.com/blog/s-log-tutorial
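
To make the "map the flat log image onto an S-curve" step concrete, here's a toy sketch (Python; smoothstep is just a stand-in for whatever contrast curve or LUT a colorist would actually apply). Midtones keep roughly their natural slope while shadows are pushed down and highlights pushed up, with gentle roll-offs at the extremes, which is what restores punch to flat-looking log footage:

    def s_curve(v):
        # Smoothstep: a simple S-shaped contrast curve (stand-in for a grading LUT)
        return 3 * v * v - 2 * v * v * v

    for code in [0.1, 0.3, 0.5, 0.7, 0.9]:
        print(f"log code {code:.1f} -> graded {s_curve(code):.3f}")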


I think that s-curve is the target (i.e. the overall end-to-end system gamma, combining the encoding and decoding transfer functions). Ideally it's linear, to reproduce the source exactly, but for various reasons [1] sometimes it's preferable to have a gradual roll-off.

[1] https://www.dpreview.com/forums/thread/3081411


> This is definitely very useful (and is the norm if you want to do something like, say, high quality scanning), but I failed to see how it warrants a new "format".

This warrants a separate answer. Cameras are getting to the point where they can capture far more information than we can display. Hence, we need a lot of bit depth to accurately store this added precision. But adding bits to the data signal requires a lot of extra bandwidth.

In principle, we should just store all of this as 16/32bit FP, and many modern NLEs use such a pipeline, internally. But by creating a non-linear curve on integer data, we can compress the signal and fine-tune it to our liking. Hence we can get away with using the 8-12bit range, which helps a lot in storage. With log-curves, 12bit is probably overkill given the current sensor capabilities.

There's a plethora of log-formats out there, typically one for each camera brand/sensor. They aren't meant for delivery, but for capture. If you want to deliver, you'd typically transform to a color space such as rec.709 (assuming standard SDR, HDR is a different beast). The log-formats give you a lot of post-processing headroom while doing your color grading work.
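
As a rough illustration of why the curve matters at a fixed bit depth, here's a sketch (Python; the 12-stop range, the 16x-of-white linear peak, and the curve itself are arbitrary assumptions, not any real camera's numbers) counting how many distinct 10-bit code values fall inside one stop of deep shadow when you store linear light directly versus through a log curve:

    import math

    BITS = 10
    LEVELS = 2 ** BITS - 1  # 1023

    def encode_linear(x, peak=16.0):
        # Store linear light directly as a 10-bit integer (0..peak -> 0..1023)
        return round(min(x / peak, 1.0) * LEVELS)

    def encode_log(x, min_stop=-8, max_stop=4):
        # Store log2 of linear light as a 10-bit integer (12 stops -> 0..1023)
        if x <= 2 ** min_stop:
            return 0
        v = (math.log2(x) - min_stop) / (max_stop - min_stop)
        return round(min(v, 1.0) * LEVELS)

    # Distinct code values available inside one shadow stop (linear 0.01 .. 0.02):
    shadows = [0.01 + i * 0.0001 for i in range(101)]
    print(len({encode_linear(x) for x in shadows}), "linear codes vs",
          len({encode_log(x) for x in shadows}), "log codes for that stop")

With the log curve every stop gets roughly the same number of codes (about 1023/12 here), which is exactly the shadow headroom colorists rely on when pulling exposure around in post.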


> Cameras are getting to the point where they can capture far more information than we can display.

Haven't professional-grade microphones been in a similar situation for decades now, or is it the magic of remastering that keeps recordings from the 50s sounding so good on modern speaker systems?


> Haven't professional-grade microphones been in a similar situation for decades now

Not really the microphones themselves since microphones today and decades ago all deliver an analog signal which contains way more information than our ears can process (but some amount of noise too which may or may not be audible).

The technology difference is in the analog-to-digital conversion (ADC), which converts that analog signal to a stream of integers.

The difference between audio and video is that essentially since the dawn of digital audio, devices have been able to produce as much information as our ears are able to distinguish. The standard digital sample rate since CDs first shipped is ~44k, which can represent frequencies all the way up to 22k, which is beyond the range that almost all people are able to hear. The standard bit depth of 16 bits can likewise represent as much dynamic range as humans are able to distinguish.

(Hi-fi enthusiasts may argue with these claims but I consider that whole area to be almost entirely snake-oil and magical thinking. Actual scientific studies show that 44k 16-bit audio is indistinguishable from higher sample rates or bit depths.)

People working with audio may want higher sample rates and bit depths because, just like with coloring in the article, it gives them more leeway to change the audio while still producing a final result that covers the whole frequency and dynamic range. But for end listeners, 44k/16 is fine and has always been fine.

Video is very different. Our eyeballs can capture a monumental amount of input using a very complex, adaptive system. Eyes don't have a single well-defined "resolution" or "framerate" but basically digital video has been noticeably lower resolution and lower framerate than we're able to perceive for a long time and is only recently starting to approach perceptual limits.
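
The back-of-the-envelope numbers behind the CD-audio claim, for anyone who wants to check them (Python, just arithmetic):

    import math

    sample_rate = 44_100   # CD audio samples per second
    bit_depth = 16

    nyquist = sample_rate / 2                            # highest representable frequency
    dynamic_range_db = 20 * math.log10(2 ** bit_depth)   # quietest-to-loudest quantization range

    print(f"Nyquist limit: {nyquist:.0f} Hz")            # 22050 Hz, above typical adult hearing
    print(f"Dynamic range: {dynamic_range_db:.1f} dB")   # ~96 dB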


Why would you still assume SDR? Aren't we talking about amateur photography here?

But yeah, I've been wondering for a while now why nonlinear formats would use integer values?!


I'm suggesting rec.709 because it's what is a currently expected default for a screen. In your typical setup, your working color space is something like ACEScct or DWG, so you can map to several possible output formats with a little extra work if needed.

The integer values are nice because existing video formats encode things as integers. So we can just stuff our stuff inside a codec we already have rather than having to reinvent the wheel on the codec side as well. Re-purposing existing toolchains for new uses tends to be a thing that gets a lot of traction compared to building a new one from scratch. Even if the newly built toolchain is far better.


Aren't most phone screens "HDR" these days, and for years now? (And Apple had wide gamut with excellent OS compatibility on computers for even longer?)

Yes, but why are existing formats like this? We have been through quite a lot of new formats, especially video formats, in recent years...


The transfer functions in your (rec.709) color space are non-linear indeed. However, the pixel values you store are in a linear relationship with each other. The difference between values 20 and 21 is the same as the difference between values 120 and 121, assuming an 8-bit signal. I.e., the information is the same for all pixels. Further down the chain, these values are then mapped onto a gamma curve, which is non-linear.

What the "log"-spaces are doing is to use a non-linear relationship for the pixel values, as a form of lossy compression. If the signal has to factor through 8bit values, using a compression scheme before it hits the (final) gamma curve is a smart move. If we retain less precision around the low and high pixel values and more precision in the middle, we can get more information from the camera sensor in a certain region. Furthermore, we can map a higher dynamic range. It often looks more pleasing to the eye, because we can tune the setup such that it delivers a lot of precision and detail where our perception works the best.

In short: we are storing (8-bit/10-bit) pixel values. The interpretation of these values is done in the context of a given color space. In classic (rec.709) color spaces, the storage is linear and then mapped onto a non-linear transfer function. In the "log" spaces, the storage is non-linear and is then mapped onto a non-linear transfer function. In essence we perform lossy compression when we store the pixel in the camera.
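To make the two storage strategies concrete, here's a rough sketch (using a plain 2.2 power gamma and a generic pure-log curve for illustration; real curves like sRGB or Apple Log have extra pieces near black):

    import math

    def gamma_encode(linear, gamma=2.2):
        # classic display-referred storage: a power curve
        return linear ** (1.0 / gamma)

    def log_encode(linear, black=0.001):
        # generic log storage: equal increments per doubling of light
        return (math.log2(max(linear, black)) - math.log2(black)) / -math.log2(black)

    for x in (0.001, 0.01, 0.1, 0.5, 1.0):   # scene-linear values
        print(x, round(gamma_encode(x), 3), round(log_encode(x), 3))

Both map scene-linear light into a 0-1 range that then gets quantized to 8- or 10-bit codes; the difference is only in where along that range the precision ends up.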


> The transfer function in your (rec.709) color space is non-linear, indeed. However, the pixel values you store are in a linear relationship with each other. The difference between values 20 and 21 is the same as the difference between values 120 and 121, assuming an 8-bit signal. I.e., the information is the same for all pixels.

Difference on what scale? ... because (hint hint) it's not number of photons that hit the sensor. Nor is it photons emitted from the display.

The truth is, it's a linear measure of the voltage which drives the electron beam of a CRT. Not a very useful measure anymore, but we've encoded this response curve into all of our images and, now, this is proving to be a mistake.

Working with images would be so much easier if we stored values that represent linear light (i.e. proportional to photons entering/leaving a device) with no device curves baked in. Log formats do this, but because the order of magnitude of light matters more than the absolute value, they store the log of the value. It's a more efficient use of bits in storage / transmission.


A gamma curve is needed because human eyes are more sensitive to differences in darker areas than in brighter areas (Stevens' power law), so with gamma you can get away with a lower bit width without (perceptually) introducing noticeable banding.

The fact that it somewhat matched the response curve of a CRT seems to be a historical coincidence, based on multiple sources I've read over the years.

I do agree we should have gotten rid of it at this point, as it introduces many errors in color blending.

> Log formats do this, but because the order of magnitude of light is more important than the absolute value, it takes the log of the value. It's a more efficient use of bits in the storage / transmission.

It is literally what a gamma curve does, in a slightly different but mathematically equivalent way.

"Order of magnitude of light is more important than the absolute value" is exactly what Stevens' power law describes.


No, gamma2.2/sRGB is how the pixels are stored on disk, not linear values. Linear is almost never used except as an intermediate for processing where you can throw lots of bits at it (eg, fp16/32 on a GPU when applying effects or whatever)

The difference is how the curves prioritize what bits to keep. Rec709 sacrifices the bright end to keep more detail in the darks, whereas log is more like linear perceptual brightness. So it'll have less low light detail but more bright detail by comparison


Thanks for the technical details.

I think I get what you mean (in term of implementation), but can't we just alter the transfer function further so there are more values used for the mid-range colors?

The two-step process you said (the storage is non-linear and is then mapped onto a non-linear) is basically equivalent to a singular transfer function which is the combination of two curves, since the sampling process itself is lossy.


Yes, you can combine the steps to make it more efficient. In that view, it's "just" a different gamma curve. It's far harder to "split" a curve than combining two steps though.


RAW formats on digital cameras also store data in a log format. The RAW conversion process normally converts that to a color space, along with (for most cameras) running the de-Bayer algorithm.

The built in converter that produces JPG files in the camera does this too.

Our eyes seem to perceive light linearly, but the response is really logarithmic.

There is really no difference between video and still here, it's just that it's more normalized at the consumer level to deal with RAW formats at this point for stills.
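A toy illustration of the de-Bayer step mentioned above (naive 2x2 binning of an RGGB mosaic using NumPy; real converters interpolate to full resolution with edge-aware filters, this is just the idea):

    import numpy as np

    def debayer_rggb_2x2(mosaic):
        # Collapse each 2x2 RGGB cell into one RGB pixel (half resolution).
        r  = mosaic[0::2, 0::2]
        g1 = mosaic[0::2, 1::2]
        g2 = mosaic[1::2, 0::2]
        b  = mosaic[1::2, 1::2]
        return np.dstack([r, (g1 + g2) / 2.0, b])

    mosaic = np.random.randint(0, 2**14, size=(4, 6)).astype(float)  # fake 14-bit raw
    print(debayer_rggb_2x2(mosaic).shape)   # (2, 3, 3)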


> There is really no difference between video and still here, it's just that it's more normalized at the consumer level to deal with RAW formats at this point for stills.

Fun trick: convert ProRes RAW to cDNG. You now have 59.94 raw DNGs per second to choose a photo from.


> but I failed to see how it warrants a new "format". You should be able to do that with any existing video format

It's about support.

The .zip format supports LZMA/ZStandard compression and files larger than 4 GB. But if you use that, a lot of software with .zip support will fail to decompress them.

The same way with log. While in theory you could probably make .mp4 or .mkv files with H264 encoded in log, I bet a lot of apps will not display that correctly if at all.


From the article:

>> ... in DaVinci Resolve ... choose Apple Log for the Input Gamma ...

Indeed, it just sounds like a different choice of curve, perhaps better suited to the HDR capabilities available today.

PS: I did not read the article in detail. My first reaction was to just search there for 'gamma' in the article to see how 'log' is being compared to it.


> Standard iPhone video is designed to look good. A very specific kind of good that comes from lots of contrast, punchy, saturated colors, and ample detail in both highlights and shadows.

I remarked to my wife showing me a video recently that you could tell it was taken on an iPhone. I don't think it's just the 'punchiness'; for me the main thing is the way it seems to attempt to smooth out motion. The 'in' thing seems to be to sort of spin around showing what's around you while selfie-vlogging and tik-tokking and what-notting, and iPhones make it look like you did it with a steadicam rig that's not quite keeping up.


Another thing they've done more recently is HDR video (to my cave man brain, this means brighter brights).

They've paired this with much higher brightness on the screens, which makes the videos look much more realistic. I first noticed this on my M1 Pro screen, which absolutely blew me away (1600 nits peak brightness).

That's the biggest telltale "filmed on iPhone" trait I'm noticing right now. Yes, you can create HDR videos in other ways, and I'm sure it will be more popular on other platforms soon.


And 1600 seemed crazy. And now Google's Pixel 8 Pro has 2400 nits?!


Someone used an iPhone to record their desktop screen playing Call of Duty, and the top comment on Reddit was how it made the game look Disneyesque, a spot-on assessment.


Do you have the link by any chance?


> I remarked to my wife showing me a video recently that you could tell it was taken on an iPhone

It's also relatively understood that certain camera companies (Nikon, Canon, Sony, Fuji) have a certain 'look' to them in how they process the raw image sensor data to generate a JPEG (there are differences in the final colours).


I know exactly what you mean by this! I can always tell if it was taken on an iPhone -- not that it looks bad, or anything, but there's always a few little cues that make it obvious. As you mentioned, I think the motion is a large part of it.


To add: a few generations ago, handheld video shot on iPhones was not stabilized (or hardly effectively so). But now iPhones have good stabilization. I think the tradeoff (the too-smooth motion thing) is worth it.


That’s a specific camera mode (action mode I think). Does the standard video mode also do heavy stabilisation?


Does it perhaps auto-enable when it deems it appropriate?

I don't have an iPhone, I've just noticed this (perhaps it's more obvious to me because I don't have one) in others' videos.


The stabilization on phones is partly physically driven; this is called OIS. https://www.androidauthority.com/image-stabilization-1087083...

EIS is not usually needed for video but maybe in some cases it’s used?


I suspect iPhones do both. I have a 12 Pro, and when taking video the file is much more stable than the viewfinder while recording.


If I was a prosumer/hobbyist video equipment company, I'd be terrified about what Apple does next. They already have significant penetration into the editing market (both with Final Cut, and codec design), they control a number of the common codecs, and they have _millions_ of devices in the field along with substantial manufacturing capability. The cinema end aren't in trouble yet IMO, but the rest should be concerned...


Cell phones already killed standalone cameras: https://d3.harvard.edu/platform-digit/wp-content/uploads/sit...

This is just the mop-up operation. The only products left are going to be super-telephotos for live sports (sales: a hundred a year, if that?) and 4K+ IMAX digital cine cameras.


Not even close. The pocketable point-and-shoot cameras? Sure. DSLRs? Not a chance. I’ve gone the upgrade path from a Canon 6D to a 5D4 to an R6. The R6 especially is phenomenal, and there isn’t a single phone that can even try to come close to what it can accomplish, even in “auto” mode.


The point isn’t technical ability, it’s market share. And smartphones have decimated DSLR market share despite being less technically able.

I’m a data point in that: I bought a DSLRs and a few lenses probably 15 years ago. Over the years I used them less and less to the point that they’re gathering dust now. It isn’t worth the extra bulk when I head out the door, smartphone cameras are good enough.


Decimated isn't that grim, still leaves 90% of the market. :^)


Words change in meaning over time. This isn't your Roman decimated.


It's the content of the photos that matters though, and on that front unless you're into very specific types of shooting most people don't care about any advantage "Pro" gear brings.

If you're out there shooting a hundred basketball games a year or you're camping out in a swamp every weekend to get a picture of a bird it matters.

But that was never the majority of people buying "Pro" and "Prosumer" camera gear. For the vast bulk of the market the smartphone camera gets the job done at a fraction of the cost, way less stuff to carry around, and a much better workflow.

Too many hobbyist photographers seem to miss the forest for the trees here, no one cares about how sharp the picture is or how much dynamic range there is if the content of the photo isn't compelling.


Also, for someone doing street/landscape/portrait photography just for fun and instagram, iPhones will give you a nice image out-of-the-box. Fuji and Ricoh also do this with their built-in "filters", but it's more involved and specific. And you still need to send the pictures to your phone after.

But my experience with a Sony is that you need to do at least an auto-correct on Lightroom/whatever before sharing.


Canon has nice configurable presets and has had them for a very long time if you want to avoid Lightroom/whatever.

But an iPhone/Android is much much less conspicuous, which is a huge advantage for street photography.


You’d be surprised. Quite a few “amateur” or “instagram” photographers are buying them now because mirrorless tech has greatly improved affordability. The sales numbers show a nice uptrend over the past 3 years.


> The pocketable point-and-shoot cameras? Sure.

The Olympus Tough series may be the last man standing in that category until Apple makes a shatterproof and diveproof iPhone.


I feel like that niche is primarily occupied by the GoPro as most people who put cameras in such extreme situations seem to enjoy posting videos over photos. For anything more than that you might as well get a dive case - the Tough’s water depth rating isn’t that deep and you can get a dive case for an iPhone that will give you 2-3x the depth.


They aren't diveproof though. Even the entry level PADI Open Water qualification allows you to dive deeper than the Tough series is capable of. Past the 15m mark you need a housing, at which point you would be better off with an SLR in a housing.


I thought I read DSLRs are dead? Everyone is making mirrorless now?


The R6 is marketed as a mirrorless DSLR. Obviously the SLR part isn't accurate but the way you use the camera with interchangeable lenses is about the same. So for the user it's just a technology and feature upgrade more so than a new product.


The point he's making is about the sensor and the glass, not the mechanics of the camera body?


It’s essentially become a common term like Kleenex.


Mostly, yes, but not completely. A better generic term might be an ILC (interchangeable lens camera).


Most of the manufacturers are focused on mirrorless, but the DSLR market is still alive.

Pentax has bucked the trend and has continued to release new DSLRs as recently as this year.


ppl also use DSLR to refer to the body type


They killed consumer point and shoots, not professional interchangeable lens cameras.


Smartphones changed the market such that people who just want to shoot good photos of their family don’t need to buy expensive cameras anymore.

But photography with dedicated cameras is alive and well, and won’t go anywhere anytime soon even as these phones get better and better.

The super telephoto market is alive and well, and Wildlife photography in particular is a big contributor to this. When Olympus released their 150-400mm (300-800mm full frame equivalent) super telephoto aimed at wildlife shooters, it was sold out for almost a year.

For me, the new iPhone means I can shoot B-roll footage that looks great, but this will not replace my main camera anytime soon. It’s currently far more viable for high quality video than it is for high quality photographs.


maybe for the average consumer. but how many professional photographers do you see using an iPhone?

sensor size matters for low-light stuff too. sure, an iPhone can do a pretty good job at taking several pictures over say a 2s. exposure, but there _will_ be artifacts in the shot as there isn't physically enough light to form a legible image regardless of post-processing.

this is just one of many reasons why digital cameras are NOT at the brink of collapse yet.


The folks working for your local news org are getting paid to take photos on phones. Almost all of the people you would probably consider “professional photographers” in that industry got laid off years ago.

Watching them take photos on their iPhones at high school sporting events is always painful.


I've never seen a wedding photographer using an iPhone, and the ratio of wedding photographers to news org photographers is probably 100:1 if not more.


If we’re including the journalists using iPhones then no, that’s not going to be the ratio.

For what it’s worth, I’m a professional sports photographer (side gig obviously), and I don’t get paid for iPhone photos. I’m not disagreeing with you that iPhones cannot replace dedicated cameras, but they are a lot closer to replacing them for weddings than they are for sports.


I think the only reason why wedding photographers won’t stop using their dedicated devices is the appearance of professionalism they give.

But yeah, there is only so much advanced computational photography can improve - you can probably do a fairly good job for a slow scene like a wedding, but fast movements are hard to capture with small sensors.


Yea, but wedding photogs are businesspeople, who respond to customer expectations. If they show up with “non professional” equipment, they tend not to get referrals, in spite of whatever photo quality they deliver.


And yet digital camera sales have only halved since 2003? (But I guess that we should be looking at all cameras for this, not just the digital ones?)


By unit sales, or by deflated dollars purchasing increasingly niche priced units?


Nitpick:

It’s not the sensor size that matters (larger sensors actually have more noise: that’s why phone photos can look as good as they do). Stop and think for a second: where does the extra light captured by a larger sensor come from?

What actually matters is the physical aperture of the lens. What a large sensor forces you to do is use a larger physical aperture to get the same focal ratio (“f-stop”) and field of view. That’s how you get more light. (the larger physical aperture and constant focal ratio implies a longer focal length, so the math works out)

If you do the math, the larger physical aperture more than compensates for the extra noise of the larger sensor (signal to noise of the system scales as sqrt(sensor_dimension)), so camera systems with larger sensors and the same focal ratio have better noise figures. But it’s not directly due to the sensor.

You can compensate for a lot of that effect by simply installing a lens with a larger focal ratio on a small sensor. That’s because it turns out to be easier to have a high focal ratio when the lens is small: the shorter focal length (for a given field of view) requires a smaller radius of curvature, so controlling chromatic aberration and circle of confusion is easier at higher focal ratios.
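A rough worked example of that (numbers are illustrative, assuming a hypothetical 6.9 mm phone lens that frames roughly like a 24 mm full-frame lens): at the same field of view and the same f-stop, the longer focal length forced by the bigger sensor implies a much larger entrance pupil, and therefore much more collected light.

    import math

    def pupil_area_mm2(focal_length_mm, f_number):
        d = focal_length_mm / f_number        # entrance pupil diameter
        return math.pi * (d / 2) ** 2

    phone = pupil_area_mm2(6.9, 1.8)          # ~11.5 mm^2
    full_frame = pupil_area_mm2(24.0, 1.8)    # ~140 mm^2

    print(round(full_frame / phone, 1))       # ~12x more light at the same framing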


I honestly see a heck of a lot of wedding photographers using them in some capacity now.

I also see a lot of outdoor photographers using them to save on weight (and pairing with some type of spotting scope when needed).

Digital cameras are definitely not on the brink of collapse, but I do see phones being used to either augment or replace specific scenarios more and more.


Looks like they killed cameras with built-in lenses. Cameras with interchangeable lenses, which would've been used by the pros, have kept their market share steady, if not grown a bit.


For taking photos and sharing them in the digital-only space, sure, I’ll buy that for the regular consumer. Making prints will expose all the small-sensor flaws quite quickly. I know it gets better every year, but I used my phone camera (14 Pro) to capture a few important shots that I would do anything to go back and retake on a full-frame sensor or film.


I hate pixel peeping on phone photos. Have a 13 Pro Max and it has a really weird mushy postprocessed look. I always bring an oldschool Nikon DSLR with me when I care about getting some nice shots.


Using a third party app and capturing native raw photos helps avoid that mushy post processed look. It trades it off for more noise, but the trade off is worth it imo. Halide is my app of choice but there are a number of good ones.


Well, Apple seems very cozy with Blackmagic Design, but that doesn’t mean they aren’t going to be sherlocked. Apple already has offerings in all these categories; it’s just that the markets are different and will stay different for some time because of workflows and the laws of nature, but the laws of nature don’t seem as safe anymore.

Currently, the best editing software for social media appears to be CapCut as its ease of use for the power it provides is miles ahead of anything else.


Cell phones already killed compact point-and-shoots with their small lenses.

But DSLR's with their large lenses aren't going anywhere, because physics. If you want to capture high-quality footage under low light conditions, or work with a variety of lenses, the tiny aperture on a phone is never going to be enough.


The RICOH GRIII/GRIIIx is still alive and kickin'. Really nice sensor, super sharp lens, and small enough to fit into a pocket. I don't really use my iPhone camera anymore.


Agreed. I keep my iPhone in my right pocket and my Ricoh GRIIIx in my left pocket whenever I go out. It’s such a fantastic camera given its size, especially with the APSC sensor in such a compact body.

I also have a Sony A7C but I haven’t used it since getting the Ricoh. Being pocketable is a massive factor in how much I use a camera.


Setting aside the existing pro video pipeline... it's for this reason that if I were Meta, I'd be terrified about what Apple does next, from the perspective of photorealistic capture for VR/AR applications.

Even if the next-gen Oculus had parity with Apple Vision Pro in how vividly they could display VR/AR content - and I doubt this is the case - only Apple (perhaps among any company in the world) can mirror that with a battle-tested supply pipeline for creating pro-level video capture equipment and for integrating those sensors into consumer devices at scale.

While I'd leave the branding to better minds than my own, I'm bullish about Apple's ability to make an "iPad Vision Pro" that has two cameras at eye distance apart as well as laser rangefinding. That would allow for binocular Apple Log capture, and with the advent of Gaussian splatting for point cloud rendering, and increasingly better generative AI for inferring colors/textures of occluded points in the point cloud, you could have professionally color-graded interactive scenes. All that's missing is better ergonomics for this workflow in DaVinci Resolve etc., and one imagines Apple's war chest could go a long way towards incentivizing this.

Apple products will be where high-quality VR content is created, proofed, and consumed. They're not rushing because nobody else has a hope of being close.


> Apple products will be where high-quality VR content is created, proofed, and consumed

Eh... I think this is a bit like claiming "Apple hardware will be where high-quality video games are created, proofed and consumed" in the 90s. You'd be right given the circumstantial evidence, but the industry doesn't really want to play ball. Besides AppleTV and Apple Arcade, I don't think Apple has a huge constituent of VR developers the same way they did for Apps. Simply building the infrastructure won't be enough.

And plus, I don't think your argument rules out a world where Vision Pro is used for recording and Quests are used for content consumption. Unless Apple intends to reinvent the wheel again for 3D video, nothing should stop it from being playable (or at least portable) on other platforms. For most people, the $3,000 price difference will probably not be worth the recording capabilities (especially if they already own an iPhone).

Anything could happen, I guess. Even if you're entirely right though, I don't think Content Creation will be the killer app you think it is. Games and video moves millions of units; productivity and capture are both unproven features poised to reprise Hololens' failure.


I think what is terrifying is that they're better enough to kill the details.

Sort of a contrived example... you're a pro and let's say you NEED a headphone jack, but apple just killed headphone jacks.

But more indirectly, they killed the lower volume, higher margin folks with alternatives that offer a headphone jack, and maybe even XLR headphones and microphones.

An analogy might be tesla giving you a 90% better car experience, except they have killed off the dashboard. (now they've killed control stalks like PRND and turn signals)


I don't know. If I were a pro like that, I would just have a dongle for that headphone jack. It would probably also have a better DAC than iPhones ever had.


A big factor in buying a "pro" camera is controls. It's really difficult changing the focus in a controlled way while shooting with a phone. While in theory you can imagine Apple giving you API control for that and hooking it with an external focus pulling device, it's still a sub-optimal solution.


Cinema is safe from the optics point of view as achieving some of the effects of large sensor + large lens is impossible with a phone size camera. But Apple has it cracked and they could easily crush that market. They have a great sensor with enough resolution per inch, great dynamic range, ability to produce lenses with super low defects and have enough processing power. Sensor wise ~100 megapixels should be enough to replicate fine grain of a good movie film and iPhone 15 sensor's dynamic range of 12-14 stops is on par with film already.


Apple doesn't build sensors. Sony builds the sensors for Apple. https://twitter.com/tim_cook/status/1602516736145858561

And it turns out, Sony already has significantly better sensors in their cinema cameras than what they sell to Apple, which is why almost half of Oscar and Emmy nominations are filmed on Sony Venice. Even the low end of their camera line is extremely capable, e.g. the FX3 which shot The Creator.

Apple is far away from actually competing in that market.


Just wanted to clarify for others that FX3 is the low end of their cinema line, not their camera line.


You're contradicting yourself: can they compete with large sensors + big lenses or not?


The market for 'actual' pro & prosumer cameras and such is pretty tiny. I think they'll be pretty safe for quite a long time.

But they have pro video editing features! Yes, but it's a subfeature of their general platform, so they can justify a hardware feature like that even at low usage, since it will also be useful for their entire userbase: everyone takes videos with their hardware and watches video on their devices anyway.


I don't see this as a high risk. Rather adding log just increases the choices for different types of filmmaking and adding it to the professional workflow. The iPhone hasn't killed the professional stills camera market, instead many photographers have implemented it as a supplemental tool into their work where a larger camera is not suitable.

At the end of the day, a professional camera with good glass and a larger sensor gives you much more control of an artistic vision, and better performance in challenging imaging conditions. I've never seen a cellphone camera that gets anywhere near the image quality I can get from my mirrorless camera in still images. This is likely to be even more apparent in video production due to how the cameras are used.


They owned the editing market with Final Cut and completely dropped the ball with Final Cut X, to the point that they had to start selling the old version again. Then Premiere came back from the ashes and took the throne.

One of the most glaring mistakes Apple did to its Pro market. And they did quite a few.


I think FCP is still quite popular in the "YouTuber" market, and it seems Apple also targets their marketing a bit more towards it.


It is. It’s just that it’s a fraction of what it used to be.

They grabbed most of AVID’s and Premiere’s share.

And then Final Cut X happened, or iMovie Pro, as some depreciatively called it


It's always surprised me that there's not more interest in log-scale/floating-point ADCs built directly into camera sensors. Both humans and algorithms care a lot more about a couple-bit difference in dark areas than light, and we happily use floating point numbers to represent high-range values elsewhere in CS.


There was a company that did this circa 2003 - SMaL. Their "autobrite" sensor is built to capture log-scale natively. They've switched owners twice since then, but it seems like they're getting more traction in car vision systems than in professional video.

https://www.vision-systems.com/cameras-accessories/article/1...


Blast from the past. You could do things like that [0] in 2005... Autobrite was the first thing to go out the window; once we knew how the chip worked we did our own exposure control...

[0] https://youtu.be/0UaTYX-ygG8?feature=shared


From an analog design perspective, I don't think that makes sense. Not that I'm an analog designer, but I worked closely with them as a digital designer on CMOS camera sensors.

You're already extracting as much information as you can from the analog signal on the least significant bits. It's not like designing a log-scale ADC lets you pull more information from the least significant bits. So you don't really have anything to gain. Why make a more complicated analog circuit to extract less information? It's generally better to let the digital side decide what to keep, how to compress the signal, etc.

And I should mention that CMOS camera sensors can often do a lot of digital processing right there on the chip. So you can do log-scale conversion or whatever you want before you send the data out of the CMOS camera chip.

It might be possible that you could reduce the power consumption of a SAR (successive approximation) ADC by skipping AD conversion of less significant bits if you have signal on the more significant bits. But I doubt the power savings would be very significant.
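For reference, a toy model of what a SAR conversion does (a plain binary search on the input voltage; real converters do this with a capacitor DAC and a comparator, this is just the control logic):

    def sar_adc(v_in, v_ref=1.8, bits=12):
        # Successive approximation: resolve one bit per comparison, MSB first.
        code = 0
        for bit in reversed(range(bits)):
            trial = code | (1 << bit)
            if v_in >= (trial / (1 << bits)) * v_ref:   # comparator decision
                code = trial
        return code

    print(sar_adc(0.9))   # 2048, i.e. half of the 0-4095 range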


A lot of the processing steps expect linearity and would have to be reworked for floating point or log scale data. Most HDR sensors are using some kind of logarithmic compression for sensor readout, but I've never really heard of a floating point ADC. Google seems to suggest they're not readily available.


Yeah, there isn't one; it's not that simple to get that much dynamic range, even at the sensor level.

A linear ADC with enough range is usually fine; you can do the math later. But maybe for this case it needs a non-linear element before the ADC? (No idea whether log recording needs anything at the HW level.)


Is the quantisation error on a modern 14-something bit sensor really that big of a problem compared to something like the inherent shot noise of dark areas?


There is no floating-point ADC, just a stereo pair of ADCs assigned to two gain levels whose outputs are stuffed into a float.

Hardware-accelerated HDR on cameras is commonplace these days, especially in dashcams and CCTV cameras.


Some sensors do this internally, though it's unusual. The rest of the high-end ones apply curves manually in software directly at the egress of the sensor. The reason they don't in all cases is that it complicates black-level correction, gamut transforms and demosaic operations (without some assumptions).


I feel like professional/prosumer photogs, aka the kind of people who buy fancy SLR cameras and serious business lenses, probably already know this stuff. I also suspect that the vast, vast majority of phone users just want subjectively good-looking photos.


Yeah, 100%. About 99% of the customer base just wants to take a good photo with the smallest effort possible. Which makes it even more remarkable the company cares enough to include this kind of functionality in a consumer product.


> Which makes it even more remarkable the company cares enough to include this kind of functionality in a consumer product.

Maybe people are just starting to get sick of this argument being used against everything.


How is this an argument against anything?


Whenever someone wants a new feature or gets disrupted by the loss of a previous feature, people go, "you have to think of the consumers. Consumers don't use those features, because consumers are stupid. I can't see why anyone would ever add/keep a feature that's not going to be used by those consumers."

The comment I replied to didn't go nearly that far, but it's an argument I've seen so often that I feel compelled to point out that "consumers" are not the only target audience for Apple - they are trying to also market themselves to creators, as well as professionals, and they absolutely know the difference and notice when things like these become available.


> people go, "you have to think of the consumers. Consumers don't use those features, because consumers are stupid. I can't see why anyone would ever add/keep a feature that's not going to be used by those consumers."

Well, I don't think "people" generally do that. It's nothing to do with stupidity; just lack of need and interest.

> The comment I replied to didn't go nearly that far

Not just "didn't go that far", it wasn't anything to do with that. It was just articulating pleasant surprise.

If I were to return this thread to its objective origin, I would agree that if you're selling a mass-market device, it's surprising to cater to a fractional percentage of that user base. I don't see how that's contentious.


> if you're selling a mass-market device, it's surprising to cater to a fractional percentage of that user base. I don't see how that's contentious.

I argue it shouldn't be that surprising because a lot of people find value in this 1%, and each person wants a different 1%.

But maybe it's surprising because most companies are stupid and don't realize this.


Apple sells a very enticing mix of feeling and product, the good old Jobs distortion field. They always had an enormous influence on what's cool, what's premium, what you should want, what's the baseline, etc.

And it's entirely possible that they got to the point that they are now saying pro stuff is cool. Even is 99.x % of users won't really use it.


See, but you don't necessarily want full-range logarithmic.

If your darkest pixel is halfway down the ADC range, on linear you're throwing away one bit, but on logarithmic you're throwing out way more bits. Just using a higher-bit linear ADC and then converting to logarithmic in post-processing seems more sensible. Hell, you could even go signal magic and merge a few photos with different shutter speeds together to get the most detail.

Also, a proper logarithmic converter like the AD8307 costs like $20, so I'd assume doing that would bring the sensor price way up if you needed a bunch of them.


Most recently, microphones and recorders have started using it for recording sound.


From my understanding, the ADCs are still fixed-point and linear. Two (or more) then run in parallel over different signal levels to produce the 32-bit float output.

Encoding audio with log-scale companding has been around for some time too (since the 1970s), with A-law and mu-law in G.711.
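A naive sketch of how two fixed-point gain paths might be merged into one float stream (purely illustrative; the gain ratio, crossover point and lack of calibration here are made up, and real recorders handle this far more carefully):

    def merge_dual_adc(low_gain_code, high_gain_code,
                       gain_ratio=256.0, full_scale=32767, clip_margin=0.95):
        # Prefer the high-gain path (better SNR) unless it is near clipping,
        # then fall back to the low-gain path scaled to the same reference.
        if abs(high_gain_code) < clip_margin * full_scale:
            return high_gain_code / (full_scale * gain_ratio)
        return low_gain_code / full_scale

    print(merge_dual_adc(low_gain_code=20000, high_gain_code=32767))  # loud: low-gain path
    print(merge_dual_adc(low_gain_code=3, high_gain_code=800))        # quiet: high-gain path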


It doesn't really matter HOW they do it, as long as you get the advantages of float encoding (practically infinite headroom). Of course if you zoom in enough there will be something in there that uses integers, but this would be true for e.g. a floating point adder as well.


It should matter that the "practically infinite headroom" comes from the raw samples carrying that much dynamic range, rather than from the output format being float.

(Does that mean there is a crossover in the middle of the amplitude range where the LSBs of one of the ADCs poke through, in a hypothetical ultra-naive implementation, I mean?)


>the advantages of float encoding (practically infinite headroom)

The best implementations have a dynamic range of about 140-150 dBA. Floating point is not needed to achieve that and it isn't always used (look at Stagetec products).


I'm not entirely sure what you mean by floating point for an ADC.

From a super high level, all ADCs do is quantize an analog signal. They take in a voltage from, say, 0-1.8 V and quantize that on a 12-bit range, returning a value from 0-4095. You could build one that scales this range with non-linear steps, but this doesn't add any value. We won't get more accuracy at smaller steps. Our noise and accuracy problems won't be solved by this, as they are due to thermal noise or mismatch; quantization noise is not the problem. (We already build segmented ADCs to try and do this.)


That was in reference to the overall ADC stage in the abstract, not a specific component. As you note, quantisation still maps to integers over some range of the input signal.

It's not my area, so I would love a correction from someone who is deep in the space. My current perception is that the 32-bit float hype in the audio capture world is the marketing reality distortion field in effect. Having that representation extend further upstream than the DSP or DAW makes sense, but it's not magic. Even in 32-bit float there are only 24 bits of precision (assuming IEEE 754).

What is interesting, useful, and lost in that noise is that devices have refined the multi-ADC design to enable full usage of that precision, matched to the overall dynamic range of the analogue front-end. Previously the ADC would be the bottleneck, but that has now shifted to the upstream circuitry or transducer.


I didn't know it could record straight to USB-C storage! That gets rid of a major reason to spend crazy money on a 1TB phone, and it's definitely a game changer for anyone shooting 4K ProRes.


AFAIK it’s actually not even possible to record that directly to the phone; it has to go to an external USB-C drive. If I had to guess, it’s probably because of overheating concerns with the high write rate.


The article is slightly misleading here.

The 4k60 ProRes mode is not available for shooting on the 128GB model until an external drive is added, but for any larger-capacity iPhone Pro, the mode is available for shooting without the external drive. This doesn't affect the Pro Max, as that isn't available in a 128GB configuration.

Notably, a similar limitation existed with the iPhone 14 Pro for 4k30fps; at the time, the reasoning was that it simply fills the device too quickly to be useful.


I have a 512 GB 15 Pro Max and it won’t let me record 4k60 ProRes to internal storage


You are 100% correct and my original data is wrong. (imore.com)

Apple’s website lists the external drive requirement for 4k60FPS:

4K at 60 frames per second (fps), iPhone 15 Pro and iPhone 15 Pro Max only, when using an external storage device that supports speeds of at least 220MB per second and maximum power draw of 4.5W

Unfortunately there is no way to edit my above comment, so my apologies for being the source of incorrect information here.


That requirement is for 4k60 Log only. 4k30 log will write to disk but takes up around 100mb/sec. From some videos I shot last weekend.


To capture any 4K ProRes footage with 128GB 15 Pro you need an external drive, this is presumably because 128GB model has a single memory chip and data write speeds are insufficient.


I might be doing something wrong, but when I tried to set BlackMagic Camera to ProRes Log 4K60 it had no problem recording it. I was even able to apply different free LUT files and export it to photos. It is 1.6Gb for 8 seconds, but did not require an external disk. [1]

[1] https://imgur.com/a/9vUdPcg

Edit: iPhone 15 Pro 128G


Huh, interesting, I just tested it too and does work in Blackmagic camera (iPhone 15 pro 1TB). I guess it’s only a limitation of the official Camera app while 3rd party apps can do otherwise.


Wow, very nice, I didn't know. Thanks for this feedback!


Is that Mb or MB?


4k30 Log is approx 100MB/second


OMG I never knew this was even a thing. You all just made my life better


> With its high bit depth and dynamic range, log footage has many of the benefits of raw. But Apple Log is not raw, and not even “straight off the sensor.” It’s still heavily processed — denoised, tone-mapped, and color adjusted.

I wonder if this is because, at the end of the day, it is still a tiny little camera with a small sensor and small lens, and so with none of the processing magic the image would look pretty terrible under most circumstances.


RAW video isn’t like RAW photography. The sheer size of raw footage is insane - it’s normal for cameras to be unable to record RAW footage natively without an external recorder.

Thats not to say processing isn’t part of it, but even $2k mirrorless cameras don’t record RAW video internally.


This is how all phone cameras work now.

The sensors and lenses are small, and now processors are very fast. And as it turns out, the vast majority of people do not want photos or videos that are "accurate" or "real". They want ones that look good.

So, processing has been the name of the game. It's all about making an image people will like to look at, regardless of how different it is from reality.


Probably true, and I enjoy using my phone for social snapshots. But the way in which everything ends up looking "iphony" annoys me for anything where I care about the image, and so I'm shooting more with my dumber digital cameras that are about documenting what was actually in front of me rather than spinning up a "good looking" image. I like being able to remember what a scene actually looked like, rather than what the AI in my phone thought it should look like ; )


Even high-end cameras that can shoot in Log, like the Sony A7 series, apply some noise reduction on their end. This is important for most compressed formats.

However, most people would be horrified to see how noisy and unsharp the images from top cinema cameras are when most post-processing is disabled.


Log seems like a strong reason to finally switch from Android to iPhone if you're a photography/filmmaking enthusiast like myself. The ecosystem is so much more mature and the gap seems to be growing not shrinking.

Android has Raw Video with MotionCam which also produces insanely good results¹ (even better than iPhone's ProRes video), but everything else just sucks.

[1]: https://youtu.be/O5fnGDR4i9w?feature=shared


> a strong reason to finally switch from Android to iPhone if you're a photography/filmmaking enthusiast

Correct me if I'm wrong, but there's nothing stopping Android from supporting Log (or similar). I'm not a video engineer, but it really doesn't seem so magical that it couldn't be supported outside of the iPhone 15, right? My guess is that if this gains any real traction it'll show up in the next Android flagship.


There are several apps on Android that do this already. Since 2021 at least. https://youtu.be/UEedYitrSiw?si=Ufj0HXW_07PfXIxg


I think you're spot on, there just needs to be enough demand for manufacturers to compete on it.

But the fragmentation does work against it. If some company does it it would be limited to their camera app and their format.

Would be interesting if some company just decided to put a C-mount on their phone so you could use actual proper lenses...


> Log seems like a strong reason to finally switch from Android to iPhone if you're a photography/filmmaking enthusiast like myself

On Android you have mcpro24fps app that supports multiple log profiles, shooting 10 bit video and more.


I've been a long-term mcpro24fps user, and a Filmic Pro user before that. It's a great app, no doubt about it. The issue is not the app but the OEMs, who make things difficult, artificially limiting the capabilities of the devices and even removing features in updates. Nothing is consistent and each device works differently from the next one, even from the same manufacturer. A long-running joke in the McPro24fps Telegram chat is to never upgrade!


MotionCam is great! They’ve been flying under the radar of RED lawyers (patent for compressed raw video) - long may it continue


I'm the dev of MotionCam. AFAIK the app is not infringing on that patent because I use my own form of lossless compression.


If RED has a patent granted with a claim on compressed RAW data streaming/storage, then it doesn't matter which algorithm. (Though of course one could argue it's too broad, but it's not cheap to make this argument.)


I am not a lawyer but I believe their patent is regarding visually lossless compression of RAW data. MotionCam uses a form of bit packing to compress frames in real time losslessly. AFAIK that is not the same thing and does not infringe on the patent. Again I could be wrong and if I turn out to be wrong I can always disable compression entirely or just run it through zstd.

I doubt their patent extends to just compressing some integers, because then zipping a RAW frame would not be possible, which is clearly nonsense.
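For what it's worth, here is a toy version of what lossless bit packing of raw frames can look like (packing pairs of 12-bit sensor values into 3 bytes instead of 4; a generic illustration, not MotionCam's actual scheme):

    def pack_12bit(samples):
        # Assumes an even number of 12-bit samples.
        out = bytearray()
        for a, b in zip(samples[0::2], samples[1::2]):
            out += bytes([a >> 4, ((a & 0xF) << 4) | (b >> 8), b & 0xFF])
        return bytes(out)

    def unpack_12bit(data):
        vals = []
        for i in range(0, len(data), 3):
            b0, b1, b2 = data[i], data[i + 1], data[i + 2]
            vals += [(b0 << 4) | (b1 >> 4), ((b1 & 0xF) << 8) | b2]
        return vals

    frame = [0, 4095, 123, 2048]
    assert unpack_12bit(pack_12bit(frame)) == frame   # lossless round trip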


> visually lossless compression

Does it mean they apply some kind of human perception model (like audio codecs apply a psychoacoustic one) to determine what detail can be omitted without future viewers potentially noticing the difference?

> zipping a RAW frame would not be possible

why? I mean it's just a big blob, if there's a lot of similar substrings in it, it might give a few percent compression ... also, nonsense doesn't mean non-patentable, right?


> Does it mean they apply some kind of human perception model (like audio codecs apply a psychoacoustic one) to determine what detail can be omitted without future viewers potentially noticing the difference?

I have not looked into it too deeply but it appears to be based on wavelet compression (more or less a copy of JPEG2000). They are able to achieve much better compression ratios. I am restricted to lossless compression (and in real time on a mobile device).

> why? I mean it's just a big blob, if there's a lot of similar substrings in it, it might give a few percent compression ... also, nonsense doesn't mean non-patentable, right?

What I mean is that it is unlikely that any form of compression of RAW video data is encompassed by their patent. But who knows.


Perhaps it could also be argued their patent covers cameras and their manufacturers, not 3rd party software that users can install on their phones? Also don’t think MotionCam has enough users for their lawyers to care. Either way thank you for your software, it’s dope


Just chiming in to say thank you for doing such a product.

Have you ever considered reaching out to any mirrorless manufacturer (maybe some form of partnership?) about recording its cameras' sensor data? I have a Nikon and I'm still salty my Z 6I doesn't have real RAW :)


Thank you :)

No I have not, it is likely not possible to do that anyway. MotionCam works fairly well because smartphones these days are very fast and have very fast storage. I imagine dedicated cameras are mostly made up of specialized hardware that is fairly restricted.


Gosh why didn’t Sony’s attorneys think of that.


I am not an influencer. I am not a fashion model. I am not an interior designer. I don't use my cellphone camera to generate "content". I use it to document things. I need it to take clear pictures that accurately represent things that I see. We are now moving away from auto-focus and auto shutter speeds toward on-the-fly retouching, editing, of material by the camera. This is dangerous. Pictures taken by such cameras can no longer be considered accurate representations. Correction of shadows, the replacement of dull color with vibrant, the smoothing of textures ... every photo is now a crafted work of art by the machine. They are a distorted representation. This will come back to haunt us.

Think of this: a cop body camera that auto-adjusts faces to display them more clearly at night. Sounds like a good idea. Then something happens. The cop says "I couldn't see the guy's face" but the body camera shows the face clear as day. Yes, the camera did take a more clear and useful photo, but it is not a proper depiction of the reality experienced by the officer.


> every photo is now a crafted work of art by the machine.

This was always the case. Unless you have a very specific camera setup where you're trying to avoid this, there have always been certain characteristics that come through in photos from cameras. In fact, it's the main selling point of some cameras. Hasselblad, Polaroid, Canon, Sony all have their own 'looks' when it comes to output.

> The cop says "I couldn't see the guy's face" but the body camera shows the face clear as day.

I'll use a similar but opposite argument here. Ever since iPhones came out they could never really capture dark-skinned people as we see them through our eyes. Unless you had perfect lighting, you could clearly see issues with the sensor catching the contrast in their face. With all the retouching you speak of, iPhones have gotten much better at showing some people more closely to how we see them in reality. So when that cop claims "I couldn't see the guy's face, the damn camera is too good!", I'd be very hesitant to believe him.


> We are now moving away from auto-focus and auto shutter speeds toward on-the-fly retouching, editing, of material by the camera

This is a great point but it's not what the article is about. This is about bringing existing features of digital cinema cameras to a portable phone.


Yup. And the move from a camera to a cinema camera incorporating cinematography trickery represents a marked change in what a personal camera is and does.


This isn't any trickery; this is a file format, one that actually avoids the uncontrollable computational photography that is done by default and that changes the nature of the picture without the user's input. This helps preserve what we think of pictures as: a digital capture of light, rather than what they're becoming, which is the product of details hallucinated by a neural network.


> I need it to take clear pictures that accurately represent things that I see. We are now moving away from auto-focus and auto shutter speeds toward on-the-fly retouching, editing, of material by the camera. This is dangerous.

You could argue that up until now you were not able to take photos or video that accurately represented the world you see, but instead only saw it through the rose-colored lenses of the device manufacturer. The photos and videos that you take today with your phones or cameras have distortions applied automatically based on presets provided by the software used to capture the media. Sometimes you get options like Vibrant, Indoor, Portrait, and Landscape mode to choose how the images or video are manipulated. You don't get to see what the camera actually saw, only what the device manufacturer wants you to see.

Log video is like Raw photos. As this capability becomes more prevalent, I could see it becoming a requirement for criminal investigators and other to capture evidence using a Log or Raw mode.

What I would argue is that, if it's not there already, we need signatures and metadata stored in the EXIF of captured photos and video that tell how the image was captured. With that you could determine to what extent the media has been manipulated.


Rose colored filtering across an image is one thing, something we all understand. Nobody would say that a black-and-white photo is an accurate depiction but we all understand what a black-and-white photo is. Alterations to specific aspects of a scene, per-pixel changes that are not used across the image, are something else. A camera that detects and alters images where people close their eyes or fail to smile, that will not be recognized. A camera that corrects a scene to make it look as if it were in daylight rather than interior office lights, that too will not be recognized by the vast majority of viewers.


> A camera that detects and alters images where people close their eyes or fail to smile, that will not be recognized. A camera that corrects a scene to make it look as if it were in daylight rather than interior office lights, that too will not be recognized by the vast majority of viewers.

Those are two very different things and neither is new. Of the two, the latter is closer to taking a sepia or black-and-white photo. It's simple grading that's been done for decades. Log video is mostly an extension of Raw photography, which has been available to consumers for decades. The former is the more concerning technology, and it's been available for at least 5 years on consumer-grade devices.


This has famously already been raised in court during the Rittenhouse trial:

https://journals.library.columbia.edu/index.php/stlr/blog/vi...

And discussed on HN here: https://news.ycombinator.com/item?id=29187820


Your camera (including film cameras) never could take a fully accurate picture to represent what you see. Digital sensors and film don't perceive what our eyes do. It's always been up to you, the photographer, to ensure that. If you choose to shoot on auto, that's your choice to let the camera guess at the accuracy. Most people don't like actual reality, so they under-, over-, long- and short-expose to choose what reality they represent. They light things artificially and they put makeup on. They might even create staged scenes. Even in pure film days, humans were altering the output. Whether it be for realism or artistic purposes, dodging/burning were effectively retouching practices in film.

Yes, smartphone cameras are using computation to get a more "correct" output, unless it's marketed as a feature to alter the image, such as face smoothing. Camera makers are always trying to make their camera sensors (or films) better perceive the range our human eyes can, or at least give us the choice, through the data, to make the decision on realism or art.

Your bit about the police officer is 100% irrelevant to your main point.


I was trying to take a passport photo, and one of the requirements is "has not been touched up". But when I took the photo with my phone, I noticed that it had been very helpful in touching up my face by removing almost all of my wrinkles and making my skin nice and soft. Even with all "enhancements" off. This was on a Samsung S10. I tried with an iPhone SE; it was slightly less visibly touched up, so I used that, but it still definitely had a "beauty" filter built in. It's probably implemented in an ASIC, so you basically can't turn it fully off.


If your need to document the world accurately is important to you, you should be using a dedicated device for that purpose.


Almost all the benefits mentioned in the video are (a) lack of post-processing and (b) high dynamic range. Is that what "log" means in videography?


Log is lower contrast so it's less likely to clip (be a fully saturated color or pure white or black). And clipping inherently limits your max dynamic range.

Log also means a "look" is not baked into the image so, since you're starting from scratch, it's 1) easier to tweak the images so you can cut between two cameras from different manufacturers without distracting differences and 2) you can give the image more of your personality.

As a general note, I've found that in the world of "cinematography", tech terms aren't used very rigorously and there's a lot of cargo cult which comes from the benefit of one tech being conflated as a benefit of something else. It's often hard to sift through the noise when learning.


But clipping occurs before the log transformation. The sensor's ADC is still linear and has a fixed dynamic range regardless of the output encoding format.


Significant clipping happens there yes but more clipping happens when the "look" is applied and contrast is added.


In videography the term "log" is heavily overloaded and you'd want to ask for more detail in order to figure out exactly what is meant.

A pixel value, be it integer or floating point, means little on its own. There's a context for that value which is a color space. In the typical process, you have several color spaces in play: the camera has one for capture. There's one for color processing (the "working" space). And there's one for the display. When a pixel goes through the pipeline, it's processed via color space transformations.

In the "classic" color spaces, the pixel values have a linear relationship, and all of them carry the same amount of information. The "log" color spaces all have a non-linear (gamma) curve: they retain less information at very low and very high pixel values, but subsequently retain more information in the middle. It's a form of compression.

The human eye doesn't respond equally to all levels of brightness, so throwing away detail at the ends for more detail in the middle is usually a great choice. We retain information in the signal at the brightness level where the eye is able to perceive small details and texture, while throwing away information in the signal where it isn't.

We can now map more dynamic range into the same number of bits, due to our non-linear compression. How large a dynamic range we can map is given by the underlying color space we are operating in.

If you go up in camera quality, you will typically see pixels use 10bits or more for their values. Combined with a log-curve, this leads to more information density, which allows capture of an even higher dynamic range. In turn, post-processing can now fix e.g. exposure to a much larger extent.

Finally, a LUT is a (piecewise) linear approximation. A "real" color space transformation will use the underlying mathematical curves for much greater precision.
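To make that last point concrete, a minimal sketch of a 1D LUT (sampled from a toy curve and applied with linear interpolation; real grading LUTs are 3D and much denser, but the approximation error is the same idea):

    def curve(x):
        return x ** (1 / 2.4)            # the "real" transform

    LUT_SIZE = 17
    lut = [curve(i / (LUT_SIZE - 1)) for i in range(LUT_SIZE)]

    def apply_lut(x):
        # Piecewise-linear interpolation between LUT entries.
        pos = x * (LUT_SIZE - 1)
        i = min(int(pos), LUT_SIZE - 2)
        frac = pos - i
        return lut[i] * (1 - frac) + lut[i + 1] * frac

    x = 0.03
    print(curve(x), apply_lut(x))   # close, but not identical: the LUT only approximates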


>In the "classic" color spaces, the pixel values have a linear relationship

Almost all color spaces (e.g. srgb or in the video world bt.1886) use a non-linear transfer function? I believe the difference is that gamma and log are different types of nonlinearities, see https://news.ycombinator.com/item?id=37843965 and see https://www.researchgate.net/figure/Linear-response-dashed-v... for an image comparing the two with "linear"

What's confusing is that many people will justify gamma encoding with the claim that visual perception is "logarithmic". I think this is misleading, because the perceptual justification is actually a power law (Stevens' power law) as contrasted by the opposing view that perception is logarithmic (Weber-Fechner law, see https://www.appstate.edu/~steelekm/classes/psy3203/Psychophy...). In practice I believe the actual justification for it was that it happened to match the transfer function of CRTs, and these days it's mostly kept around for compatibility, and as an optimization to avoid wasting bits (whether or not it truly fits the human model of perception).

As mentioned by https://computergraphics.stackexchange.com/questions/10315/t..., the real reason log encoding is nice is that each stop of light gets roughly the same number of bits. (Log encoding probably also isn't too bad a fit in terms of perception. In an alternate world where we weren't burdened by CRT baggage, it might have replaced the now-standard 2.2-esque power gamma.)

Also the only reason why log encoded video "looks flat" is because traditional video workflows are not ICC color managed. If you properly applied the inverse transfer function (as any color managed system would automatically do) to display it on an e.g. sRGB screen, the video would appear close to what it did in real-life.
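
A quick back-of-the-envelope illustration of the "same number of bits per stop" point, in Python (the 12-stop range is an assumption, and a pure 2.2 power gamma stands in for the non-log encoding):

    import numpy as np

    LEVELS = 2**10 - 1        # 10-bit code values
    STOPS = 12                # assumed scene dynamic range
    x_min = 2.0**-STOPS       # darkest representable linear value

    def gamma_encode(x):      # pure-power 1/2.2 gamma (not exactly sRGB)
        return x**(1 / 2.2)

    def log_encode(x):        # toy log encoding spanning the assumed 12 stops
        return (np.log2(x) - np.log2(x_min)) / -np.log2(x_min)

    for stop in range(STOPS):
        lo, hi = x_min * 2**stop, x_min * 2**(stop + 1)
        g = LEVELS * (gamma_encode(hi) - gamma_encode(lo))
        l = LEVELS * (log_encode(hi) - log_encode(lo))
        print(f"stop {stop:2d}: gamma ~{g:5.1f} code values, log ~{l:5.1f}")

    # Gamma piles code values onto the brightest stops; log gives every stop ~85.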


I also found this page which seems to have accurate information (and perhaps the only page I could find acknowledging that compares both log and gamma encodings): https://imatest.atlassian.net/wiki/spaces/KB/pages/114161421...


No, ”log” just means some form of logarithmic response curve when encoding color data. You don’t necessarily get better dynamic range per se, but you get a more useful distribution of the light samples your sensor is taking.


> Is that what "log" means in videography?

> a) Lack of post processing

No. Absence of processing (modifications to make it look 'better') is the default for all non-consumer devices.

> b) high dynamic range

Yes. In practice log is about choosing which bits of color information to retain and which to throw out, to optimize for space.

Log optimizes for retaining detail in very dark and very bright areas by sacrificing detail in the midtones.

Non-log optimizes for midtones. That's all it is.

So if you have a high contrast scene (bright blue sky, someone sitting in the shade), you'll want to use log. In an average/regular contrast scene, you use non-log, that way you get more detail in the midtones.

In photography, there is no need to optimize for space (video is at least 24 frames/sec, photography is a few frames/sec at most, usually), so log is not a thing - we just capture all the things, all of the time.


Thanks for contributing this comment. It's what finally made it click for me, and how logarithms might come into the picture. Especially the difference between video and still photography was helpful to me as a still photographer!

So basically, when you can't afford the space/bandwidth requirements of "raw" data for video, you need to convert the sensor readings to an actual video format right away (the equivalent of "shooting jpeg" on a still camera.)

If you do that conversion using a monotonic concave function (e.g. log) you do get an actual video, but it looks crappy because the tones are not what we would expect. However, it also retains more of the low-end distinctions of the raw data so it's more flexible in processing.

Hypothetically, I could do the same with still photography, by taking the raw data and converting it to a crappy-looking jpeg and distribute that to someone else, who would then have more freedom in processing than with a regular jpeg, but less than the raw data. I think I got it!


A little bit. The log format is non linear. This means there are more details in the shadows relative to the really bright areas. This mimics the human eye and brain which also do not have a linear range of sensitivity.

Basically, the common unit of light in cameras (a stop) is one click on the aperture wheel. E.g. going from f/11 to f/16 halves the amount of light. Some cameras of course have a few settings in between. It looks linear to us but it is effectively logarithmic. The dynamic range of the human eye is much larger than the typical camera, screen, or print medium. The human eye has a range of about 20-22 stops (between black and white). A good camera might get between 12 and 14 stops. A decent screen might get to something like 8-10 and print medium is more like 5-7. Taking photos and shooting videos involves a lot of creative choices about what looks natural to us. HDR is basically taking and combining multiple exposures in a way that still looks natural to us on a medium that has less dynamic range than our eyes (-ish, a lot of HDR photography looks a bit unnatural for this reason).

Digital photo processing is about compressing and moving light around to make the most of the much more limited dynamic range of the screen or print medium you are targeting relative to the camera that you used to capture that.

When you do that, most of the interesting information is going to be captured in the darker portions of the image. You typically expose for neutral grey values which is only about 18% of the light. That means half of the darker information (shadows) is in that 18% range of values. And the other half is in the brighter part. Except our eyes are much more perceptive of the darker bits. So, a linear format is not ideal to store that. A log format allocates more bits to the dark half and less to the other 82%. That's a good thing because that allows you to do things like brighten shadows and pull out detail there.

The log format does this by applying a log function to the raw sensor readings. That's why the footage looks so flat: all the values end up relatively close to the 18% mark (neutral). You "undo" this by applying a suitable LUT that remaps the values. You deepen the shadows to near black and brighten the bright stuff to near white. The difference is that you now have full control over this process; you can move the white, grey, and black points around. And you can apply color math to the log values before you apply the LUT. This is not that different from how you'd process a linear format, except now your starting point is better, as you are using more bits for the darker parts of the image than for the lighter parts. This gives you more of the captured dynamic range to play with in post processing.

The weakness of the iPhone is that while it records in a log format, it's not really capable of switching between LUTs in camera while you are shooting. I'm guessing this just takes too much CPU/battery. So you have to wait until post processing to see what the end result is going to look like. Some high end cameras have a lot of in-camera processing that you can tweak in post.
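
A tiny numerical illustration of the "more bits for the dark half" point, in Python (the 12-stop range and 10-bit depth are just assumptions for the arithmetic):

    import numpy as np

    LEVELS = 1023        # 10-bit code values
    MID_GREY = 0.18      # the 18% neutral grey mentioned above
    STOPS = 12           # assumed dynamic range
    x_min = 2.0**-STOPS

    # Linear encoding: code values are proportional to light, so only ~18% of them
    # sit below middle grey.
    linear_below = MID_GREY * LEVELS

    # Toy log encoding over the assumed range: code values are proportional to
    # stops, so the darker part of the image gets the lion's share.
    log_below = (np.log2(MID_GREY) - np.log2(x_min)) / -np.log2(x_min) * LEVELS

    print(f"linear: ~{linear_below:.0f} of {LEVELS} code values below middle grey")
    print(f"log:    ~{log_below:.0f} of {LEVELS} code values below middle grey")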


Who is the target audience? Most Apple users won't spend time in postproduction colour grading their footage. Pros will stay with dedicated technology made for cinematography.


“Pros” is a wide space these days. Hell, my mate has a film released on hbo max which was entirely shot in 4K on iPhones probably 6 years ago now.

Think of the sheer amount of ‘content’ (used pejoratively) these days. That is not being made by traditional cinematographers. It’s videographers, maybe with a pro camera, but maybe with an iPhone, or maybe one pro camera and an iPhone or two for backup. Think of weddings or similar as well, massive demand. Apparently everything needs to be video so why not this. As an aside, aforementioned director shot our wedding as a favour to us (in 2019) on a 4K lumix and an iPhone.

It’s the camera you have on you, innit.


People vlogging would probably prefer the ergonomics and weight of a phone over something more serious, so I wouldn't be surprised if this competes with the GoPro on functionality and image for 'walk and talk' people?

Guerrilla film-makers will probably love it - iPhones aren't really noticed and are allowed in plenty of places where serious cameras aren't.

I can also see it being useful for some specific commercial ad work, I've seen people specifically shoot on phones to get the relatable phone 'look' for specific shots.


Steven Soderbergh shoots an awful lot on an iPhone. He's made at least two movies with them. I'd argue that he is a 'pro'.

https://www.indiewire.com/features/general/steven-soderbergh...


Student filmmakers and the like -- ultra-low budget independent films, comedy webisodes on YouTube, and so forth.

Basically everyone who wants to make films but doesn't have money for a real pro camera.


“Enthusiasts”, YouTubers, …


Why? What's the benefit of having slightly more options in postproduction when your output goes to a compressed, 8-bit, low-bitrate platform that most people consume on tiny mobile screens with a blue light filter?


This is not slightly more options. Log is a lot of extra data for post processing.

Most people won't notice the difference that is true but this is true even for real cinema. Most people don't notice or care about things such as shooting on real film, aspect ratio etc.

The why is that people who create care about these things. Color grading and post shot creative manipulation, these things benefit from extra data. If I can't recover shadows or pushing white balance results in banding, those things are going to still be visible on a phone by myself and the small amount of people who do care.


Log is just a curve that saves the image differently. The extra data is in the chroma subsampling (4:2:2), bitrate, and bit depth (10-bit). For example, log in 8-bit is absolutely useless. You won't get higher dynamic range; that's determined by the sensor size.


And that curve allows you to process colors/contrast with far fewer artifacts than a file saved with more contrast/saturation baked in, especially if you compound effects on top of each other.

Btw are you a photographer or a video editor?


It might in some situations, but generally you will deal with banding anyway (8-bit). Log needs higher ISO, which means lower dynamic range [1]. Flat footage lacks depth, so you'll need to edit it, which creates new room for error.

Yes, I do audiovisual production and postproduction for living.

[1] https://www.photonstophotos.net/Charts/PDR.htm


Because some people watch YouTube in 4K on their Apple TV?

Also, compression doesn't lessen the effect of color grading.


First, some people use it as a status symbol; naturally they will buy the Pro whether or not it's useful.

Then there are tons of semi-professional (semi w.r.t. photo/video capture, not w.r.t. their job) people on platforms like YT, TikTok and similar who only use their phone for capturing; they probably love this.

Let's also not forget the people who aspire to success on YT, TikTok and others.

Similarly, a lot of hobby photographers don't bother with dedicated gear anymore, so they might like this too.

And the group of people who don't carry around a laptop (e.g. on holiday or even business trips) but might want to send slightly improved photos from there isn't small either.

Anyway, I guess the main selling point is "not looking poor" for those with confidence issues or, more likely, the bad luck to live in less healthy social circles.


That was my intuition as well - even starving film students will rent an ARRI or a RED with whatever lenses for student projects, and everyone else just doesn't care.


Most iPhone users probably use 20% of phone capabilities (each probably a different 20%).

It's about being perceived as the best phone.

Most Lamborghini buyers don't drive it at top speed either.


Does this also disable the excessive sharpening of iPhone's video processing?

Even 'ProRAW' photos are sharpened and aggressively denoised, which ruins detail.


The sharpening is just the default display setting for ProRAW files in many apps. If I edit a ProRAW file in Lightroom and turn down the sharpening and denoising then it looks roughly as you’d expect (though there is some denoising inherent to multi-shot blending). It’s a tricky one, because when you’re aligning and stacking multiple captures, you definitely do want some sharpening (as the result is significantly softer than a single capture would be). But the default is maybe a little aggressive.


It’s mentioned in the article - turns sharpening way down. The footage still goes through iPhones ISP - with denoising etc - just with less processing and log profile


This was a great read, clear, concise, and entertaining. It also struck me how "over powered" phones are these days. I get that the latest chip with the latest GPUs/APUs/IPUs/whatever are really capable, but the weird "I don't need a cinema capable device in my phone" feeling starts to get overwhelming. If nothing else I feel like we're going to force Apple (and others) to go back to easily replaceable battery technology because a "phone" will meet its user's needs for a decade or more and the parts that wear out will need to be replaced several times.


Yes, it’s almost like a post to explain why you wouldn’t be getting the most out of this model phone if are you aren’t doing high end photography.


That’s a feature in the prosumer market, not a bug


The video is so good I watched it to the end despite not even having iPhone nor having any plans to have it or shoot videos. Packed and succinct.

But it makes me wonder how soon we'd see... SSD iPhone cases? Because you can always duct tape an external drive to the phone, but it would block the screen. *grin* Sure, you can use double-sided tape, but... And slightly tangential: how short and compact can a USB-C cable be? Sure, there are tons of angled ones on the market, but I assume they aren't guaranteed to give you full 10 Gbps Gen 2 speeds.


I think that would not be a short cable but more like a Mophie juice pack (a case with an external battery). Also, back in the day I bought a card reader for an iPad 3, for the 30-pin connector. It could be something similar, as SD cards are quite big these days as well.


I’d assume that SD cards don’t provide the required write speed, limited by their flash memory or even the interface specs.


> more like Mophie juice pack (it’s a case with external battery).

>> SSD iPhone cases

But yeah, this is what I vaguely had in mind, though I forgot those rigid cases even existed, because my phones last at least two days on their own *finger guns*.

Considering the case would be quite fat and rigid for the SSD alone, it would make sense to skip cable shenanigans and just use a 'dock style' connector.


> Sure, you can use double sided tape, but...

You got it wrong. Camera people LOVE the tapes, look at any picture of a movie shooting behind the scenes and you'll see every camera has 4-5 pink/yellow/green sticky notes on it.

So taping an SSD to your iPhone while shooting a movie is actually a cool thing. It gives you that authentic "don't give a shit" look.


I'm thinking a hand-held mini Steadicam with a hard disk as the counterweight.


I don't know anything about photography, but curious to know what the cheapest "pro" alternative is. The phone is now $1200! This feature is cool, but if you wanted that feature, is it cheaper in a purpose-built device?


The original Blackmagic Pocket Cinema was about $500 on sale, but only shot in 1080p. The new generation is about $1300. And then you have to buy lenses :)

So, overall it's not bad value. It's not going to replace pro gear any time soon, but it's great that it can do stuff like this now.


A Canon EOS M can be bought for around $100 used and shoots 1440p raw with Magic Lantern. It can also use cheap old 16mm film lenses in crop mode.


You can buy a two year old Android phone to do the same thing, which is very inexpensive. Just disable every app except the camera app.


Log does not work great with heavy computational video right now. The same was true of RAW on phones until manufacturers found ways to bake computed (i.e. stacked, stabilised, etc.) data into RAW, like Xiaomi, Huawei, Pixel, and Apple.

That's the weakest point of phones: by exposing a log curve, you showcase exactly the poor latitude of a phone sensor. Given the price and the additional rigging (and cooling) required, just save up and get a BMPCC.

The argument for pulling out a phone, handheld, for night photography (first seen on the Huawei P30) or slow motion and getting a really great image is valid.

The argument for casually pulling out a phone with cases, cooling, an external SSD, a mounted battery, matte boxes (especially given the strong glare on the iPhone), and a camera stabiliser (because a phone stabiliser is not built for this weight), all adding up to over 1.5 kg, does not sound valid.

Log is fairly useless outside of controlled or professional shooting, as exposure matters a lot more, and the lack of IRE exposure tools (i.e. false color) makes it impractical.


Great video btw. Well explained and as succinct as possible.


I am by no means someone who shoots a lot of videos, but I have been playing a lot in the past few days with the camera app mentioned in this article, Blackmagic Camera, and I am super excited to do some shots that might seem a bit more professional.


It's a bit of a gimmick. These phones just don't have the noise performance to make log video work outside of very, very specific conditions. I also had it on my old LG V30, and it was only remotely useful in full sunlight (and since we're talking about very low processing, not much has changed since then).

This is inevitable because the noise floor is just too high to have a large usable dynamic range unless illumination is high.

Combined with video compression it's just not great. It's not really even unique to smartphones, many DSLRs/MILCs when they first started supporting log video had similar issues, but obviously it's going to be much worse for a smartphone.


The sensor size is the limiting factor here re: noise performance and dynamic range, but the iPhone does better than most with some ML and other “computational photography” tricks for denoising that other small sensor cameras don’t have. And ProRes is a great codec that doesn’t really have compression artifacts at high bitrates.

It’s not going to replace your Alexa or even full frame dSLR but can be useful and is a welcome evolution.


ProRes is actually worse at the same bitrate when compared to H.265 or AV1, and sometimes even worse than H.264. The advantage is that it's easier to edit because it's all intraframe, not that it has higher quality - it's worse. Even intraframe HEVC is going to be better than ProRes. H.265 really is going to give you consistently better PSNR and SSIM than ProRes, and that shouldn't be surprising - interframe compression is better for quality.

The only advantage here is slightly better denoising, yes. But denoising doesn't change the dynamic range limit, it just hides the noise visually. The SNR is actually going to be lower.


This is not true. We've compared RAW video from smartphone cameras to a couple of professional cameras and the difference is not as much as you might think.


I don't understand what you're trying to say. Yes, in absolutely perfect light, if it has the focal length you need, it's going to be good. But that's a very particular situation, we both know that as soon as the ISO comes up it's not even close, and soon enough shooting in log then becomes a gimmick.


Have you done any comparisons that you can point to? The iPhone 15 hasn't even been out all that long. The video in the sibling comment notes that the iPhone does surprisingly well in terms of dynamic range, for example (7:12).


There are two limiting factors in dynamic range. Those are the dynamic range of the sensor itself (which is and has been great, even on my V30), and the difference between the highlights and the noise floor - and the noise floor is fixed. The iPhone 15 doesn't have a meaningfully lower noise floor than the iPhone 14 and it's only barely better than the V30 - or any other smartphone with a BSI sensor and a large aperture.

The video is exactly consistent with what I'm saying. All examples are either in broad daylight or strong studio lighting.


As far as I can tell, you haven't done a comparison in low light conditions of the iPhone 15 against any other video camera. Of course one would expect a camera with a larger sensor to do better in low light, all else being equal. But all else is not equal. Professional video cameras aren't stacking multiple exposures to construct a single frame, and don't have anything like the computational power of the iPhone 15. So it would be much more interesting to see a real comparison than to hear people repeating abstract theoretical points over and over again.


> Professional video cameras aren't stacking multiple exposures to construct a single frame

We are talking about video.

> and don't have anything like the computational power of the iPhone 15

They have far more, because the processing is done in post with powerful workstations. For stills you have a point because of multiframe techniques you can't easily do in post, but that doesn't work at all for video.

> repeating abstract theoretical points over and over again

You don't have to. There are dozens of practical comparisons done on dozens of smartphones over the past years. No one has really done it on the iPhone 15 (yet) because it just came out, but there is no reason why it would be different. People have done the comparison with the iPhone 14 and even without log formats results are far worse in anything but perfect light, even compared to hobbyist-grade video cameras costing less than the phone. This will be even worse for log video by the nature of the logarithmic transformation.


>We are talking about video.

The iPhone does multiple exposure HDR in video too. That's why the dynamic range is so good.

>They have far more, because the processing is done in post with powerful workstations.

If you are shooting RAW video, sure. Otherwise a significant amount of processing has to be done on the camera.

By all means link to any comparison that you think is relevant. But if it doesn't involve an iPhone 15, it doesn't tell us much about the iPhone 15. It especially doesn't tell us much about a video mode that's only available on the iPhone 15.


> The iPhone does multiple exposure HDR in video too. That's why the dynamic range is so good.

This doesn't do anything for the SNR dynamic range limitation. Smartphone sensors nowadays become noise-limited in dynamic range very rapidly. Multiple exposures in video mode reduce total exposure and are only worth it when there's abundant light. They also mean you can't guarantee a 180° shutter angle, so you'll need to disable them for smooth movement.

> If you are shooting RAW video, sure. Otherwise a significant amount of processing has to be done on the camera.

The only additional preprocessing is debayering and color transformation, none of which prevents the type of processing we're talking about. It doesn't have to be in-camera.

> By all means link to any comparison that you think is relevant. But if it doesn't involve an iPhone 15, it doesn't tell us much about the iPhone 15. It especially doesn't tell us much about a video mode that's only available on the iPhone 15.

Log video is not a video mode that's exclusive to the iPhone 15. Various phones have had it since 2017. The only difference is the denoising and sharpening and that's a known quantity.
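
For anyone unfamiliar with the shutter-angle jargon: 180° just means the shutter is open for half of each frame's duration, so the relationship is simple (rough sketch):

    # Shutter angle -> exposure time per frame.
    def shutter_time(fps, angle_deg):
        return (angle_deg / 360.0) / fps

    print(shutter_time(24, 180))   # 1/48 s, the classic "cinematic" motion blur
    print(shutter_time(30, 180))   # 1/60 s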


See https://www.youtube.com/watch?v=vUn68iFMNXY. Remarkably similar. The big differences seem to be the unavoidable ones arising from a small sensor, like bokeh and DOF, and especially the consistency of colour from the ARRI, which makes grading a breeze. But the fact that iPhone footage can be intermingled with $100k camera footage (after grading) without being noticeable is shocking.


Why should it be? The V30 came out in 2017 and people made exactly the same videos about it, about how you could intermingle the footage with an Alexa or a RED. You can, yes, only in perfect light and if the focal/aperture works for the scene. That's rare, and that's why it hasn't caught on.


Okay. Well, um, I guess we should all just pack up and go home. We're done.


Here's a great list of free LUTs. I use them with darktable when editing raw photograph files:

https://github.com/cedeber/hald-clut


Constant puzzle to me:

Video is just made up of single sequential still images. Why are the color issues / tuning of videos like 100x more complicated and involve so many more tools than those of still images?


> Video is just made up of single sequential still images.

It might seem like this, but modern video codecs are very advanced and video files produced by modern cameras cannot really be described as sequential still images.

Codecs such as HEVC use many techniques to reduce the file size while preserving the image quality, like various frame prediction techniques and encodings. This makes workflows completely different.

Images produced by smartphone cameras use different software and hardware features to produce the best possible image, and many of those cannot be equally applied to video.


I'm very far from photography. Can someone explain why this is revolutionary or something? I thought that for "pro" photos you just get the original pixel values from the sensor and work from there: X, Y, and some color format of a specific bit depth (is that RAW?). What does "log" do, basically? Save space? Limit the exposure? Or is it just a format that has existed for decades and the iPhone decided to support it? Or is the industry in such a state that integrations like this are huge? It's not clear from the video. Thanks!


The article does a pretty good job of explaining it. I went from complete ignorance to layman’s understanding in the 10 minutes it took me to read it.


It actually doesn't, it doesn't explain it at all.

Because camera data is already encoded logarithmically in common image and video formats.

The article didn't explain what its "log" is at all. Is it the same as gamma? The same as HDR? Something professional? Or something new?

If you don't know anything about photography, it seems like they're explaining something new. If you do know things about photography, you realize the article is full of buzzwords that aren't actually explaining anything at all.


See my response in https://news.ycombinator.com/item?id=37877599. "Log" is basically just another transfer function (like srgb, pure-power gamma, etc.). It's confusing because despite the seeming similarities video and still-image people don't use any of the same workflows and thus don't have any shared terminology to talk about things (video people don't use an ICC color managed workflow, and separately think about an EOTF and OOTF. They use the term LUT to describe the transformation a CMS would normally do to convert between color spaces. On the other hand with an ICC-style color management an EOTF and OOTF are by definition inverses of each other).

The main benefit of log as opposed to gamma encoding, as far as I can see, is that log has the nice property that each f-stop gets allocated the same number of bits. While gamma encoding tends to allocate fewer bits to the brightest parts of an image so as to allocate things in a "perceptually" fair way (at least according to Stevens' power law model), log encoding tries to allocate things in a "physically/optically" fair way, which I guess ends up working out better for editing?

Of course in terms of benefits I suspect that "log" mode also reduces a lot of post-processing.
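
To show what a color-managed viewer would do with a "flat" log frame (the point above that the look is just an un-applied inverse transfer function), here's a minimal sketch with toy curves, not real vendor math:

    import numpy as np

    def log_decode(v, a=16.0):
        # Inverse of a toy log encoding v = log1p(a*x)/log1p(a); not a real camera curve.
        return np.expm1(v * np.log1p(a)) / a

    def display_encode(x):
        # Pure 2.2 gamma standing in for the display's expected encoding.
        return np.clip(x, 0.0, 1.0)**(1 / 2.2)

    log_pixel = 0.5                               # a "flat-looking" log code value
    print(display_encode(log_decode(log_pixel)))  # what a color-managed viewer would show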


As far as I naively understand it, the number of distinct colors in a digital image is limited, so "log" mode uses the available code values in such a way that more detail is preserved in highlights and shadows. But then "color grading" or a "color look-up table" needs to be applied to recover the original colors, because the "log" video looks greyish and washed out.


Speaking of their raw formats: what's up with the iPhone HEIC photo format? I just convert all HEIC to JPEG on my MacBook and have no idea what advantage this format offers me.


I use it on my Android phone to use less storage space. From what I remember of initial testing, the pictures were half the size.


HEIC isn't anything close to RAW. It's "normal" processed photos, just with a different encoding format.


Yeah, what the heic?


I always wondered why some of the raw-vs-edited videos on social media show the raw one as a very washed-out and unsaturated picture. I even thought they made the raw look bad on purpose, so that the edited one looks great.

I've never owned a pro camera, only a smartphone. So reading this article I learned that it was washed out and unsaturated for a good reason. Is this log thing 15 Pro specific, or is it software, so we could use it on an older iPhone?


This was a huge problem even for professionals at one point - there was a time, starting a bit before 2010, when more cameras started to switch from doing all the processing in camera to raw and log recording, and people didn't understand how to work with it, properly expose it, etc.

I remember, for example with RED's first cinema camera, seeing people do shootouts (camera comparisons) where they'd record and compare the low-quality, partially decoded monitor output that didn't have a proper LUT applied against HD cameras that did all their processing in camera. Later cameras could do all the 4K processing and apply proper LUTs in hardware in real time, but earlier ones didn't have the processing power; you had to do it all in post. People just didn't get it, and it worried them when things came out all washed out before applying any kind of LUT.

Crazy some of this is trickling down into phones.


Great video - Concise and informative, without any guff.


A question to the experts here: What happens if we take the "log" video and compress it using current compression algorithms and ship the LUT with it?

If I understand this correctly, the quality of dark scenes should then improve.

Would that then also allow users to edit the LUT or supply their own to get a better picture on their respective monitors?


Storing data on a logarithmic scale requires a higher bitrate otherwise it will actually look worse.

So for a finished product, the way it is now is better.

Plus storing extra data in the dark areas for display is pointless. You only do it because you want to manipulate the data and maybe bring something out of the dark area, but in the final product, dark is dark.

If you, for whatever reason, wanted to give the user more data, then you would provide a 10-bit or 12-bit linear file.


Thank you for your explanation.

Do I understand correctly that this idea wouldn't work because you either end up with the same quality but bigger files, or with the same file size but worse quality?

I had this idea that you could increase the number of near-black shades a monitor could display if you combined 4 pixels: having 3 black, and one of them "one bit" less black. This way you could have a more fine-grained black-to-white transition.

I thought a "log" video might be able to contain the information to allow this without too much overhead.
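
For what it's worth, the 2x2 idea is basically spatial dithering: trading resolution for apparent tonal resolution. A rough sketch (values are in quarter-code-value units purely for illustration):

    import numpy as np

    # Each 2x2 block lights up 0-4 of its pixels one code value brighter, which
    # from a distance reads as ~2 extra bits of tonal resolution.
    def dither_2x2(img):                     # img values in units of 1/4 code value
        base = np.floor(img)
        frac = img - base                    # 0, 0.25, 0.5 or 0.75
        pattern = np.array([[0.125, 0.625],  # Bayer-style 2x2 thresholds
                            [0.875, 0.375]])
        h, w = img.shape
        thresh = np.tile(pattern, (h // 2, w // 2))
        return base + (frac > thresh)

    img = np.full((4, 4), 10.25)             # "a quarter of a code value above 10"
    print(dither_2x2(img))                   # one pixel per 2x2 block bumps to 11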


Log + LUT as a final product would look the same but take up more file size.

Right now we can transmit levels of black with a regular 8-bit PNG file. The issue is that you can’t display it unless you are using an OLED screen, which your phone probably has but your computer monitor doesn’t (usually due to $$ for your average person).


Could the title be "Logarithmic encoding is the …"? It's pretty confusing as "Log is the…"


"Log" is the term in video. You wouldn't spell out "Solid State Drive," would you?


There are about 20 other uses in TFA which don’t use it as an acronym


"logarithmic encoding" appears exactly once in TFA, but ok


You OK? Having a bad day?


I am betting that in 5 years Apple will come up with a 1" or 3/4" sensor for their phones. iPhone 20, maybe.


Is this essentially RAW at 24-60fps?


It is less processed, but TFA points out that it is definitely not RAW, particularly in the section titled "Log is Half Baked."


That'd be 'ProRes RAW', which I don't think an iPhone can shoot in. Log is still processed video, just encoded in a flatter profile so you have more room for adjustments in dimensions like color and exposure.

RAW footage can barely be called video. Those files don't even have white balance and ISO baked in, just raw data from the sensor, providing even more control in post production at the expense of working with extremely large files.


Also, just to muddy these waters, Apple has the ProRAW format--which isn't a video format at all, it's a still-picture format.

https://support.apple.com/en-us/HT211965


Comparison of RAW and LOG on the same site: https://prolost.com/blog/rawvslog


No, it's demosaiced[1]. Not raw data.

[1] https://en.wikipedia.org/wiki/Demosaicing


So then why not just use the raw data? Demosaicing triples the number of pixels, so expands the data by 3x.


(Conceptually, the number of pixels remains the same but the result of demosaicing is RGB pixels so what triples is the number of channels.)

I hear it's good to perform demosaicing, denoising and super-resolution in one step, so perhaps that's what's happening here?

EDIT: on the video (section "Log is Half-Baked"), they also mention the processing includes tone mapping, color adjustment and lens distortion correction.
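
A toy sketch of the demosaicing step being discussed (nearest-neighbour over an RGGB pattern, purely to show why the demosaiced result carries 3x the values of the raw mosaic; real pipelines use much better interpolation, often fused with denoising as noted above):

    import numpy as np

    def demosaic_nn(bayer):
        """Nearest-neighbour demosaic of an RGGB mosaic (H, W) -> (H, W, 3)."""
        h, w = bayer.shape
        rgb = np.empty((h, w, 3), dtype=np.float32)
        for y in range(0, h, 2):
            for x in range(0, w, 2):
                r = bayer[y, x]
                g = (bayer[y, x + 1] + bayer[y + 1, x]) / 2
                b = bayer[y + 1, x + 1]
                rgb[y:y + 2, x:x + 2] = (r, g, b)  # every pixel in the block gets full RGB
        return rgb

    bayer = np.random.rand(4, 4).astype(np.float32)   # raw: one sample per pixel
    print(bayer.size, demosaic_nn(bayer).size)        # 16 vs 48 values -> 3x the data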


A lossy compressed intra-frame codec such as ProRes 422 HQ still gives a lower bitrate.


The video is still compressed, but the colors are RAW. Sort of.


Slightly off-topic, but is there a way to disable some of the photo processing on iPhones?

For example, iPhones automatically brighten people once they detect a face, which is especially noticeable when taking photos against the light. It ruins the contrast and makes the photo look really bad.

Is there a way to turn that stuff off?


The purist's answer is to shoot RAW, but many just want a solution that doesn't require any post-effort off the device. For that, the answer is "sort of":

1. Taking Live Photos can be a good workaround, since this mostly preserves the individual frames used to build the full image. Use the built-in photo editor to go to a different frame in the Live Photo and set it as the Key Photo - this dodges a lot of the HDR/image adjustment process.

2. Using the AE/AF lock can prevent a fair bit of the automated adjustments from taking place. (Tapping the screen on dark areas or hot points will adjust the exposure, holding your finger for a moment will turn on the AE/AF lock keeping it there as you move the phone around.)

3. If it's not just for one shot but all the shots you take, go to the Camera settings and turn on preserving the "Exposure Adjustment", which starts each camera session with the exposure setting preserved from before - and simply keep this at 0.0. This can also make it more straightforward (one tap) to undo much of the automatic levelling without locking AE. Similarly, you may want to disable night shooting for darker scenes by preserving that setting too.

4. On older iPhones, disable the HDR mode.


Adding on to this, newer iPhones have a ProRAW feature. Various tech youtubers have documented ways to create a shortcut in the Shortcuts app to automatically convert them to HEIF or JPEG, which might impose slightly less processing.

Similarly, ProRAW photos imported to Apple Photos on Mac can be reduced in processing by clicking Edit Photo, and then hitting Save Changes without necessarily changing any settings (or just changing something inconsequential would work too). Doing editing forces the Photos app to discard the preprocessed image embedded in the ProRAW and replaces it with a software processed version which I find slightly more natural. However, ProRAW is still processed by the phone, so you need to download a third party app and shoot classic non pro RAW (then use the same conversion shortcut maybe) on top of it.


This is a slightly off topic response as well, but some Android phones are currently capable of shooting RAW video in 12MP, sometimes at 60fps (more than 4K 8.3MP). Google shot a music video [1] (unfortunately only available in 1080p) on the Pixel 7 Pro using a third party app called MotionCam Pro. The app shoots RAW video which can be imported into DaVinci Resolve or similar, or rendered to a mp4 in-app with any log profile you prefer optionally, and has no processing applied.

According to the developer of something called AMVR [2], the quality obtainable is much higher than that of even the aforementioned log files from iOS.

I asked whether it would be feasible to shoot DNG on iOS as video and was told that iOS lacks a camera API that is performant enough, resulting in several fps only. I haven't personally tested it though, so maybe this could be a fun project.

[1] https://www.youtube.com/watch?v=SyeS_xYxCLI&pp=ygUfYW15IHNoY...

[2] https://www.youtube.com/watch?v=_Xra4ATrWZ4


I'm the developer of MotionCam Pro. I have no affiliation with AMVR, it bothers me that they present themselves as somehow associated with me.

As a side note, whenever Apple takes steps to release a feature aimed at professionals there is a significant uptick in users trying out MotionCam. I think there is a small but very vocal group of users that have wanted something like this for years but have not been catered to.


> whenever Apple takes steps to release a feature aimed at professionals there is a significant uptick in users trying out MotionCam.

Ha, and here I am, learning about your app for the first time, and about to install it... :)


It might be worth trying Halide? I know they used to pitch "we make photography better" with a pro-photographer angle. To an extent it depends on what the camera provides, but I recall Halide kind of implied they got raw data?


Halide has several modes, one of which gives you the full unadulterated RAW, optionally doing in-app JPEG conversion with less offensive processing. These photos have more noise due to the limitations of the sensor.

Halide, like the official camera app, can also shoot ProRAW, but ProRAW is not totally raw: it has been processed with frame stacking to reduce noise, which introduces sharpening artifacts.


I think you gotta buy a more professional camera app to fix stuff like that, and even that doesn’t work for some things.

I'm always pissed about how bad the iPhone is at taking photos on rainy days.


I think Blackmagic's iPhone app disables a lot of the automatic processing; however, I haven't tested the post-processing behaviour myself.

Given that it gives such fine-grained control of the sensor, it wouldn't surprise me though.


The Blackmagic Camera app doesn't take pictures. It's for video.


Quite right, this shows how little I actually looked at it.


Well, there are some other apps like Halide, but I doubt they are free.


Halide costs $12 a year (or $60 as a one-time purchase, I believe). I don't like subscriptions, but given the app has replaced $1,500 worth of camera I used to carry with me, I'm OK eating the $1 a month.


The best thing is to shoot in RAW or ProRAW and post-process yourself

Edit: Fix typo


I don't like this new modern color grading trend in videography; to me everything looks yellow and washed out.

What's wrong with contrast?


What screen are you looking at it on? There are all sorts of colour grading styles; it does go through trends, but punchy images are still very common, and I'm not sure what you're talking about with the yellow ones…


Look at the picture of the dog: some of its fur is overexposed, and you can’t get the values back. A logarithmic scale means you lose less detail at the extremes (bright and dark), so the log picture isn’t overexposed.


I think the question was more about the general tendency to make movies with that teal/orange look, which is still going strong after 12 years.


> Apple shocked us all by addressing this head-on: The iPhone 15 line charges via USB-C instead of Lightning, and this standard USB port can do a lot


Even more so than their revolutionary new design?


[flagged]


Blammo's best seller!


The way I see it, it's the video equivalent of a RAW file for a still picture.


The article links to another article comparing them.

https://prolost.com/blog/rawvslog


LUTs are pretty basic stuff, and video games have been using them for ages, plus HDR, tone mapping and color grading. Old stuff.


I have implemented all these algorithms myself, but thanks for the downvotes :D


Could someone please confirm or deny that Samsung has it better because they included 10 times larger sensors, 20x better zoom, titanium since 2017, and hired BTS for their ads?

Reference: https://youtu.be/dLHJl7mwY7M?si=e0Cm2q4bnn_u14Qf


Since Apple sources their camera chips from Sony, the more relevant question is how Sony's flagship (the Xperia 5 V?) stacks up with its Sony IMX888 vs the Sony IMX803 in the Apple. It used to be that Sony's flagship had a slightly better imaging sensor than the Apple flagship, but I don't think that's the case any more. At any rate, Android doesn't seem to provide an API that exposes log/raw-ish sensor data for video (although for stills, it's been there since 2015).


How could that possibly be confirmed or denied? If you’re not asking a question don’t use a question mark: just make your point.


> hired BTS for their ads

Hardly a selling point.

> titanium since 2017

Source?

> 10 times larger sensors

4 times (48mp vs 200mp), and zooming into the highest quality images produced by both cameras yields little difference.

This comment feels like grasping at straws. The iPhone and Samsung both have excellent cameras, and excel in different places. Camera comparisons by professional photographers show that much. In my non-professional opinion, Samsung wins in some edge cases, but the iPhone generally has a better colour profile and undeniably better video capabilities. Debating minute differences in camera processing in a text-only forum feels counterproductive, especially when plenty of professional comparisons exist.



