
This is interesting to me because you have all the right facts and are reasoning well with them. But, we end up at: "Yeah you're right it wasn't killed, just a rebrand, so they'll probably just delete the code for it"

I worked at Google, and I can guarantee you people don't go back and change names in old code for the latest rebrand, done as eyewash 4 layers above me. Not out of laziness, either; renaming has zero value and is risky.

Also, video conference perf was/is a pretty big deal (cf. the variety of sibling comments pointing out where it is used, from gSuite admin to the client app). It is great on ye olde dev machine, but it's very, very hard on the $300 Wintel/Chromebook thrown at line-level employees.

FWIW, they shouldn't have hacked this in; I do not support it. And I bet they'll just delete it anyway, because it shouldn't have been there in the first place. Some line-level employee slapped it in because, in the wise words of Ian Hickson: "Decisions went from being made for the benefit of users, to the benefit of Google, to the benefit of whoever was making the decision."



Sure, I was sloppy in my use of the term "dead". Hangouts the product/brand ceased to exist, Hangouts the codebase lives on. It was ever thus. I worked at Google too, y'know ;)


Cheers


Google videoconferencing runs astronomically better on a $300 Chromebook than on a $2500 Intel Mac.


Heh, 100% agree. I switched to a Chromebook when WFH started because of it. It couldn't handle an external display, but at least it wasn't painfully bad.


This decision was to the benefit of users if it got videoconferencing off the ground before Zoom came along.

(I swear, sometimes I think the Internet has goldfish-memory. I remember when getting videoconferencing to work in a browser was a miracle, and why we wanted it in the first place).


Okay.

Pretending you said something conversational, like: "Is that quote accurate in this case? The API may have literally enabled the creation of video conferencing. I, for one, remember we didn't use to have it."

I see.

So your contention is:

- if anyone thinks a statsd web API, hidden in Chrome, available only to Google websites is worth questioning

- they're insufficiently impressed by video conferencing existing

If I have that right:

I'm not sure those two things are actually related.

If you worked at Google, I'm very intrigued by the idea that we can only collect metrics via a client-side web API for statsd, available only to Google domains.

If you work in software, I'm extremely intrigued by the idea that video conferencing wouldn't exist without a client-side web API for statsd, available only to Google domains.

If you have more details on either, please, do share.


Scoping the data collection to Google domains is a reasonable security measure because you don't want to leak it to everybody. And in general, Google does operate under the security model that if you trust them to drop a binary on your machine that provides a security sandbox (i.e. the browser), you trust them with your data because from that vantage point, they could be exfiltrating your bank account if they wanted to be.
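To make that scoping concrete, here is a minimal sketch of the kind of origin check such a gate implies. Everything here is illustrative, not Chrome's actual code: the allow-list contents and function name are invented. Note the suffix match has to be anchored at a dot, or a domain like `evilgoogle.com` would slip through:

```javascript
// Hypothetical sketch of origin-scoped API exposure: a privileged API is
// only made available when the requesting page's origin is on an allow-list.
// Suffix list and names are illustrative, not Chrome's actual implementation.
const ALLOWED_SUFFIXES = [".google.com", ".googleusercontent.com"];

function isOriginAllowed(originUrl) {
  const { protocol, hostname } = new URL(originUrl);
  if (protocol !== "https:") return false; // require a secure context
  return ALLOWED_SUFFIXES.some(
    // Match either the bare domain ("google.com") or any subdomain of it,
    // anchored at the dot so "evilgoogle.com" does not match ".google.com".
    (suffix) => hostname === suffix.slice(1) || hostname.endsWith(suffix)
  );
}
```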

But yes, I don't doubt that the data collection was pretty vital for getting Hangouts to the point it got to. And I do strongly suspect that it got us to browser-based video conferencing sooner than we would have been otherwise; the data collected got fed into the eventual standards that enable video conferencing in browsers today.

"Could not have" is too strong, but I think "could not have this soon" might be quite true. There was an explosion of successful technologies in a brief amount of time that were enabled by Google and other online service providers doing big data collection to solve some problems that had dogged academic research for decades.


To be more clear:

After your infelicitous contribution, you were politely invited to consider that _a client-side web API, only on Google domains, for CPU metrics_ isn't necessary for _collecting client metrics_.

To be perfectly clear: they're orthogonal. Completely unrelated.

For some reason, you instead read it as an invitation to continue fantasizing about WebRTC failing to exist without it.


What would the alternative be?

(Worth noting: Google Hangouts predates WebRTC. I think a case can be made that big data collection of real users' machine performance in the real world was instrumental for hammering out the last-mile issues in Hangouts, which informed WebRTC's design. I'm sure we would have gotten there eventually; my contention is it would have taken longer without concrete metrics about performance).


I made this.

  +------------------+
  |   Web Browser    |
  | +--------------+ |
  | |  WebRTC      | |
  | |  Components  | |
  | +------+-------+ |
  |        |         |
  | +------v-------+ |    +---------------+
  | | Browser's    | |    |   Website     |
  | | Internal     | |    | (e.g. Google  |
  | | Telemetry    | |    |  Meet)        |
  | +------+-------+ |    |               |
  |        |         |    |  (No direct   |
  | +------v-------+ |    |   access to   |
  | |  CPU Stats   | |    |   CPU stats)  |
  | |  (Internal)  | |    |               |
  | +--------------+ |    +---------------+
  +------------------+
           |
           | WebRTC metrics
           | (including CPU stats as needed)
           v
  +------------------+
  |  Google Servers  |
  | (Collect WebRTC  |
  |    metrics)      |
  +------------------+
Another attempt, in prose:

I am referring to two alternatives to consider:

A) Chrome sends CPU usage metrics, for any WebRTC domain, in C++

B) as described in TFA: JavaScript, running on allow-listed Google sites only, collects CPU usage via a JavaScript web API

There's no need to do B) to launch/improve/instrument WebRTC; in fact, it would be bad to only do B), given that the full population of WebRTC implementers is a much less biased sample for WebRTC metrics than Google's own sites alone.
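For concreteness, a sketch of what alternative B looks like from the page's side. This is pure illustration: the `privateCpuApi` parameter and the report shape are invented here, not the actual Meet code. The page merges standard WebRTC stats with the private CPU numbers only when the browser has injected them:

```javascript
// Illustrative sketch of alternative B: page JavaScript on an allow-listed
// origin merges standard WebRTC stats with a private CPU API, if present.
// `privateCpuApi` and the report shape are invented for illustration.
function buildHealthReport(rtcStats, privateCpuApi) {
  const report = { rtc: rtcStats };
  // The private API only exists on allow-listed Google origins, so the
  // page must feature-detect it rather than assume it is there.
  if (privateCpuApi && typeof privateCpuApi.getUsage === "function") {
    report.cpu = privateCpuApi.getUsage();
  }
  return report;
}
```

In alternative A, none of this page-level code exists: the browser itself reports CPU alongside its WebRTC telemetry, for every site using WebRTC, with no domain allow-list involved.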

I've tried to avoid guessing at what you're missing, but since this has dragged out for a day, I hope you can forgive me for guessing here:

I think you think there's a _C++ metrics API for WebRTC, in Chrome only, with no web app access_ that _only collects WebRTC metrics on Google domains_, and from there we can quibble about whether it's better to have an unbiased sample or if it's Google attempting to be a good citizen by collecting data only from its own domains.

That's not the case.

We are discussing a _JavaScript API_ available only to _JavaScript running on Google domains_ to access CPU metrics.

Additional color commentary to further support that there isn't some WebRTC improvement loop this helps with:

- I worked at Google, and it would be incredibly bizarre to collect metrics for improvements via B) instead of A).

- We can see via the rest of the thread that this is utilized _not for metrics_, but for features such as gSuite admins seeing CPU usage on VC, and CPU usage displayed in Meet in a "Having a problem?" section that provides debug info.


I also worked at Google, and this kind of telemetry collection doesn't seem surprising to me at all. I don't know if you are/were familiar with the huge pile of metrics the UIs collect in general (via Analytics). I never worked on anything CPU-intensive enough to justify this kind of back-channel, but I don't doubt we'd have asked for it if we thought we needed it... And you'd rather have this as an internal Google-to-Google monitor than punch a big security hole open for any arbitrary domain to query.

JS is easier to debug (even with Google's infrastructure), and they have no need of everyone else's videoconference telemetry (which, when this was added, would have been, iirc, Flash-based).

I believe what Google learned via this closed loop informed the WebRTC standard, hence my contention it got us there faster. Unless I've missed something, this API had been collecting data since 2008; WebRTC came 3 years later.

I think you've misunderstood my question regarding "What would the alternative be?" I meant: what would the alternative be to collecting stats data via a private API only on Google domains, when we didn't have a standard for performance collection in browsers? We certainly don't want Google railroading one into the public (with all the security concerns that would entail). And I guess I'm just flat out not surprised that they would have dropped one into their browser to simplify debugging a very performance-intensive service that hadn't been supported in the browser outside plugins before. Is your contention that they should have gone the Flash route and done a binary as a plug-in, then put telemetry in the binary? Google had (and mostly still has) a very web-centric approach; doing it as a binary wouldn't be in their DNA.



