Hacker News

> "Can you hear me? I can't hear you"

Would you mind explaining why so little (or nothing) has been done in that area? I'd guess we could implement some assistive checks for the input and output audio streams. What is blocking us from implementing a "handshake" that verifies everything is fine? Say, the clients send a recorded "Hello World" sample over the wire and check the microphone, volume, noise, etc. The way we resolve the "I can't hear you" problem is a repeatable checklist that we run through more or less confidently, so why can't we implement that checklist as an algorithm that takes care of those problems automatically?
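The checklist idea could be sketched roughly like this. To be clear, this is hypothetical: it assumes the client can play a known probe sample and somehow obtain the signal that came back (which, as pointed out below, is the hard part), and the function names, thresholds, and diagnostic messages are all made up for illustration:

```python
import math

def rms(samples):
    # Root-mean-square level of a signal, as a crude volume estimate.
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def normalized_peak_correlation(sent, received):
    # Slide the probe over the captured signal and keep the best
    # normalized match (1.0 = perfect copy, ~0 = unrelated signal).
    n = len(sent)
    sent_norm = math.sqrt(sum(s * s for s in sent))
    best = 0.0
    for lag in range(len(received) - n + 1):
        window = received[lag:lag + n]
        w_norm = math.sqrt(sum(w * w for w in window))
        if w_norm == 0:
            continue
        dot = sum(a * b for a, b in zip(sent, window))
        best = max(best, dot / (sent_norm * w_norm))
    return best

def diagnose(sent, received, silence_rms=0.01, match_threshold=0.8):
    # Automate two steps of the "can you hear me" checklist:
    # is there any signal at all, and is it our probe sample?
    if rms(received) < silence_rms:
        return "no signal: microphone muted or wrong capture device"
    if normalized_peak_correlation(sent, received) < match_threshold:
        return "signal present but probe not recognized: noise or wrong route"
    return "ok: audio path verified"
```

A quick check with synthetic signals, in place of real capture:

```python
probe = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(800)]
print(diagnose(probe, [0.0] * 1600))                                  # no signal
print(diagnose(probe, [0.0] * 400 + [0.5 * s for s in probe] + [0.0] * 400))  # ok
```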



Because your microphone can't hear what's happening in your headphones, and your computer doesn't know whether you're using headphones.


A system that has access to both the microphone and the headphones could coordinate the two based on past experience (i.e., learning). I don't know how much data you would need to make that reliable, though. If you know how the data is going to be transmitted, and you assume small network delays, recording yourself before a video session might help a great deal. It's probably the same problem TV broadcasters have: producing quality real-time content is hard work.



