A DIY smart monocle built from off-the-shelf parts, 3D printing, and custom electronics: 1080p at 60 Hz, 8-11 hrs of battery life on a belt-clip battery + computer combo, Wi-Fi & LTE/cellular, and it can run ML models on device. One-third the weight of the upcoming Apple AR/VR glasses and one-sixth the cost. Just having it working has increased my efficiency a ton without obstructing my vision or requiring me to look at secondary monitors or my phone.
Working on replacing my wireless keyboard and trackpad with some "gloves" so I can use it while on hikes or just generally outside. Then, gonna integrate some custom AR and ML/GPT.
Which part? The glass is an Epson BT-40 cut in half, with some soldering to bypass the need for both eyepieces; this cuts the weight & power consumption by ~40%. It's mounted on a 3D-printed carbon-fiber-nylon frame, similar in form to bone-conduction headphones. The computer is a single-board computer I had lying around, but I'll upgrade to a 12-core, 30 W SBC next week. The battery is one made for video cameras; I gave it a belt clip and strapped the SBC onto it. The SBC has camera & mic inputs as well as GPIO for whatever I want to add.
Congrats on the launch, Shah! I can tell you stayed up late giddy for this one :D. As a peer also building for video creators, I'm delighted to see more efficiency features like this released.
This is the approach I tried first too (fwiw, I also tried a frequency-based one, which has its own, worse drawbacks). But using loudness runs into issues if the source loudness isn't (relatively) even across the entire source media. A single sensitivity setting like this becomes a problem if:
* recording gain is set to automatic and there are sudden changes in the noise floor, like wind (if recorded in 24-bit or lower)
* the crew adjusts gain partway through the recording (a big no-no, but it happens)
* the talent/host moves in and out of the microphone's sweet spot
* the talent/host shifts in a squeaky chair during silence or a transition to silence (or coughs, or breathes loudly, or an ambulance goes by...)
If you apply the edit with a single sensitivity and something like the above is true, it will cut in the wrong places. Unfortunately, the only way to know whether it ever got a cut wrong is to watch the entire show, skipping to each boundary with your full attention.
The single-level approach is what Recut does too; it tries to guess a threshold with clustering, but it's not always perfect. Maybe a better way to go would be a dynamic noise gate or Kalman filtering or something.
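For what it's worth, here's a rough sketch of what I mean by a dynamic gate. This isn't Recut's or anyone else's actual algorithm; the window size, margin, and adaptation rate are made-up numbers, and it assumes you already have plain float PCM samples. The idea is just that the threshold follows a rolling noise-floor estimate instead of being one fixed sensitivity.

    // Sketch of a dynamic noise gate for silence detection (illustrative only).
    // Instead of one fixed sensitivity, the threshold tracks a rolling estimate
    // of the noise floor; windowMs, marginDb, and floorRate are made-up values.
    type Region = { start: number; end: number }; // sample indices

    function detectSilence(
      samples: Float32Array,
      sampleRate: number,
      windowMs = 20,    // analysis window length
      marginDb = 12,    // how far above the estimated floor counts as "signal"
      floorRate = 0.02  // how quickly the floor estimate adapts
    ): Region[] {
      const win = Math.max(1, Math.floor((sampleRate * windowMs) / 1000));
      let floorDb = -60; // initial noise-floor guess
      const silentRegions: Region[] = [];
      let regionStart: number | null = null;

      for (let i = 0; i + win <= samples.length; i += win) {
        // RMS level of this window, in dBFS
        let sum = 0;
        for (let j = i; j < i + win; j++) sum += samples[j] * samples[j];
        const rmsDb = 10 * Math.log10(sum / win + 1e-12);

        // Only windows near the floor update the estimate, so speech doesn't drag it up
        if (rmsDb < floorDb + marginDb) floorDb += floorRate * (rmsDb - floorDb);

        const isSilent = rmsDb < floorDb + marginDb;
        if (isSilent && regionStart === null) regionStart = i;
        if (!isSilent && regionStart !== null) {
          silentRegions.push({ start: regionStart, end: i });
          regionStart = null;
        }
      }
      if (regionStart !== null) silentRegions.push({ start: regionStart, end: samples.length });
      return silentRegions;
    }

It still needs hysteresis and padding around cuts to feel right, but even this handles a slowly drifting floor (wind, room tone changes) better than one global level.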
Vidbase is looking awesome btw! I bet it's going to be huge. It looks like you've paid an insane amount of attention to the details.
This is a super useful insight for us, thank you for sharing. Yeah, another product we're working on is "auto audio leveling", which I hope solves some of this, but we'll see.
And yes, I was very excited. Thank you for checking it out, Van!
The black rectangles exist because the video has a different aspect ratio than your monitor. Not all displays have the same ratio; a monitor with a matching ratio will have no black bars on any side. So, by nature, closed captioning has to be within the video bounds.
I'm aware of the reason for the black bars. The point is that captions that are not part of the video stream (I believe on DVDs they are not) could be presented below the video if that space exists and the user wants them there.
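To put numbers on it, here's a tiny sketch (hypothetical values, not any player's actual layout code) of how much letterbox space a video leaves on a display, i.e. the area where out-of-stream captions could be drawn instead of over the picture.

    // Sketch: height of each letterbox bar when a video is fitted to a display's
    // width. Out-of-stream captions (e.g. DVD subtitle tracks) could be rendered
    // in that space if it's nonzero. Numbers below are illustrative.
    function letterboxBarHeight(
      displayW: number, displayH: number,
      videoW: number, videoH: number
    ): number {
      const scaledVideoH = displayW * (videoH / videoW); // video height after fitting to display width
      return Math.max(0, (displayH - scaledVideoH) / 2); // 0 means no bars above/below
    }

    // A 2.39:1 film on a 1920x1080 monitor leaves ~138 px per bar, which is
    // plenty of room for a line or two of captions below the picture.
    console.log(Math.round(letterboxBarHeight(1920, 1080, 2048, 858)));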
An offline-first web version of AE/Prem is what we're working on @ Vidbase. Internally, it already works better than the desktop apps today, imo. Others are working on similar tools as well.
I no longer see the warning in the readme, but this relies on SharedArrayBuffer, so it is not currently supported on mobile (except Firefox for Android) and in some other browsers: https://caniuse.com/sharedarraybuffer
> Your browser doesn't support SharedArrayBuffer, thus ffmpeg.wasm cannot execute. Please use latest version of Chromium or any other browser supports SharedArrayBuffer.
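If you'd rather fail gracefully than hit that error, a rough pre-flight check looks something like this (a sketch, not ffmpeg.wasm's own API; note that SharedArrayBuffer is also gated on the page being cross-origin isolated via COOP/COEP headers, not just on browser support):

    // Sketch of a feature check before loading ffmpeg.wasm. Browser support is
    // only half of it: SharedArrayBuffer is also hidden unless the page is
    // cross-origin isolated (served with COOP/COEP headers).
    function canRunFfmpegWasm(): { ok: boolean; reason?: string } {
      if (typeof SharedArrayBuffer === "undefined") {
        return { ok: false, reason: "SharedArrayBuffer is not available in this browser/context" };
      }
      if (typeof crossOriginIsolated !== "undefined" && !crossOriginIsolated) {
        return { ok: false, reason: "page is not cross-origin isolated (missing COOP/COEP headers)" };
      }
      return { ok: true };
    }

    const check = canRunFfmpegWasm();
    if (!check.ok) console.warn(`falling back, ffmpeg.wasm can't run here: ${check.reason}`);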
Most of the devs quit WalmartLabs a long time ago (including me, Eran, etc.). Sponsored in the sense that WalmartLabs paid us to use it to build their services, and we developed hapi.js/joi to support our jobs.
I'm genuinely curious if anything produced by Walmart Labs had any sort of "commercial" success or was even adopted within the main Walmart ecosystem. I've certainly heard of hapi, but don't know if it ever gained all that much adoption. Or was it mostly a recruiting tool to make Walmart attractive to a better class of devs?
While I was there, hapi.js was used extensively by the Global eCommerce department and was responsible for fronting the entire mobile API. You can check Eran's blog for stories of how it handled all of Walmart's mobile API traffic, especially the massive thundering herd on Black Friday. It was used on many other projects, including some "big name" ones; however, I don't know whether hapi's involvement was made public for those, so I can't name them directly.
Thanks for the update, and nice work for the community. I didn't end up taking the job at WL, but Hapi greatly increased my technical opinion of them and was valuable for recruiting devs too.
The latter should work but the former requires an extra step for now iirc: https://bun.sh/docs/cli/install#lifecycle-scripts
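If the extra step is about a dependency's lifecycle (postinstall) scripts, that doc says bun skips them by default and you have to allowlist the package in your package.json, something like the below (the package name is just a placeholder):

    {
      "trustedDependencies": ["some-package-with-postinstall"]
    }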