idk man, I work at a big consultant company and all I'm hearing is dozens of people coming out of their project teams like, "yea im dying to work with AI, all we're doing is talking about it with clients"
It's like everyone knows it is super cool but nobody has really cracked the code for what its economic value truly, truly is yet
My personal POV: I've gone back to coding from scratch when coding for creativity. It has been wildly refreshing to handwrite some cool little HTML/CSS/jQuery (sometimes) static websites.
From a professional & learning standpoint? Some good points in this thread:
- Realizing I still need to code from scratch when setting up initial architecture/directory stuff, like kryptoyagi said. GitHub Copilot fails at this constantly (or I suck with it)
- cpach is right... I'd guess a majority (>50%) of software devs worldwide are barely using AI lol. I graduated CS not even 10 years ago and like 7 out of 8 of my friends don't want to use it (or are working on old-ass repos / govt repos)
- aslansahn is spot on, I feel like this has been the most efficient approach. Plan everything out, have it go one-by-one for you, step in manually once in a while to course correct if necessary
tysons is a good example as well. I always think the development along the DC Metro is some of the most impressive, in the sense of 'cities' popping up along the train lines.
I haven't travelled the entire country but I've never seen anything quite like Silver Spring, Bethesda, or as you say, Reston. Super interesting.
super neat & fun idea! as a fellow weekend art-project tinkerer (but probably a bit more amateur), what is your flow for making apps these days? I've been building a few things myself with help from GitHub Copilot but I don't have a lot of other perspectives on what people are using to whip up their neat ideas. Cursor? Replit?
I know "lol" type comments aren't super typical or accepted on HN but I need to reply just to acknowledge that this comment made me legitimately laugh out loud in the workplace LOL (good luck explaining that one to my non-tech coworkers xD)
The Zed team is building their own in-house GUI stack [1] that leverages the computer's GPU with minimal middleware in between. It's a lot of work short-term, but IMO the payoff would be huge if they establish themselves. I imagine they could poke into the user-facing OS sector if their human-agent interaction is smooth. (I have not tried it yet though)
I am very sensitive to input latency and performance but after comparing Zed and VS Code for a while I really couldn't find any reason to stick with Zed. It's been a year or so since I last tried it but VSC just lets me do way more while still, IMO, having a nice, clean UI. I never notice any performance or key input latency with VSC.
Sorry for jumping on this off-topic but I'm a junior engineer hoping to build out some of those small online businesses but I've been a bit unsure of how to go about it. When you say small online businesses do you mean like micro-SaaS kind of things? Or like tangible items? Sorry, just curious :)
Micro-SaaS and digital products. Just figure out a good stack to work with for billing and try to crank out one little thing a week that will be useful to someone.
One of my best projects just sells some pdf files you can submit to the government to achieve a thing you would usually unnecessarily hire a lawyer for.
Another in a similar vein simply offers an easy-to-fill PDF version of a government form that does not exist online, and a nice HTML interface that will help you mostly automatically fill it.
Most of these took less than a day to build and take next to no maintenance. Both of the above earn more than $100k annually.
Just make sure your customers can get in touch with you very easily so you don't end up with broken websites running on autopilot, charging customers for broken stuff. I made that mistake once and ended up having to call a bunch of people to apologise when I discovered what had been happening.
I've never dabbled in audio programming but am both a tech-minded musician and developer. Care to share any starter links, orgs, platforms, tools, or even your own work to point me in the right direction?
( rly, feel free to plug your own work :D )
I'm not familiar with the differences between these two at all, as someone who uses audio plugins but doesn't develop them. What are the main differences, and why is OP claiming that there are far better ways of doing this?
The short answer is that there really aren't. All extant audio plugin APIs/formats are basically ways of getting audio data into and out of a `process()` (sometimes called `Render`) function which gets called by the host application whenever it needs more audio from the plugin.
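To make that concrete, here's a toy sketch of the kind of `process()` callback every format boils down to. All the names here are made up for illustration, not from any real SDK:

```c
#include <stddef.h>

/* Hypothetical simplified plugin "object". Every real format (VST2/3,
   AU, CLAP) reduces to something like this: the host owns the call and
   invokes process() whenever it needs the next block of audio. */
typedef struct {
    float gain; /* a single "parameter" */
} Plugin;

/* The host hands the plugin input samples and a buffer to fill.
   Here the "DSP" is just applying the gain parameter. */
static void process(Plugin *p, const float *in, float *out, size_t nframes)
{
    for (size_t i = 0; i < nframes; i++)
        out[i] = in[i] * p->gain;
}
```

Real APIs pass richer structures (multiple buses, MIDI events, timing info), but the shape of the contract is the same: host calls in, plugin fills buffers.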
Every API has its own pageantry not just around the details of calling `process()`, but also exposing and manipulating things like parameters, bus configuration, state management, MIDI/note i/o, etc. There are differences in all of these (sometimes big differences), but there aren't any real crazy outliers.
At the end of the day, a plugin instance is a sort of "object", right? And the host calls methods on it. How the calls look varies considerably:
VST2 prescribes a big `switch()` statement in a "dispatcher" function, with different constants for the "command" (or "method", more or less).

VST3 uses COM-like vtables and `QueryInterface`.

CLAP uses a mechanism where a plugin (or host) is queried for an "extension" (identified by a string), and the query returns a const pointer to a vtable (or NULL if the host/plugin doesn't support the extension).

AudioUnits has some spandrels of the old Mac "Component Manager", including a `Lookup` method for mapping constants to function pointers (kind of similar to VST2, except it returns a function rather than dispatching to it directly), and then AU also has a "property" system with getters and setters for things like bus configuration, saving/loading state, parameter metadata, etc.
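As a toy illustration of two of those calling styles, here's a VST2-ish integer-opcode dispatcher next to a CLAP-ish string-keyed extension query. These are deliberately simplified stand-ins, not the real SDK headers; opcodes, struct shapes, and the `"demo.params"` id are invented:

```c
#include <stdint.h>
#include <string.h>

/* --- VST2-style: one big dispatcher, commands as integer opcodes --- */
enum { OP_GET_PARAM_COUNT = 1, OP_RESET = 2 }; /* made-up opcodes */

static intptr_t dispatcher(int32_t opcode)
{
    switch (opcode) {
    case OP_GET_PARAM_COUNT: return 3; /* pretend we expose 3 params */
    case OP_RESET:           return 0; /* would clear internal state  */
    default:                 return 0; /* unknown opcodes are ignored */
    }
}

/* --- CLAP-style: the host asks for an "extension" by string id and
       gets back a const vtable pointer, or NULL if unsupported --- */
typedef struct {
    uint32_t (*count)(void);
} params_ext;

static uint32_t param_count(void) { return 3; }
static const params_ext PARAMS_VTABLE = { param_count };

static const void *get_extension(const char *id)
{
    if (strcmp(id, "demo.params") == 0)
        return &PARAMS_VTABLE;
    return NULL; /* extension not supported */
}
```

Same information flows through both; the difference is whether the host selects behavior with an integer in a switch or a string lookup that hands back a table of function pointers.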
I'm not sure why OP is claiming that AU is somehow unopinionated or less limited. It doesn't offer any particular extensibility that the other formats don't.
Historically, there was an API translation layer (called Symbiosis) that VST2 plugins used, but these days the vast majority of plugin devs use a framework like JUCE, which has multiple different "client-side" API implementations (of VST2, VST3, etc.) that all wrap around JUCE's native class hierarchy.

There are a few other frameworks floating around (Dplug for writing in D, a few others in C++), but JUCE is far and away the most common.
Me three, I'd like to know. I'm a producer/mixer who favors AU over VST3 plugins. Not for any opinionated reason; merely because my experience is that they're slightly less error-prone in my DAW.