I find it immensely helpful for boilerplate code and ‘autofilling’ trivial things.
It’s like next-generation IntelliSense.
On average, instead of spending about 5 mins copying and modifying code, I can spend about 2-3 mins typing out a comment and letting the LLM generate the code.
It gets the code 100% correct about 90% of the time; the other 10% of the time, it’s about 90% correct.
I was never very good in high school. If I tried very hard, my best score would always be close to 80%. I never liked being told how and what to study, that kind of environment never fit well with me.
Uni was a much better fit, being able to study and learn in my own time, and in my own way.
I still remember my first two years at Uni: barely passing almost every test/exam with 50%-60%, despite spending many hours at night studying in the library before going home to do the same. I felt like an imposter in class.
One day, after struggling so much, I realised that trying to force myself to learn wasn't working. Instead, I tried to understand the material by experimenting with it.
If I was given a piece of code, I would modify it and re-write it from scratch in my own way. Slowly my grades started to improve, but most importantly I began to enjoy Uni much more. I met many people, and found friends to have fun and muck around with.
Looking back now that I'm working full time, it's like a fond memory where I did many embarrassing things with friends.
I'm still early in my career, but almost all my skills/knowledge I've developed came from me experimenting and exploring in my own way in my own time.
I'm not sure if this helps, but that's my rough journey.
For me recently, it was learning about backpropagation with batch normalisation.
I realised that I had many gaps in understanding what and how to partially derive with vectors, and in making sure their dimensions aligned with the 'with respect to' variable.
Most of my exposure was just to 'variables/letters' and not vectors. So thinking in dimensions during derivation caught me off guard.
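The dimension bookkeeping is exactly where batch-norm backprop tends to bite. As a sketch (not the commenter's actual code; `batchnorm_forward`/`batchnorm_backward` are illustrative names), here is the standard backward pass with the shape of each intermediate written out, so every gradient's dimensions match its 'with respect to' variable:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # x: (N, D) batch of N samples with D features; gamma, beta: (D,)
    mu = x.mean(axis=0)                    # (D,)  per-feature batch mean
    var = x.var(axis=0)                    # (D,)  per-feature batch variance
    xhat = (x - mu) / np.sqrt(var + eps)   # (N, D), broadcasts (D,) over rows
    out = gamma * xhat + beta              # (N, D)
    return out, (xhat, var, gamma, eps)

def batchnorm_backward(dout, cache):
    # dout: (N, D) upstream gradient; returns gradients matching each input's shape
    xhat, var, gamma, eps = cache
    N = dout.shape[0]
    dbeta = dout.sum(axis=0)               # (D,)  sum over batch to match beta
    dgamma = (dout * xhat).sum(axis=0)     # (D,)  same reduction to match gamma
    dxhat = dout * gamma                   # (N, D)
    # Mean and variance both depend on every row of x, so their chain-rule
    # terms collapse into batch-wide sums:
    dx = (N * dxhat - dxhat.sum(axis=0)
          - xhat * (dxhat * xhat).sum(axis=0)) / (N * np.sqrt(var + eps))
    return dx, dgamma, dbeta               # (N, D), (D,), (D,)
```

Note that every reduction (`sum(axis=0)`) exists precisely to shrink an (N, D) gradient down to the (D,) shape of the parameter it is taken with respect to.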
Edit: It took me 3 days on and off to figure it out...
One advantage that I have with the M1 Pro 32 GB RAM over my gaming desktop is that I'm able to run large ML models such as Bloom, Whisper, and Stable Diffusion with reasonable performance.
How are you running them so well? What do you use for your device, since CUDA is obviously not supported and 'mps' is not very impressive compared to just about any NVidia GPU, including the aging 1080ti[1]
Out of curiosity what are your desktop specs? I have stable diffusion running quite well on a few different systems of varying spec. That said, it's great to be able to take it on the road with you.
With little effort, I was able to model my personal finance trajectory and build a very useful overview of my life in terms of financials.
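The core of that kind of trajectory modelling is just compounding a balance forward month by month. A minimal sketch (my own illustration, not the tool's implementation; `project_balance` and its parameters are assumptions):

```python
def project_balance(start, monthly_saving, annual_rate, years):
    # Convert an annual return rate to its monthly equivalent, then
    # compound the balance and add savings each month.
    monthly_rate = (1 + annual_rate) ** (1 / 12) - 1
    balance = start
    trajectory = []
    for _ in range(years * 12):
        balance = balance * (1 + monthly_rate) + monthly_saving
        trajectory.append(balance)
    return trajectory
```

Plotting the returned list against time gives the kind of at-a-glance financial overview described above.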
Within the first 5 minutes of using it, I immediately paid for a premium account.
Thanks for your efforts!