A week later, when the GetCpuUsage function was added to process.cc, the CloseHandle call at the end of GetProcessMemoryUsage was moved into GetCpuUsage.
The unified diff for that change doesn't make it obvious that code was deleted from GetProcessMemoryUsage (the CloseHandle() call now lives in GetCpuUsage).
Someone ok'd those changes...
Also, MS's PREfast/PREfix static code analysis tools missed that bug as well. I'd update my static-analysis perl script to check for that case, but one of the comments on TFA said ChatGPT will catch the problem, so I'll punt.
Minor nit: the ProcessInfo struct declares its fields in an awkward order for debugging, especially if you're viewing the struct's raw memory. Small fields should go at the start of the struct and large fields at the end. As written, the large name character array occupies most of the space, so to see the values of the small fields you'd probably have to scroll past the character array to locate them.
Minor nit: what's with the magic number 1024? At least in recent versions of process.cc, they've gone from a fixed 1024-element array (each element == MAX_PATH * sizeof(TCHAR) + sizeof(DWORD)*3 bytes) to a dynamically sized array.
Great analysis, but the solution is so simple: RAII.
And this is offered as `unique_process_handle`[1] in the Windows Implementation Library (WIL)[2].
This is a fantastic addition to the Windows developer's toolkit, which everyone should be using from the outset. The Windows C API is unnecessarily verbose (this can be said of any OS's API, but Windows' is particularly bad), and simple mistakes can and will happen.
Let RAII help you, let the mighty close-scope token `}` be your best friend.
> Sometimes I think it would be nice to have limits on resources in order to more automatically find mistakes like this
I was actually fairly disappointed when Visual Studio (not Code) went 64-bit, because I knew its memory usage was then going to be unconstrained. It's still way better than the unapologetic gluttony of Rider, but experience showed it to be a bit leaky over time (tip: pressing Ctrl-Alt-Shift-F12 twice does a full garbage collection https://learn.microsoft.com/en-us/visualstudio/ide/visual-st...)
Also remember that all your references (pointers) double in size, so right off the bat it will use more memory, potentially a lot more depending on how reference-heavy your data is.
Even if it had stayed 32-bit, that wouldn't have helped with memory resources, because experience has shown that if we care about host stability, extensions should run in their own processes and communicate over OS IPC.
Which is something VSCode does, VS has been increasingly adopting, and I bet VS 2026 will double down on.
Which in the end means lots of processes going around to support IDE related tasks.
The code base for VSCode seems to be huge. Between plug-ins, bloat, all the different things it does, and its large number of installations, it seems an ideal target for vulnerabilities and supply-chain attacks.
There are job objects, which are similar to Linux cgroups, including the ability to set a limit on the number of processes. But I'm not sure whether that limit would trip in this case, because the child processes have exited, and the job object flag is specifically called JOB_OBJECT_LIMIT_ACTIVE_PROCESS.
OpenProcess retrieves a handle to an existing process rather than creating one, so it won't be governed by JOB_OBJECT_LIMIT_ACTIVE_PROCESS; the bug here is leaking handles, not processes.