
Regarding the impact of LLMs on non-programming tasks, check out this one:

https://www.ft.com/content/4f20fbb9-a10f-4a08-9a13-efa1b55dd...

    > The bank [Goldman Sachs] now has 11,000 engineers among its 46,000 employees, according to [CEO David] Solomon, and is using AI to help draft public filing documents.

    > The work of drafting an S1 — the initial registration prospectus for an IPO — might have taken a six-person team two weeks to complete, but it can now be 95 per cent done by AI in minutes, said Solomon.

    > “The last 5 per cent now matters because the rest is now a commodity,” he said.

In my eyes, that is major. Junior i-bankers are not cheap -- they make about 150K USD per year minimum (total comp).


This is certainly interesting and I don’t want to dismiss it out of hand, but I sometimes question how reliable these CEO anecdotes are. There’s a lot of pressure to show Wall Street that you’re at the forefront of the AI revolution. That doesn’t mean no company is achieving great results, just that it’s hard to separate the real anecdotes from the hype.


Claims made without supporting documentation by companies with a stake in AI are just that: claims, and probably more PR and marketing than anything.


I mean, that's such a text-heavy area anyway. I'm no expert in filing an S-1, but won't a lot of it be more or less boilerplate plus customisations specific to the offering? Any reasonably advanced model should be able to take you a good chunk of the way. Then iterate with a verifier-type model plus a few people to review; even with iterations, that should definitely shorten the overall time. It seems like such a perfect use case for an LLM -- what am I missing that is hidden in the scepticism of the sibling comments?



