prosunpraiser's comments

Reuse is not always necessary - sometimes things are just done for fun and exploration, not for appeasing thirsty VCs and grabbing that market share.


Reinventing the exact same thing and shouting from the rooftops about it is exactly how you appease thirsty VCs and grab that market share.


Why would John Carmack, who is so rich that he does things for shits, giggles, and personal development, give a hoot about what a VC cares about?


Might I interest you in my new startup? It's a bus, but with technology.


Sheep mentality hard at work at companies. Just because Google does it (processes, technologies, systems, etc.), let's also adopt it, without thinking about whether it's relevant in our context and use-cases. I bet the same devs at these firms who ask candidates to traverse a minimum spanning tree would fumble at even the slightest variation of the problem appearing in daily life.

A general rant.


Unfortunately the solution is quite simple: don't release in the EU.


People keep saying this, but I've yet to see a company follow through.


I am from the EU, and we have been getting a lot of things much later and having to use VPNs to get past this. Sometimes even a VPN is not enough, since we need a US phone number etc. I couldn't get access to Claude and OpenAI's advanced voice mode for a long time. It feels very frustrating, since others can already start building with cutting-edge things, giving them a huge advantage.


I have yet to see a case where they didn't also include the UK and other decidedly non-EU states, which makes me think this has little to do with regulation and more to do with managing load during the initial release. Regulation is of course an amazing scapegoat, especially if one intends to convince people against any legislation not written by and for the current front-runners of the industry.


Yes, unfortunately in this world not being "ethically constrained" can give you a big advantage. Still doesn't mean one should not strive to do the right thing.


If your ethics are innately tied to the nuances of fringe intellectual property law in a brand new domain, it may be time to relax some things.

That is to say, a lax perspective on intellectual property law can be more or less ethical than one which supports the undeterred corporate consolidation of power behind those laws.

Those in control do not have a monopoly on ethical goodness.


In this case the potentially infringed parties would be YouTube creators, who in most cases are not corporations or backed by them.

(Personally, I happen to think it's fine to train a commercial AI on content from the internet like this, but the framing of your argument feels misleading or even manipulative: "Copyright" -> "IP law" -> "big business" -> "bad vibes", when for cases like this the affected people are almost all small individuals and the responsible entities are almost all big corporations.)


It is subjective.


It looks like opportunity.


Meta and Apple both declined to release AI tools in the EU due to regulations:

https://www.cnet.com/tech/services-and-software/meta-follows...


“Delayed” seems more accurate. At least Apple already released some AI features and plans to release more later this year.

> Mac users in the EU can access Apple Intelligence in U.S. English with macOS Sequoia 15.1. This April, Apple Intelligence features will start to roll out to iPhone and iPad users in the EU.

https://www.apple.com/uk/newsroom/2024/10/apple-intelligence...


I live in the EU, and pretty much every major AI tool either gets delayed or is never released here. It is terrible.


Another anecdote: I don't mind. I'm happy to let the rest of the world beta test AI; I'll wait and enjoy more transparency (and perhaps a more polished experience).

Then again, I'm not a pro AI user, and I was never in a situation where I wanted to use an AI product and it wasn't available.


Or just lie


Short term: fine; long term: fines.


You prove the point that these are just token generation machines whose output is pseudo-intelligent. They are probably not there yet to be blindly trusted.


More to the point: I wouldn't blindly trust 99% of humans, let alone a machine.

Though to be fair, we will hopefully quickly approach a point where a machine can be trusted much more than a human being, which will be fun. Don't cry about it; it's our collective fault for proving that meat bags can develop ulterior motives.


This. I use the same workflow. Also, I am too lazy to write and maintain notes, so I just use Joplin for tags / metadata and Typora (a WYSIWYG editor for Markdown). Thinking and taking notes while typing in Typora is a godsend. Best $15 I have spent.

Writing todos as checkable list items in Markdown, checking them off one by one, and tracking notes in the same .md file under different headings works like a charm. No more Jira / Excel / context switching.
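For a concrete example, a day's file might look like this (the tasks and notes are made up):

    ## Todo
    - [x] Fix flaky login test
    - [ ] Review the batching PR

    ## Notes
    - Login test was flaky because of a hardcoded timeout, not the auth change.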


Not sure if this helps, but this is from tinkering with Mistral 7B on both my M1 Pro (10-core, 16 GB RAM) and WSL 2 with CUDA (Acer Predator 17, i7-7700HK, GTX 1070 Mobile, 16 GB DRAM, 8 GB VRAM):

- Got 15-18 tokens/sec on WSL 2, slightly higher on the M1. Think of that as roughly 10-15 words per second. Both were using the GPU. Haven't tried CPU on the M1, but on WSL 2 it was low single digits: too slow for anything productive.

- Used Mistral 7B via the llamafile cross-platform APE executable.

- For local use, I found that increasing the context size increased RAM usage a lot, but it's fast enough. I am considering adding another 16x1 or 8x2.

Tinkering with building a RAG with some of my documents using the vector stores and chaining multiple calls now.


how does 7B match up to Mixtral 8x7B?

coming from chatgpt4 it was a huge breath of fresh air to not deal with the judeo-christian biased censorship.

i think this is the ideal localllama setup--uncensored, unbiased, unlimited (only by hardware) LLM+RAG


I haven't seen how it fares on uncensored use-cases, but from what I can see, Q5_K variants of Mistral 7B are not very far from Mixtral 8x7B (the latter requires 64 GB of RAM, which I don't have).

Tried open-webui yesterday with Ollama for spinning up some of these. It’s pretty good.


Execution traces include a goroutine profile, which outputs the count of goroutines as well. That could be used for an alert, though it would require parsing the trace output. They recently made some changes to provide a structured API over trace data; maybe use that?
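As a minimal sketch of the alerting half (skipping trace parsing entirely and just polling runtime.NumGoroutine(); the threshold and interval here are made up):

    // Alert when the goroutine count crosses a threshold, by polling
    // runtime.NumGoroutine() instead of parsing execution trace output.
    package main

    import (
        "log"
        "runtime"
        "time"
    )

    func watchGoroutines(threshold int, every time.Duration) {
        for range time.Tick(every) {
            if n := runtime.NumGoroutine(); n > threshold {
                // A real system would emit a metric or page someone here.
                log.Printf("goroutine count %d exceeds threshold %d", n, threshold)
            }
        }
    }

    func main() {
        go watchGoroutines(10_000, 10*time.Second)
        select {} // block forever so the watcher keeps running
    }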


(This is from when I last evaluated Cadence - which is now temporal.io. The state must have changed since then.)

Workflows are not zero-cost; they have their own tradeoffs compared to microservices. State management / bootstrapping logic becomes non-trivial; execution order, though easier to visualize, is not fully deterministic; workflows are not as well suited for request-response style replies due to the latency involved in total execution (though I think they are great alternatives to async / background jobs); and shared underlying infrastructure means increased chances of SPOFs.

The state must have improved a lot since then. Also, adopting anything new that requires remodelling your application into a different paradigm must be worth the value delivered. For example, modular monoliths became popular because they reduced operational complexity by reducing the number of pieces involved. At the time, that value prop vs. the effort involved was unclear to our teams, IMO.


Temporal is designed to handle lower-latency use cases than data pipeline systems like Airflow. It also recently added a feature called Update, designed for request-response style interactions, which allows communication with Workflows on the order of tens of milliseconds.
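Roughly what that looks like in the Go SDK (just a sketch; the "increment" handler and the counter are made up, and the exact SetUpdateHandler signature may vary across SDK versions):

    // Sketch of a Temporal Update handler in Go; illustrative only.
    package sample

    import "go.temporal.io/sdk/workflow"

    func CounterWorkflow(ctx workflow.Context) error {
        count := 0
        // Callers of the "increment" Update get a synchronous response
        // without waiting for the whole workflow to complete.
        err := workflow.SetUpdateHandler(ctx, "increment",
            func(ctx workflow.Context, delta int) (int, error) {
                count += delta
                return count, nil
            },
        )
        if err != nil {
            return err
        }
        // Keep the workflow running so it can continue serving Updates.
        return workflow.Await(ctx, func() bool { return false })
    }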


There is already a lot of knowledge / community support around microservices, so people find it hard to gravitate towards workflows, which don't yet have the same ecosystem. Same thing as GraphQL vs REST.


In enterprises where there is poor documentation and lots of tribal knowledge, noting down just those two lines for every new piece of information is a quick way to close the knowledge gaps created by exactly that.

It is exactly due to such disdain for documentation that most people find it hard to navigate large codebases. Documentation is not just for pedantically noting things down; it is also a thinking tool and a temporary thought buffer.

And no one pushes code to production to validate assumptions. Not if you have 100 clients and you are not doing CD.


> And no one pushes code to production to validate assumptions.

I always have, with only the rare issue occurring, and have by and large been rewarded for it.

> Documentation is not just for noting things down pedantically but also a thinking tool and a temporary thought buffer.

Sure, but why not treat your codebase as a temporary thought buffer? I do, and it's consistently worked and improved every system I've worked with. No teammate has ever complained about this strategy. If anything, it's typically adopted by teammates.

E.g., for “oh, this list is never modified”: rather than taking a 2-line note, I'll push a code change to use an ImmutableList.

The knowledge is now documented and enforced by the compiler; if the type changes, people talk about it in code review; and it allows me to keep improving that part of the code base months later without code conflicts or needing to re-make my changes from a notes file.

1-2 line refactors: strictly better than 1-2 line notes. This scales to any number of lines where the size of the note is equivalent to the size of the possible code change.

Meanwhile, please do take notes and document when it’s at least 10x shorter to grok than the current code or possible code change.

