Code quality only matters in context (2019) (adamtornhill.com)
128 points by behnamoh on May 28, 2022 | 74 comments


At some point I started referring to code as expensive/inexpensive instead of good/bad, the metric being "how much money did it cost to first implement and then maintain this feature?". It makes business people pay attention.

I've found that it takes roughly the same time to write expensive and inexpensive code - additional expenses in the former case pile up later on in the form of time spent debugging and/or infrastructure working way too hard for the desired effect.

Unfortunately it takes knowledge and experience to write inexpensive code and it's not rewarded proportionally, so while you might cut expenses with your code by e.g. four, you're likely to be paid at most twice that of a regular developer, provided you get recognition.

Conversely, you can write expensive code and have an average salary - many people go this route, because pretty often it's hard to assign blame should something become expensive a few months later.


Agree. Do the same with meetings. How expensive is this meeting? If the meeting takes an hour, then the cost of the meeting is the sum of the hourly rates of the people participating. Then ask whether the meeting will generate enough additional revenue to pay for itself.
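
A back-of-the-envelope sketch of that arithmetic in Python (the rates here are invented for illustration):

    # Hypothetical fully-loaded hourly rates for six attendees.
    hourly_rates = [95, 80, 80, 70, 70, 60]
    duration_hours = 1.0

    cost = sum(hourly_rates) * duration_hours
    print(f"This meeting costs roughly ${cost:,.0f}")  # -> $455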


How do you quantify the revenue generated from coordination, alignment, transparency and any intrinsic motivation gained by including people in the decision making process?

I agree a lot of meetings are pointless, and not everyone present needs, wants, or maybe even should be there, but I'm struggling to understand the quantifying you're supposedly able to do.


A couple of years ago someone wanted to switch us off of Saucelabs because it was Expensive. I don't know how many 4-10 person meetings we had about it, but it was a farce, and eventually I called them out on how many $5-10k meetings we'd had in order to try to save $25k a year.

The opportunity cost in these situations is usually harder to quantify. But for me it was distracting from cloud infrastructure work that had price charts associated with it, and contracts that also had dollar values associated with deadlines.

I think what annoyed me the most was that I had to point all of this stuff out to finally shut it down. We were in a transitional period and people were trying to evaluate every sacred cow at the same time. That sounds like progress to some, but like a recipe for burnout to me.


Yep, it is crazy. Most employees have no idea how their work impacts the cash flow and revenues of the company they work for.


Any ideas on how to train people in this area?


Ask people how they think their work contributes to the cash flow of the company. Ask them to consider what impact them not working would have on the cash flow. If their salary is greater than the income their work generates, then their work is actually not beneficial to the company. If they interrupt people who do have a positive impact on cash flow (unnecessary meetings), then it is even worse.


Most of the time this quantifying is impossible as you say. But sometimes half a dozen well-paid professionals do spend two hours discussing the colour of a bike shed.


Heh, in a lot of companies that's what is happening most of the time.


Does the meeting move a revenue-generating project forward or solve a customer issue? If not, then it is 100% cost. BS feel-good ideas like “coordination”, “alignment”, “transparency” and pretending to include people in the decision process have zero value unless they result in progress on revenue-generating projects. The more people you include in a decision, the slower you make progress. The sooner you make progress, the sooner you get feedback and learn how to improve. The least productive and unsuccessful companies I have worked for spent most of their time on exactly the BS ideas you mentioned. The most successful companies I have ever worked for spent zero time on “coordination”, “alignment”, etc. They were too busy killing the competition in the marketplace.


I've recently started being included in some meetings I didn't realise were happening. It's night and day compared to the crapshoot planning was beforehand. Now when I suggest solutions, it's based on what the company needs instead of what I think is best for a given problem. Those two align less often than you'd think.


I like this; I've been looking for alternative language to express this to other developers on my team! I've noticed people tend to shut down when you say their code is "bad".


I think this is a great way to summarize how it works out in the real world. I think I’ll adopt that mindset too.


> Or maybe I decide to squeeze in an extra if-statement in already tricky code.

And that's how the long tail becomes the long tail. Nobody touches that part of the repo, because you have made it untouchable. People find working around it easier than understanding and modifying existing code. The mess becomes messier.

Writing clean code is not about introducing big abstractions, large refactors. It's about leaving the place better than you found it.


Sometimes you have a root cause deep in the architecture. As long as that cause exists you will have to work around it. Each workaround will have to be removed if you ever fix the root cause.

I saw this as a developer and had thought it was better to just leave the architecture alone and keep the workarounds as tidy as possible. Then I started working in manufacturing (a production line has many similarities to a running program) and saw the true cost. We fixed a root cause at the beginning of the line, and all the work we had put into working around the original problem was more of a mess to clean up than fixing the root cause.

I'm now a believer in trying to fix the source if possible.

That said, a legacy code base with no tests is less likely to get a structural change from me.


If you are implementing a workaround, then that is a potential opportunity to solve a root problem deep in the architecture.

The point being made is that going in to fix that root cause because you think that there will be a need for workarounds in the future is premature and can cause problems and unnecessarily break working code.


> Each work around will have to be removed if you ever fix the root cause.

Do they? I feel like many workarounds will just hang around, checking for situations like inconsistent state that no longer arise. You might be better off removing checks for conditions that no longer occur (or not...), but it's not exactly urgent.


> Writing clean code is not about introducing big abstractions, large refactors. It's about leaving the place better than you found it.

Writing clean code and leaving the place better than you found it requires time. Time we generally just do not have. There comes a point where things just have to work, and all of your ideas about what is "right" and "clean" have to be set aside to make that happen.


This seems a very short sighted attitude to me, like saying that there is no time for testing, there are too many bugs to fix.


> This seems a very short sighted attitude to me, like saying that there is no time for testing, there are too many bugs to fix.

This is not true at all. The definition of "clean code" is highly subjective and context- and experience-dependent. Your personal opinion and feelings towards a code style are not the same as bugs, nor is a lack of compliance with your personal taste a potential liability similar to not adding a test.


If the business fails because it couldn't get to market in time, who cares if the code quality was good?

Context is critical to knowing when to invest in quality.


At an ISP, when I was much younger, I asked the Technical Director at the coffee machine why we didn't invest in some technical cleanup. He replied that any available finances were much more profitably spent on advertising, which was directly correlated to sales, which were critical to survival. I guess that's why there are two sorts of companies: those that survived, and those with great code quality!


There are two types of programmer.

There is a programmer that reads, absorbs the system as a whole and deals with it as it is.

Then there is the programmer who just wants shit written that can get them most of the way there, and they'll fix the other shit to do what they want eventually.

The software industry is generally run by the latter.

The former are those that see the most value in high quality code, because quality only matters when you can't rewrite the thing without applying a cost function.

Context is not critical to knowing when to invest in quality. That is paying lip service to the fundamental nature of what business programming is.

Running a business staffed with programmers is all about balancing onboarding, spin up, time to contribute, etc, etc.

With high quality architecture that business loop is far more efficient. Costwise, time or money, all of that meta-businessy crap dwarfs the actual implementation of new features.

...And you can never rely on the person you need to be there when the chips are down to stay there when it happens, because Murphy finds a way, no exceptions.

You do it right from the beginning, or you write shitty software. There is no middle.


> The software industry is generally run by the latter.

Obviously. The latter adds value way faster, instead of wasting time "absorbing" stuff he will never need to touch.

Also, the latter understands code is ephemeral and the stuff you're wasting your time trying to "absorb" might not even be around once the next ticket is worked on. Even if it is, it will only need to be worked on if it ever gets in the way of a business requirement.

There are plenty of reasons why "goldplating" is a highly pejorative concept in software development.


So you've introduced the context of large scale mission critical software. Not all software is large scale and not all software is mission critical.

Sometimes a crappy script that saves an administrator 2 hours a day can be a Huge win.

Sometimes lives are on the line and bugs are not acceptable. Other times the cost of a bug is some internal user has to deal with a little frustration.

All investments are about weighing the cost with the potential benefits. Developing software is an investment.


> This seems a very short sighted attitude to me, like saying that there is no time for testing, there are too many bugs to fix.

It is. The priority for a programmer goes: make it work -> make it fast -> make it clean. And you either have time for all three of those, or you don't. Generally in reality, though, when dealing with business needs and product managers, the pipeline becomes: make it work -> alright, now make this work -> alright, now also make this work.


I would argue that for most code, making it clean should be a priority over making it fast (optimizing for simplicity and readability over algorithmic performance).

It obviously depends on the application.


It's distressing how often I see this argument. I've been a proponent of keeping code tidy and tech debt under control for my whole career and all I've ever seen from other developers who do the same is much faster sustained development and far fewer bugs that waste time later. It's the teams who have let tech debt spiral out of control and jump from one quick change to another that I see drifting towards constant firefighting and never having time to do things properly. The argument made in defence of their abysmal performance when that happens is always some vague claim about needing to ship for business reasons at some important moment in the past and yet never considers that there will probably be many more important moments in the future (at least if the business is going to survive and prosper). I'm still waiting to see a startup fail because its developers wrote good code in the early stages.


Nonsense. You have to spend that time, either once now or 9 times later. You just need to be more firm about using it.


> Time we generally just do not have.

Am I the only one who thinks this old trope is a fallacy in most situations?


maybe it depends on how you are being managed...

if you're constantly ticketed and measured on an hourly basis, there really isn't "time" available outside of what was allocated to you

on the other hand, if your workload is a collaboration, then you have the opportunity to negotiate "time" to fix issues that are slowing you down (compile times, test iteration, debugging, tooling), which will buy you more time (for features/improvements) later


There comes a point where it's impossible to leave code in a better state without a major refactor.

The choice is between writing a few ugly if statements, making the code a little worse, or spending 10x the time refactoring, hoping you don't introduce any regressions.


> The choice is between writing a few ugly if statements, making the code a little worse, or spending 10x the time refactoring, hoping you don't introduce any regressions.

Those few ugly if statements might make the code only a little worse and take 1/10 of the time it would take to do the job properly, if the code was previously good. However, the trouble with technical debt is that the interest can compound rapidly. Your random set of if statements, combined with three other developers' random sets of if statements from previous hacky changes (two of them in other parts of the code that the part you're modifying now implicitly and surprisingly depends on as a result), could be a story with a very different ending.


> And that's how the long tail becomes long tail.

Not really.

Developers should only touch production code if there is a good business reason to touch it, whether fixing a bug or adding a feature. If there is no good reason to touch a bit of code, you should not be touching it. Otherwise you're just adding noise to the audit trail, perhaps along with bugs in otherwise perfectly fine code, without any justification.

The long tail is a long tail because there are no bugs nor reasons to mess around in those parts of the code. Feeling adventurous is not a good reason to mess with it. If you have to implement a feature or fix a bug, complaining that it's old code won't make things go away.


> If you have to implement a feature or fix a bug, complaining that it's old code won't make things go away.

sure, but you might decide to work on a different feature/bug because this part of the code is too arcane to touch


I’m coming up to almost a decade of programming and have 100% bought into this mindset. Simple understandable code tends to be easier to change and delete. Abstractions and fancy features should have a high bar to introduce them. They can be useful, but is the cost worth it? Usually not, but sometimes it pays off 10x.

Perhaps if timelines for delivering software slows down this mindset will be less advantageous. But in today’s climate this ensures you have more time for design and testing.


I accidentally discovered something similar 10 years ago in my coding. At one point I had crippling procrastination. My mind was protesting. New frameworks, languages, etc. were just becoming "how many nuances can I remember", and I wasn't learning or going anywhere.

To get over the procrastination I would set up a Pomodoro timer, set it to 20 minutes, and just write something, any code. I made it a joke to write the crappiest code I could, to make it fun, as long as it worked. Inline copy pasta, etc. Later on it turned into the goal of the most understandable code, over everything. I would never have done this right out of university, but this was 10 years later. Before, I would come up with a "plan" or "design", but at that point I would just hack something up.

What I found was that, starting with an excellent blank CI/CD project and a good BDD end-to-end testing strategy, I could write "dirty code first" to "just get it done" ... then once I got into an OODA loop, refactor it into amazing products and services while having fun. This reduced my cognitive load and emotional overhead. Coding is actually 5% of the work; the rest is QA, requirements gathering, meetings, UX/UI designs, and overhead.


This was a fun read for really personal reasons. The idea that, basically, bad code no one ever has to touch again is in fact good code, is in fact "better" in a true sense than carefully engineered code accomplishing the same thing, has been a really valuable guiding insight for me in my career. I couldn't remember where I got it though, or if it even had one single source.

Then when he shows the visualization I was like "hey that looks like the d3 script I got out of some git analysis book years ago and still use at every job I work."

It's the same guy! Looks like he productized the scripts distributed with that book, which is nice. I'll definitely try it and push for places I work to pay for that instead of the bundle of customized scripts I've been dragging around for years.

I really endorse that book too! I read it at the right time in my career I think, where I had truly seen some shit and so had the experience to understand the value of that approach, but not so far in that I had become set in my ways.


> The idea that, basically, bad code no one ever has to touch again is in fact good code

How do you know no one would ever have to touch that code again, at the moment of writing it?

Nevertheless, generally I agree that isolated complexity is much better than complexity that spreads everywhere through explicit or hidden dependencies (e.g. global state). So dirty complex code hidden behind a simple API is actually not bad code.
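
A toy sketch of what that isolation looks like (the names here are invented for illustration):

    def parse_price(raw: str) -> float:
        """The simple API: the only thing callers ever see."""
        return _strip_currency(_normalize(raw))

    def _normalize(raw: str) -> str:
        # The hidden mess: vendor-specific workarounds live here,
        # not scattered across every caller.
        return raw.strip().replace(",", "").replace("\u00a0", "")

    def _strip_currency(s: str) -> float:
        return float(s.lstrip("$€£"))

    print(parse_price(" $1,299.00 "))  # 1299.0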


It's less a guideline for writing code than one I use when deciding where to spend my efforts with existing code. I've mostly worked in long-lived codebases of profitable software, so nearly everything is a strong candidate for refactoring off of "quality" alone.

When you find something real blood-curdling but the last commit in that file was three and a half years ago, you just close it and pretend you didn't see. Better to spend the effort somewhere it will definitely benefit someone soon, rather than maybe some day.


This is literally the 'O' in SOLID.

The key idea is to break code into "chunks" that each do one thing.

Then, if you have to add a new feature, it goes into another chunk, instead of editing/modifying existing code.

The same logic applies to system design at different scales, whether fine-scale OOP or coarser-scale (micro)service architecture. The ideal size of an individual "chunk" is somewhat subjective & debatable, of course.

It's like Haskell-style immutable data structures, but applied to writing the code itself.
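
A minimal sketch of the idea (illustrative Python; the domain is made up): new behaviour arrives as a new chunk instead of edits to a working one.

    class Shipping:
        def cost(self, weight_kg: float) -> float:
            raise NotImplementedError

    class Standard(Shipping):
        def cost(self, weight_kg: float) -> float:
            return 4.0 + 0.5 * weight_kg

    # New feature: a new chunk. Standard and quote() stay untouched.
    class Express(Shipping):
        def cost(self, weight_kg: float) -> float:
            return 9.0 + 1.2 * weight_kg

    def quote(method: Shipping, weight_kg: float) -> float:
        return method.cost(weight_kg)  # open for extension, closed for modification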


So there is a script or something with the book that you can run on your Git repo to see a chart of the hotspots?


It's not as neat as that unfortunately. You use this to extract different data from the version control history: https://github.com/adamtornhill/code-maat

Then visualize it however. I have some d3 scripts that came with the book that I've modified and you can track down somewhere on github I'm pretty sure. I mostly use those for demoing it to devs unfamiliar with the techniques though, since it looks cool and is immediately obvious what it's for.

For serious use I dump it into sqlite and use a mix of different scripts and techniques to figure it out. It's been kind of a language playground for me over the years so is in a lot of different languages and is "learning code" in most of them. Cleaning them up and sharing is one of those "maybe some day" things though.
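
The sqlite step itself is tiny, though. Roughly this (a sketch; double-check the code-maat README for the exact git log incantation and CSV columns):

    # Assumes you've already produced a CSV along the lines of:
    #   git log --all --numstat --date=short \
    #       --pretty=format:'--%h--%ad--%aN' --no-renames > git.log
    #   java -jar code-maat.jar -l git.log -c git2 -a revisions > revisions.csv
    import csv, sqlite3

    conn = sqlite3.connect("churn.db")
    conn.execute("CREATE TABLE IF NOT EXISTS revisions (entity TEXT, n_revs INTEGER)")
    with open("revisions.csv") as f:
        rows = [(r["entity"], int(r["n-revs"])) for r in csv.DictReader(f)]
    conn.executemany("INSERT INTO revisions VALUES (?, ?)", rows)
    conn.commit()

    # Top hotspot candidates by change frequency:
    for entity, n in conn.execute(
            "SELECT entity, n_revs FROM revisions ORDER BY n_revs DESC LIMIT 10"):
        print(f"{n:5d}  {entity}")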


While I'm extremely sympathetic to this idea, one question nags at me: Long-tail code can also be long tail because _it was well-written from the beginning_. The author is arguing that long tail code lives in a stable, unchanging corner of the business. This can absolutely be true, but it can also be true that a well-written abstraction may service new needs without needing to be changed. (In practice, this is hard.)


This.


I wonder if you can formalize this a bit more.

Let W(init,DC) be the initial cost (hours/effort) of writing Dirty Code (DC). Let's assume that the code works for the intended purpose and doesn't have any bugs.

Let W(init,CC) be the initial cost of writing Clean Code (CC). You'd expect it to be related to W(init,DC) by some proportion: W(init,CC) = (1+alpha)*W(init,DC).

Then there is a probability p that you will want to modify/extend the code.

Let W(ext,DC) be the amount of effort required to extend the code if the initial code is dirty.

Let W(ext,CC) be the amount of effort required to extend the code if the initial code is clean. I'd expect W(ext,CC) = (1-beta)*W(ext,DC).

Then you can compute E[W(total)|CC] vs E[W(total)|DC]. This will define some trade-off curve based on alpha, beta, p, W(ext,DC) and W(init,DC).

Lots of assumptions here, but I wonder if thinking this through will provide any insights. Obviously, if the probability of extending the code is low, then it always makes sense to write dirty code. If the probability of extending the code is high, then you'll want to write clean code.
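
To make it concrete, plugging in some invented numbers (every parameter here is an assumption, not a measurement):

    def expected_cost(w_init_dc, alpha, beta, p, w_ext_dc):
        w_init_cc = (1 + alpha) * w_init_dc  # clean code costs more up front
        w_ext_cc = (1 - beta) * w_ext_dc     # ...but is cheaper to extend
        e_dc = w_init_dc + p * w_ext_dc      # E[W(total)|DC]
        e_cc = w_init_cc + p * w_ext_cc      # E[W(total)|CC]
        return e_dc, e_cc

    # Say dirty takes 10h, clean 30% more; extension costs 20h if dirty,
    # 60% less if clean.
    for p in (0.1, 0.5, 0.9):
        e_dc, e_cc = expected_cost(10, alpha=0.3, beta=0.6, p=p, w_ext_dc=20)
        print(f"p={p}: E[DC]={e_dc:.1f}h, E[CC]={e_cc:.1f}h")
    # p=0.1: E[DC]=12.0h, E[CC]=13.8h  (dirty wins)
    # p=0.5: E[DC]=20.0h, E[CC]=17.0h  (clean wins)
    # p=0.9: E[DC]=28.0h, E[CC]=20.2h

Setting the two expectations equal gives the break-even probability p = alpha*W(init,DC) / (beta*W(ext,DC)), which is 0.25 for these made-up numbers; above that, clean code wins in expectation.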


I would also include the probability of a bug being introduced while extending the code. Code quality tends to be harder to track as the number of system interfaces expands. Particularly when it coordinates efforts between physical systems, even "clean code" can cause coordination failures, because the context has changed by the extension. The example my mind always goes to is the Ariane V failure; it was a failure of 'clean code' in a new context.[1]

Then there is the severity of the potential failure. So the additional risk should also encompass both the probability of failure when extending code and the severity of that failure.

[1]http://sunnyday.mit.edu/accidents/Ariane5accidentreport.html


> I would also include the probability of a bug being introduced while extending the code.

The number of sprints you spend changing a bit of code might also be a metric worth tracking:

https://www.microsoft.com/en-us/research/wp-content/uploads/...


It's an interesting metric, the "churn" or "hot spots" in code. Why do certain areas and not others exhibit high churn?

I had a quick look through the codebases showcased on the CodeScene site, and across them, the files with most churn tend to have quite generic names (like core, daemon, helper, internalEngine, etc).

It sort of supports my intuitive take on the answer: the "churniest" areas of the code are the ones that were initially difficult to think of in specific terms, i.e., the ones that don't tend to implement one thing, but boundaries between things. They're the catch-alls, the areas where our conceptual abstractions don't fold together as neatly as we'd like.
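
(If you want to eyeball the churniest files in your own repo, a quick-and-dirty version needs nothing more than git and a few lines of Python. A sketch:)

    # First, from inside the repo:
    #   git log --name-only --pretty=format: > files.txt
    from collections import Counter

    with open("files.txt") as f:
        churn = Counter(line.strip() for line in f if line.strip())

    for path, n in churn.most_common(10):
        print(f"{n:5d}  {path}")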


There are five questions I like to ask about any codebase I’m given to work with:

1) suppose I have a bug to fix, how deep should I, on average, go to get to the cause? How many files do I need to jump through to determine a trivial flow of data from a to b?

2) there’s a straightforward feature or edge case, how easy is it to add it?

3) (for dynamic languages) is it possible to get the full list of usages for a particular function/method I’m changing?

4) is it possible to correctly identify and eliminate a portion of code that has been unused for a while?

5) are there areas where no one knows how they work, and which everybody is afraid of touching because nobody is sure what would break and how?

I use these things to determine the code quality. Perceived cleanliness doesn’t affect the quality.


In many big tech companies, people care too much about code quality.

They will nit-pick every part of the code.

But when you want to write an end-to-end test, they would be like "oh no we don't do that here"...


I think the conclusion here is back-to-front.

Code that never needs to be touched is fine. Although the odds are that such code is either completely trivial or subtly wrong (or both).

Code that is touched every day is likely to be fine as well. It should get smoothed out naturally, like a pebble in a stream. If not, you probably already have lots of alarm bells telling you it's a problem without the need for any further analysis. It's not the code quality that matters here so much as the test coverage.

The changes you really need to worry about are to code nobody has touched for 3 years and whoever wrote it no longer works for the company. Especially if that code was written with the mindset of "nobody will ever care about this code".

A better metric for deciding where you should focus the most effort on code quality is not frequency of modification; it is frequency of appearance in the runtime call-graph. Each call probably also needs a multiplier for how deep in the call-stack it was, since that's where unintended consequences of a small change are likely to have the biggest blast radius.
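
A sketch of that metric, assuming you have stack samples from a sampling profiler (all names and numbers here are illustrative):

    from collections import defaultdict

    # Each sample is one call stack, outermost frame first.
    samples = [
        ["main", "handle_request", "parse", "tokenize"],
        ["main", "handle_request", "render"],
        ["main", "handle_request", "parse", "tokenize"],
    ]

    scores = defaultdict(float)
    for stack in samples:
        for depth, fn in enumerate(stack, start=1):
            scores[fn] += depth  # deeper frames get a larger multiplier

    # Highest score = most often on the hot path, deepest in the stack.
    for fn, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{score:6.1f}  {fn}")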


> A better metric for deciding where you should focus the most effort on code quality is not frequency of modification

Frequency of modification is an excellent predictor though:

https://www.microsoft.com/en-us/research/wp-content/uploads/...

and efforts to get good predictions out of attestations of quality have so far not been able to predict as well as that, for sure.

> Code that never needs to be touched is fine. Although the odds are that such code is either completely trivial or subtly wrong (or both).

I'm not convinced of this: I think the longer code lives, the easier it is to convince myself it's probably correct (or at least, correct from the business perspective). Can you explain how you get your odds?

> The changes you really need to worry about are to code nobody has touched for 3 years and whoever wrote it no longer works for the company. Especially if that code was written with the mindset of "nobody will ever care about this code".

I think this is predicated on whether that code needs to be changed at all: You said you think it's likely it will need to be changed, but I don't see why you think that.


> A better metric for deciding where you should focus the most effort on code quality is not frequency of modification, it is frequency of appearance in the runtime call-graph.

On the flip side, a lot of code that isn't common in the main codepath often exists to fix edge case bugs that people have lost context on. In my experience, this code is a landmine with an unknown blast radius; only touch it with a 10-foot pole, and only when you absolutely must.


Something about this rubs me the wrong way. I think the argument that code quality matters less in the tail doesn't account for the fact that you are working on the code right now. By construction, the code you are writing at any point is the most recent code in the code base, so likely in an area that will be touched again soon.

I modulate my code quality a little bit, but for the most part I have one gear (write what I consider "high quality" code) and it has served me well. I find cutting corners tends to lead to bugs or other regrets the next time I have to work in the code.

At a very minimum, writing quality code helps my thought process while I'm working on it and respects the time of my code reviewers who will have to read it.


That's somewhat true. You never create a big ball of mud in one go; it's a process. A process of introducing small "dirty code" changes into an existing code base.

I treat writing clean code as a "kata" or "habit". I feel like once I start writing dirty code more often, it will bleed into the clean code side of things. I will continue to think of clean code as a practice, not a necessity.

Also, there are many small things you would NEVER do even in dirty code, like if statements that are a hundred lines long, or having some unexpected side effects, etc. So if we agree that clean code is very, very subjective, we must agree that "acceptable amount of dirty code" is subjective too.


I was let go once for having too many commits in a PR. When we squash merged… I’ve come to the same conclusion about code quality. If it’s something that others interact with, make it polished. If it’s something only you interact with, make it commented.


> I was let go once for having too many commits in a PR.

You were fired for having too many commits in a PR? That seems like an extreme overreaction unless there's more to the story.


The fundamental performance complaint you can make about a developer is that their PRs are bad… whether that’s too little/too late, too many problems, or too unwieldy to reasonably review.

You don’t fire someone over one incident. But if someone declines to internalize the feedback and continues to make PRs that are very far from acceptable or even reviewable, yes that is grounds for termination.


> You don’t fire someone over one incident. But if someone declines to internalize the feedback and continues to make PRs that are very far from acceptable or even reviewable, yes that is grounds for termination.

I agree; it's just that the way OP phrased it ("I was let go once for having too many commits in a PR", emphasis mine) made it seem like it was just one incident. Unless it was something completely egregious like a 50-commit PR to make a one-line change, I don't see how firing is a reasonable response. Which is why I want to hear the rest of the story :)


I agree. The number of commits it takes to get to a workable/reviewable PR is related to the code quality and ease of change of the codebase. If I made 20-some-odd commits to get to a good working PR (which you compare with develop, or main, or whatnot), you only see the complete change view, not the individual commits.

I think it’s something to discuss. Why did it take so many commits? Is our testing broken? Is our runtime broken? Or is it the fact that we programmed in a lot of hard coded values that only work on the codebase owners machine? Hmmmm…


There definitely has to be; on the other end of the commit-count spectrum, I've seen people being reprimanded for having too few commits. Looks like they had bad management anyway; he might have dodged a bullet there.


An especially important piece of context is the stage the company is at. At the early stage, pre product-market fit, bad code quality can actually be a good signal -- i.e. not merely not bad, but actively better than good code.

Why? It's evidence that the team is moving quickly and is not afraid to change things in the code in less than ideal ways, because they are focused more on just making stuff work on the business end. This is exactly where you want to be at the early stage of a company, you do not want to be precious about doing things that scale - including writing good code and keeping data models clean and fully coherent.


This mindset is also true for many researchers who code.

Professional programmers make fun of research code, but actually the dirty way is desirable, considering that research is about prototyping, tweaking, and working in small groups.


> Professional programmers make fun of research code, but actually the dirty way is desirable, considering that research is about prototyping, tweaking, and working in small groups.

Until, say, a once-in-a-lifetime global pandemic comes along and, facing the prospect of literally millions of people dying, governments turn to those researchers for advice. Then suddenly some program that is thousands of lines long in a single file and reads like an entry to the IOCCC, that has evolved over many years with no real peer review or test strategy or anything resembling formal verification to implement the researcher's calculations, becomes the foundation for major public policy decisions that can profoundly affect the way of life of entire societies for years. At least we all know that hacky code written by non-expert programmers without any tests, review or other verification never has bugs that could cause it to output incorrect results.

Code written by academics in "research" style is fine when the results are only of academic interest but it's insanely dangerous to apply the same low standards if the results are going to be used for something that actually matters.


To that I say: there's dirty code -- then there's researchers' code. It can be a whole new level, far beyond just copy-paste and other superficial sins.


Research code has different priorities.

I have a set of repos I pull down for vivisecting software projects.

They would be horrific to anyone else. I do horrible things to other people's code. Idioms clash, the style and messaging and comments can seem schizophrenic, and you'll run into things that'll send professional coders running, like custom instrumented versions of the language runtime. Everything that can be done wrong in them generally is, and it's tuned for one thing.

Figuring out how to read it, why it works, and where to poke it to change it. I'm one of those people who'll sit down with an entire ecosystem of code and go Dr. Moreau on it.

...Then I turn around and push the well documented, polished, minimum viable changeset into the professional repo, and lock away the horrible atrocities I have wrought far from the eyes of Man, hopefully well enough that only God will eventually pass judgement.


"We developers mostly read code, so let's optimise the writing part by writing bad code that will be slower to read."

Am I the only seeing a problem here?


The author confused cause and effect.

Author: because this code sees a lot of churn, it is important. Reality: this code sees a lot of churn because it is bad.


> this code sees a lot of churn because it is bad

That’s one possibility. It’s also possible that it changes a lot because it’s related to business rules, which are typically more volatile than infrastructure code.



Even if that's the case, the author's point still stands: focusing code quality efforts on this code makes more sense, if it's bad.


everything only matters in context. In general.


(2019)



