Hacker News | swisniewski's comments

I use Cosmic on a DGX Spark, as my daily driver, and it works pretty well.

They don’t have a Pop!_OS ISO for arm64, but they do have an arm64 Debian repo. So I just took DGX OS (what Nvidia ships on the device), added the Pop!_OS “releases” repo, and installed cosmic-session.

It works like a charm and provides a super useful tiling experience out of the box.

This is replacing my M3 Pro as my daily driver and I’ve been pretty happy with it.

I recently upgraded to an ultrawide monitor and find the Cosmic UX to be hands down better than what I get on the Mac with it.

If you want a Linux desktop with the productivity boost of a tiling window manager and a low learning curve, it’s pretty good.


Just as an FYI - I just looked at the download page and Pop 24.04 now has an official arm64 ISO (including Arm w/ Nvidia)

Generally with large acquisitions, product integration tends to precede infrastructure integration by years to decades.

Look at GitHub as an example, they were acquired in 2018, and are just migrating to Azure now after 7 years.

Microsoft was shipping integrations with GitHub back in 2018.

This is definitely the case with several Salesforce acquisitions (early product integration, with little, no, or much later infrastructure integration).

So… I predict some level of content integration within a few months.

But infra integration is likely years away.


What infra needs to be integrated? They just get the rights, add the movies and shows to the Netflix CDN, turn off whatever b.s. infrastructure Warner were previously operating.


look up the parent tree... There was this statement:

> From a Hacker News perspective, I wonder what this means for engineers working on HBO Max. Netflix says they’re keeping the company separate but surely you’d be looking to move them to Netflix backend infrastructure at the very least.

The HBO Max service has something like 128M subscribers. This is < half of the 301M subscribers Netflix has, but is still a large number.

Certainly there's going to be some duplication, but it would be unwise to suddenly disrupt the delivery vehicles that you have 128M paying customers using in favor of a different delivery vehicle.

So, you should expect all the various HBO Max clients in existence to continue working for at least 5 years after the acquisition closes, if not longer.

Suddenly turning that off and saying "go use the Netflix app" wouldn't be good.

In any case, moving all the WB content onto the Netflix CDN and making it available on all the Netflix clients is "product integration", not "infrastructure integration". You are likely to see that very quickly. Weeks to months after the acquisition closes.

But, getting rid of all the HBO Max client software that talks to the HBO Max Servers running in whatever data center or cloud WB is using, and downloading video from whatever CDN WB has, and all the associated infra stuff, that's infra integration and it won't happen for a while. I think that will take 5-10 years.


Sigh…

Saying the code doesn’t have conditions or booleans is only true if you completely ignore how the functions being called are being implemented.

Cycle involves conditionals, zip involves conditionals, range involves conditionals, array access involves conditionals, the string concatenation involves conditionals, the iterator expansion in the for loop involves conditionals.

This has orders of magnitude more conditionals than normal fizz buzz would.

Even the function calls involve conditionals (python uses dynamic dispatch). Even if call site caching is used to avoid repeated name lookups, that involves conditionals.

There is not a line of code in that file (even the import statement) that does not use at least one conditional.

So… interesting implementation, but it’s not “fizzbuzz without booleans or conditionals”.


Not sure why this got downvoted.

The technique could be implemented without conditionals, but not in python, and not using iterators.

You could do it in C, and use & and ~ to make the cyclic counters work.

But, like I mentioned, the code in the article is very far from being free of conditionals.
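To make that concrete, here is a rough sketch of the bit-rotation idea, written in Go to match the other code on this page (the same shifts and masks port straight to C); the ring layout is just illustrative, and of course the range loop and slice indexing still hide branches, which is the whole point above:

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        // One-hot "ring counters": the high bit of each ring lights up on every
        // 3rd / 5th value, so divisibility falls out of shifts and masks alone.
        m3, m5 := uint(1), uint(1)
        for i := range 100 { // Go 1.22+ range-over-int; the loop test is hidden, like Python's iterator protocol
            n := i + 1
            fizz := (m3 >> 2) & 1 // 1 exactly when n is a multiple of 3
            buzz := (m5 >> 4) & 1 // 1 exactly when n is a multiple of 5
            fmt.Println([]string{strconv.Itoa(n), "Fizz", "Buzz", "FizzBuzz"}[fizz|buzz<<1])
            // rotate each ring one position for the next n
            m3 = ((m3 << 1) | (m3 >> 2)) & 0x7
            m5 = ((m5 << 1) | (m5 >> 4)) & 0x1f
        }
    }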


I didn't down vote, but it does seem like unnecessary pedantry. Maybe it could be better phrased as "without writing any conditionals"


I think that’s kind of vacuously true. Like, good luck writing this in any language where the resulting assembler all the way at the bottom of the runtime has zero branch operations. And I bet even then that most CPUs’ microcode or superscalar engine would have conditionals underlying the opcodes.

I’d settle for just not writing conditionals in the user’s own code. Range doesn’t have to be implemented with branches. Hypothetically, Python could prefill a long list of ints, and range could return the appropriate slice of it. That’d be goofy, of course, but the main idea is that the user doesn’t know or really care exactly how range() was written and optimized.


Another interesting thing… random data has a high likelihood of disassembling into random instructions, but there’s a low probability that such instructions (particularly sequences of such instructions) are valid semantically.

For example, there’s a very high chance a single random instruction would page fault.

If you want to generate random instructions and have them execute, you have to write a tiny debugger, intercept the page faults, fix up the program’s virtual memory map, then re-run the instruction to make it work.

This means that even though high entropy data has a good chance of producing valid instructions, it doesn’t have a high chance of producing valid instruction sequences.

Code that actually does something will have much much lower entropy.

That is interesting…even though random data is syntactically valid as instructions, it’s almost certainly invalid semantically.


There's a much simpler way to do this:

If you want your library to operate on bytes, then rather than taking in an io.Reader and trying to figure out how to get bytes out of it the most efficient way, why not just have the library take in []byte rather than io.Reader?

If someone has a complex reader and needs to extract to a temporary buffer, they can do that. But if like in the author's case you already have []byte, then just pass that in rather than trying to wrap it.

I think the issue here is that the author is adding more complexity to the interface than needed.

If you need a []byte, take in a []byte. Your callers should be able to figure out how to get you that when they need to.

With go, the answer is usually "just do the simple thing and you will have a good time".


The author is trying to integrate with the Go stdlib, which requires you to produce images from an 'io.Reader'. See https://pkg.go.dev/image#RegisterFormat
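For reference, the hook in question looks like this; the package name, format name, and magic string below are made up, but the function shapes are what RegisterFormat requires:

    package myformat

    import (
        "errors"
        "image"
        "io"
    )

    // Placeholder decoders; a real implementation would parse r here. The point
    // is the required shape: both hooks take an io.Reader, not a []byte.
    func decode(r io.Reader) (image.Image, error) {
        return nil, errors.New("myformat: not implemented")
    }

    func decodeConfig(r io.Reader) (image.Config, error) {
        return image.Config{}, errors.New("myformat: not implemented")
    }

    func init() {
        // "MYF?" is a made-up magic prefix that image.Decode sniffs to pick this decoder.
        image.RegisterFormat("myformat", "MYF?", decode, decodeConfig)
    }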

Isn't using the stdlib simpler than not for your callers?

I also often hear gophers say to take inspiration from the go stdlib. The 'net/http' package's 'http.Request.Body' also has this same UX. Should there be `Body` and `BodyBytes` for the case when your http request wants to refer to a reader, vs wants to refer to bytes you already have?


The BodyBytes hypothetical isn't particularly convincing because you usually don't actually have the bytes before reading them, they're queued up on a socket.

In most cases I'd argue it really is idiomatic Go to offer a []byte API if that can be done more efficiently. The Go stdlib does sometimes offer both a []byte and Reader API for input to encoding/json, for example. Internally, I don't think it actually streams incrementally.
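Concretely, that's this pair of entry points in encoding/json; a caller with bytes in hand uses one, a caller with a stream uses the other:

    package example

    import (
        "bytes"
        "encoding/json"
    )

    // decodeBoth is a throwaway illustration of the two APIs side by side.
    func decodeBoth(data []byte) (fromBytes, fromReader map[string]any, err error) {
        if err = json.Unmarshal(data, &fromBytes); err != nil { // []byte API
            return
        }
        err = json.NewDecoder(bytes.NewReader(data)).Decode(&fromReader) // io.Reader API
        return
    }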

That said I do see why this doesn't actually apply here. IMO the big problem here is that you can't just rip out the Bytes() method with an upcast and use that due to the wrapper in the way. If Go had a way to do transparent wrapper types somehow, this would possibly not be an issue. Maybe it should have some way to do that.


> The BodyBytes hypothetical isn't particularly convincing because you usually don't actually have the bytes before reading them, they're queued up on a socket.

Ah, sorry, we were talking about two different 'http.Request.Body's. For some weird reason both the `http.Client.Do`'s request and `http.Server`'s request are the same type.

You're right that you usually don't have the bytes for the server, but for the client, like a huge fraction of client requests are `http.NewRequestWithContext(context.TODO(), "POST", "api.foo.com", bytes.NewReader(jsonBytesForAPI))`. You clearly have the bytes in that case.

Anyway, another example of the wisdom of the stdlib, you can save on structs by re-using one struct, and then having a bunch of comments like "For server requests, this field means X, for client requests, this is ignored or means Y".


Thinking about that more though, http.Client.Do is going to take that io.Reader and pipe it out to a socket. What would it do differently if you handed it a []byte? I suppose you could reduce some copying. Maybe worth it but I think Go already has other ways to avoid unnecessary copies when piping readers and writers together (e.g. using `WriterTo` instead of doing Read+Write.)


> If body is of type *bytes.Buffer, *bytes.Reader, or *strings.Reader, the returned request's ContentLength is set to its exact value

Servers like to know Content-Length, and the package already special-cases certain readers to effectively treat them like a `[]byte`.

Clearly it does something differently already.

Also, following redirects only works if you can send the body multiple times, so currently whether the client follows redirects or not depends on the type of the reader you pass in... if you add a logging interceptor to your reader to debug something, suddenly your code compiles but breaks because it stops following redirects, ask me how I know.


In this case, there is not any functionality you can't get through other means: You can set GetBody and the content length header manually, which is what you probably wound up doing if I had to guess (been there too, same hat.) I think Go does this mainly to make basic usage more convenient. Unfortunately, it creates an unnecessarily subtle footgun in return.
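Roughly what the manual workaround looks like, assuming a hypothetical loggingReader wrapper like the one described upthread:

    package example

    import (
        "bytes"
        "io"
        "log"
        "net/http"
    )

    // loggingReader stands in for the debugging wrapper from the parent comment.
    type loggingReader struct{ r io.Reader }

    func (l *loggingReader) Read(p []byte) (int, error) {
        n, err := l.r.Read(p)
        log.Printf("read %d bytes (err=%v)", n, err)
        return n, err
    }

    // newLoggedRequest restores what the client would have done automatically for a
    // bare *bytes.Reader body: a known ContentLength and a GetBody for redirect replays.
    func newLoggedRequest(url string, payload []byte) (*http.Request, error) {
        req, err := http.NewRequest("POST", url, &loggingReader{r: bytes.NewReader(payload)})
        if err != nil {
            return nil, err
        }
        req.ContentLength = int64(len(payload))
        req.GetBody = func() (io.ReadCloser, error) {
            return io.NopCloser(&loggingReader{r: bytes.NewReader(payload)}), nil
        }
        return req, nil
    }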

Maybe Go 2 will finally do something about this. I would really like some (hopefully efficient) way to make "transparent" wrapper types that can magically forward methods.


The request body on the client does a lot of other things besides being read once (an io.Reader can only be read once).

There's Content-Length, and there's also the need to read it multiple times in case a redirect happens (so the same body needs to be sent again when being redirected).

As a result, the implementation in stdlib would check a few common io.Reader implementations (bytes.Buffer, bytes.Reader, strings.Reader) and make sure it stores something that can be read multiple times (if it's none of the 3, it's read fully into memory and stored instead).


This is the same basic reply as the other one but my thoughts are roughly the same. The only comment I have, aside from what I replied on the sibling comment (this just being another case of wrappers not being transparent biting us in the ass), is that they could've done this in a more generic way than they did, at the cost of requiring more interfaces.


Yea I saw your other reply later and agree on most of it. But I'd say there's a balance between simplicity of the API and more specific cases. For example they could make an optional api on io.Reader to provide size info, and maybe another optional api on io.Reader to make it readable more than once, etc. But at the same time, if you have all that info, that _usually_ means you already have either a []byte or string, and you would most likely use one of the 3 types to convert that into an io.Reader, so that special handling is enough without adding more public apis, and the go team is notoriously conservative when adding new public apis.


It is, but one of the virtues of the Go ecosystem is that it's also often very easy to fork the standard library; people do it with the whole TLS stack all the time.

The tension Ted is raising at the end of the article --- either this is an illustration of how useful casting is, or a showcase of design slipups in the standard library --- well, :why-not-both:. Go is very careful about both the stability of its standard library and the coherency of its interfaces (no popen, popen2, subprocess). Something has to be traded off to get that; this is one of the things. OK!


> people do it with the whole TLS stack all the time.

It's the only way to add custom TLS extensions.


Adding custom TLS extensions plays badly when the standard library implements them.


How does using the stdlib internally simplify things for callers? And what does that have to do with taking inspiration from the stdlib?

On the second point, passing a []byte to something that really does not want a streaming interface is perfectly idiomatic per the stdlib.

I don’t think it complicates things for the caller if the author used a third party decoding function unless it produced a different type besides image.Image (and even then only a very minor inconvenience).

I also don’t think it’s the fault of the stdlib that it doesn’t provide high performance implementations of every function with every conceivable interface.

I do think there’s some reasonable critique to be made about the stdlib’s use of reflection to detect unofficial interfaces, but it’s also a perfectly pragmatic solution for maintaining compatibility while also not having perfect future knowledge to build the best possible interface from day 0. :shrug:


Because it forces the reader to read data into a temporary buffer in its entirety. If the thing this function is trying to do doesn't actually require it to do its job, that introduces unnecessary overhead.


What? Where else would it be?

It’s either in the socket (and likely not fully arrived) or … in a buffer.

Peek is not some magic; it is itself a temporary buffer.

Beyond that, I keep seeing people ask for a byte interface. Has anybody looked at the io.Reader interface???

    type Reader interface {
        Read(p []byte) (n int, err error)
    }

You can read as little or as much as you would like, and you can do this at any stage of a chain of readers.
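To make that concrete (bufio being the usual way to get a peek-able reader; the sniff function below is just an illustration):

    package example

    import (
        "bufio"
        "fmt"
        "io"
    )

    // sniff shows both points: Peek is just a view into bufio's internal buffer,
    // and Read hands you however much you ask for per call.
    func sniff(r io.Reader) error {
        br := bufio.NewReader(r)
        magic, err := br.Peek(8) // look at the first 8 bytes without consuming them
        if err != nil {
            return err
        }
        fmt.Printf("magic: %x\n", magic)

        chunk := make([]byte, 4096)
        n, err := br.Read(chunk) // then read in whatever sized chunks you like
        fmt.Printf("read %d bytes (err=%v)\n", n, err)
        return err
    }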


You are still doing a copy, and people want to avoid the needless memory copy.

If you are decoding a 4 megabyte jpeg, and that jpeg already exists in memory, then copying that buffer by using the Reader interface is painful overhead.


Getting an io.Reader over a byte slice is a useful tool, but the primary use case for io.Reader is streaming stuff from the network or file system.

In this context, you can either have the io.Reader do a copy without allocating anything (take in a slice managed by the caller), or allocate and return a slice. There isn't really a middle ground here.


And you are going to work on all 4 MB at a time? Even if you were to want to plop it on a socket you would just use io.Copy, which adds no overhead, as no matter what you are still always going to copy bits out to place them in the socket to be sent.


> And you are going to work on all 4 MB at a time?

Yes? Assume you were going to decode the jpeg and display it on screen. I assume the user would want to see the whole jpeg at once.

Consider you are working on a program that processes a bunch of jpegs and runs some AI inference on them:

1. You read the jpegs from disk into memory.
2. You decode those jpegs into RGBA buffers.
3. You run inference on the RGBA buffers.

The current ImageDecode interface forces you to do a memcopy in between steps 1 and 2:

1. You read the jpegs from disk into memory.
2. You copy the data in memory into another buffer because you are using the Reader interface.
3. You decode those jpegs into RGBA buffers.
4. You run inference on the RGBA buffers.

Step two isn't needed at all, and if the images are large, that can add latency. If you are running on something like a Raspberry Pi, depending on the size of the jpegs, the delay would be noticeable.


That's how the leaky abstraction of many stdlib file implementations starts.

Reading into a byte buffer: pass in a buffer to read values, pass in a buffer to write values. Then the OS does the same thing, with its own buffer that accepts your buffer, and then the underlying storage volume has its own buffer.

Buffers all the way down to inefficiency.


Seems pretty crazy to force a bunch of data to be saved into memory all the time just for programming language aesthetic reasons


When you are working with streaming data, you really should be passing around io.Readers if you want any sort of performance out of it.

A []byte requires you to read ALL the data in advance.

And if you still end up with a []byte and need to use an interface taking io.Reader, then you wrap the []byte in a bytes.Buffer, which implements io.Reader.
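For example (bytes.NewReader works equally well; image.Decode here is just a stand-in for whatever io.Reader-taking API you're calling):

    package example

    import (
        "bytes"
        "image"
        "io"
    )

    // decodeInMemory wraps bytes you already hold so they satisfy io.Reader.
    func decodeInMemory(data []byte) (image.Image, error) {
        var r io.Reader = bytes.NewBuffer(data) // bytes.NewReader(data) also works
        img, _, err := image.Decode(r)          // needs the relevant format package imported for its side effects
        return img, err
    }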


100% this, that is the easiest and least error prone way to do it.

Even if the author still insisted on using a single interface, he could also do what he wants by relying on bytes.Buffer rather than bytes.Reader.


A good API should just accept either, e.g. the union of []byte and io.Reader.

Both have pros and cons and those should be for the user to decide.


Nah, a good API doesn’t push the conditionals down. You don’t need to pass a union to let the user decide, you just need to present an API for each (including a generic implementation that monomorphizes into multiple concrete implementations) https://matklad.github.io/2023/11/15/push-ifs-up-and-fors-do...


Ah, but go doesn't have union types.


You can just expose two different functions, one of which takes a byte slice and one of which takes an io.Reader.

Given how the code works (it starts by buffering the input stream), the second function will just be a few lines of code followed by a call to the first.
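Something like this, with DecodeBytes standing in for the real decoder (the names and the placeholder body are just illustrative):

    package imglib

    import (
        "bytes"
        "image"
        "io"
    )

    // DecodeBytes is where the real work would happen, directly on the in-memory data.
    func DecodeBytes(data []byte) (image.Image, error) {
        img, _, err := image.Decode(bytes.NewReader(data)) // placeholder body so the sketch compiles
        return img, err
    }

    // Decode is the io.Reader variant: a few lines of buffering, then a call to the first.
    func Decode(r io.Reader) (image.Image, error) {
        data, err := io.ReadAll(r)
        if err != nil {
            return nil, err
        }
        return DecodeBytes(data)
    }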

Perfect example of how complex type systems can lead people to have unnecessarily complex thoughts.


One option would be to accept an interface{} and then switch on the type.


It's frightening how quickly the answer in golang becomes "downcast to interface{} and force type problems to happen at runtime".


You don’t need to downcast to interface, io.Reader is already an interface, and a type assertion on an interface (“if this io.Reader is just a byteslice and cursor, then use the byteslice”) is strictly safer than an untagged union and equally safe with a tagged union.

I wish Go had Rust-like enums as well, but they don’t make anything safer in this case (except for the nil interface concern which isn’t the point you’re raising).


Presumably the questions that have simple and easy answers don't get long comment chains.


An io.Reader is already an interface, so you can already switch on its type.


My comment is explaining how

> A good API should just accept either,e.g. the union of []byte and io.Reader.

could be done. Can you elaborate on how the fact that io.Reader is an interface lets you accept a []byte in the API? To my knowledge, the answer is: you can't. You have to wrap the []byte in an io.Reader, and you are at the exact problem described in the article.


I see what you’re saying. You’re correct that you can’t pass a []byte as an io.Reader, but you can implement io.Reader on a []byte in a way that lets you get the []byte back out via type switch (the problem in the article was that the standard library type didn’t let you access the internal []byte).


An idiomatic way to approach this would be to define a new interface, let's call it Bytes with a Bytes() []byte method.

Your function would accept an io.Reader, but then the function body would type switch and check if the argument implements the Bytes interface. If it does, then call the Bytes() method. If it doesn't, then call io.ReadAll() and continue to use the []byte in the rest of the impl.

The bytes.Buffer type already implements this Bytes() method with that signature. By the rules of Go this means it will be treated as an implementation of this Bytes interface, even if nobody defined that interface yet in the stdlib.

This is an example of Go's strong duck typing.
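A minimal sketch of that approach (readAllFast is a hypothetical helper name):

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "strings"
    )

    // Bytes is the optional interface described above; bytes.Buffer already
    // happens to satisfy it, with no changes to the stdlib.
    type Bytes interface {
        Bytes() []byte
    }

    // readAllFast returns the underlying buffer without copying when the reader
    // exposes one, and falls back to io.ReadAll otherwise.
    func readAllFast(r io.Reader) ([]byte, error) {
        if b, ok := r.(Bytes); ok {
            return b.Bytes(), nil
        }
        return io.ReadAll(r)
    }

    func main() {
        fast, _ := readAllFast(bytes.NewBufferString("zero-copy path"))
        slow, _ := readAllFast(strings.NewReader("io.ReadAll fallback"))
        fmt.Println(string(fast), "/", string(slow))
    }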


That's really interesting, thank you for explaining that! Somehow I've never thought to implement interfaces to describe types that already exist.


Personally I rarely use or even implement interfaces except some other part needs them. My brain thinks in terms of plain data by default.

I appreciate how they compose, for example when I call io.Copy and how things are handled for me. But when I structure my code that way, it’s extra effort that doesn’t come naturally at all.


I use them for testing, where I can have a client that is called by the code under test and can either just run a test CB, send a REST call to a remote server, send a gRPC call to a remote server, or make a function call to an in-process gRPC server object.
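A rough shape of that setup (all names hypothetical): the code under test only sees the interface, and the test callback, REST, and gRPC variants are just different implementations:

    package example

    import "context"

    // Notifier is what the code under test depends on.
    type Notifier interface {
        Send(ctx context.Context, msg string) error
    }

    // cbNotifier is the test double: it just runs a callback supplied by the test.
    type cbNotifier struct {
        cb func(msg string) error
    }

    func (c *cbNotifier) Send(_ context.Context, msg string) error { return c.cb(msg) }

    // The REST client, gRPC client, and in-process gRPC server variants would be
    // further implementations of Notifier, swapped in without the caller changing.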


Yea, mocking is generally the most use I get out of interfaces


Plain data is really convenient for testing though.

I think the reason that your example is so useful is not generally because of testing, but because the thing you're interacting with has operational semantics. It's a good use case for object orientation, so interfaces and mocking are the natural way of testing your logic.


Has anyone else been able to get "secrets" to work?

They seem to be injected fine in the "environment setup" but don't seem to be injected when running tasks against the environment. This consistently repros even if I delete and re-create the environment and archive and resubmit the task.


I see this a lot ("if you are a startup, just ship a monolith").

I think this is the wrong way to frame it. The advice should be "just do the scrappy thing".

This distinction is important. Sometimes, creating a separate service is the scrappy thing to do, sometimes creating a monolith is. Sometimes not creating anything is the way to go.

Let's consider a simple example: adding a queue poller. Let's say you need to add some kind of asynchronous processing to your system. Maybe you need to upload data from customer S3 buckets, or you need to send emails or notifications, or some other thing you need to "process offline".

You could add this to your monolith, by adding some sort of background pollers that read an SQS queue, or a table in your database, then do something.

But that's actually pretty complicated, because now you have to worry about how much capacity to allocate to processing your service API and how much capacity to allocate to your pollers, and you have to scale them all up at the same time. If you need more polling, you need more API servers. It becomes a giant pain really quickly.

It's much simpler to just separate them than it is to try to figure out how to jam them together.

Even better though, is to not write a queue poller at all. You should just write a Lambda and point it at your queue.

This is particularly true if you are me, because I wrote the Lambda Queue Poller, it works great, and I have no real reason to want to write it a second time. And I don't even have to maintain it anymore because I haven't worked at AWS since 2016. You should do this too, because my poller is pretty good, and you don't need to write one, and some other schmuck is on the hook for on-call.

Also you don't really need to think about how to scale at all, because Lambda will do it for you.
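As a sketch of what "point a Lambda at your queue" amounts to (this uses the public aws-lambda-go packages, not the internal poller; the handler body is just a placeholder):

    package main

    import (
        "context"
        "fmt"

        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
    )

    // handle runs once per batch of SQS messages; Lambda owns the polling,
    // batching, retries, and scaling.
    func handle(ctx context.Context, ev events.SQSEvent) error {
        for _, msg := range ev.Records {
            // do the per-message work; returning an error causes the batch to be retried
            fmt.Println("processing", msg.MessageId, msg.Body)
        }
        return nil
    }

    func main() {
        lambda.Start(handle)
    }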

Sure, at some point, using Lambda will be less cost effective than standing up your own infra, but you can worry about that much, much, much later. And chances are there will be other growth opportunities that are much more lucrative than optimizing your compute bill.

There are other reasons why it might be simpler to split things. Putting your control plane and your data plane together just seems like a headache waiting to happen.

If you have things that happen every now and then ("CreateUser", "CreateAccount", etc) and things that happen all the time ("CaptureCustomerClick", or "UpdateDoorDashDriverLocation", etc) you probably want to separate those. Trying to keep them together will just end up causing you pain.

I do agree, however, that having a "Users" service and an "AccountService" and a "FooService" and "BarService" or whatever kind of domain driven nonsense you can think of is a bad idea.

Those things are likely to cause pain and high change correlations, and lead to a distributed monolith.

I think the advice shouldn't be "Use a Monolith", but instead should be "Be Scrappy". You shouldn't create services without good reason (and "domain driven design" is not a good reason). But you also shouldn't "jam things together into a monolith" when there's a good reason not to. N sets of crud objects that are highly related to each other and change in correlated ways don't belong in different services. But things that work fundamentally differently (a queue poller, a control-plane crud system, the graph layer for grocery delivery, an llm, a relational database) should be in different services.

This should also be coupled with "don't deploy stuff you don't need". Managing your own database is waaaaaaay more work than just using DynamoDB or DSQL or Bigtable or whatever....

So, "don't use domain driven design" and "don't create services you don't need" is great advice. But "create a monolith" is not really the right advice.


> This distinction is important. Sometimes, creating a separate service is the scrappy thing to do, sometimes creating a monolith is. Sometimes not creating anything is the way to go.

I think this hits the nail on the head. People are trying to find the "one true way" for microservices vs monoliths. But it doesn't exist. It's context dependent.

It's like the DRY vs code duplication conversation. Trying to dictate that you will never duplicate code is a fool's errand, in the same way that duplicating code whenever something is slightly different is foolish.

Context is everything


The article is bullshit.

AWS has a pretty simple model: when you split things into multiple accounts those accounts are 100% separate from each other (+/- provisioning capabilities from the root account).

The only way cross account stuff happens is if you explicitly configure resources in one account to allow access from another account.

If you want to create different subsets of accounts under your org with rules that say subset a (prod) shouldn’t be accessed by another subset (dev), then the onus for enforcing those rules is on you.

Those are YOUR abstractions, not AWS abstractions. To them, it’s all prod. Your “prod” accounts and your “dev” accounts all have the same prod SLAs and the same prod security requirements.

The article talks about specific text in the AWS instructions:

“Hub stack - Deploy to any member account in your AWS Organization except the Organizations management account."

They label this as a “major security risk” because the instructions didn’t say “make sure that your hub account doesn’t have any security vulnerabilities in it”.

AWS shouldn’t have to tell you that, and calling it a major security risk is dumb.

Finally, the access given is to be able to enumerate the names (and other minor metadata) of various resources and the contents of IAM policies.

None of those things are secret, and every dev should have access to them anyways. If you are using IaC, like Terraform, all this data will be checked into GitHub and accessible by all devs.

Making it available from the dev account is not a big deal. Yes, it’s ok for devs to know the names of IAM roles and the names of encryption key aliases, and the contents of IAM policies. This isn’t even an information disclosure vulnerability.

It’s certainly not a “major risk”, and is definitely not a case of “an AWS cross account security tool introducing a cross account security risk”.

This was, at best, a mistake by an engineer that deployed something to “dev” that maybe should have been in “prod” (or even better in a “security tool” environment).

But the actual impact here is tiny.

The set of people with dev access should be limited to your devs, who should have access to source control, which should have all this data in it anyways.

Presumably dev doesn’t require multiple approvals for a human to assume a role, and probably doesn’t require a bastion (and prod might have those controls), so perhaps someone who compromises a dev machine could get some Prod metadata.

However someone who compromises a dev machine also has access to source control, so they could get all this metadata anyways.

The article is just sensationalism.


The simplest approach is to create an account and install our agent.

That provides actionable insights out of the box.

Our current PoC supports Datadog and CloudWatch, but in principle we can use just about any telemetry provider.

If you drop an email on the website I am happy to meet to answer any questions you might have.


Digital content is not “published” in the same way as traditional content.

Digital content is published by placing data on a computer, connecting that computer to the internet, then running software on that computer that allows software on other computers to connect to it and download that content.

Attempting to ban ads is an attempt to censor the content of that communication. It’s analogous to attempting to ban the things people can say over telephone calls. It would be a clear violation of the 1st Amendment.

The Author’s points about “Dopamine Megaphones” and “tracking” don’t hold up.

Posting something online is not the same as yelling through a megaphone. And restrictions on tracking are about behavior, not speech.

One can outlaw both of those things without unreasonably restricting speech.

But banning ads is absolutely unreasonable restraint of free speech rights.

If I speak on the telephone, I am allowed to hand the phone to someone else for a moment and let them speak. Banning such a thing would be unconstitutional.

Many online ads work in the same way.

Similarly, I can take money from someone, and in response speak things they want me to speak. Restraining that is also a violation of free speech rights.

Just because online ads are horrible, doesn’t mean they can be outlawed without trampling on fundamental rights.

