Hacker News | tomtom1337's comments

Are the ikea zigbee devices going to stop being sold? Massive shame if so, they are extremely reliable and easy to use.


IKEA's whole smart home ecosystem is presently being overhauled from Zigbee to Thread/Matter, with a product availability gap in the meantime.

https://www.ikea.com/global/en/newsroom/retail/the-new-smart...


Oooh, thank you for sharing! The new product lineup looks interesting, but I echo other concerns here about Thread maybe eventually requiring internet access.


What gap though? Our local IKEA has plenty of lights, smart plugs, etc. available still.


I just bought some spare pieces (remotes, bulbs) just in case


Personally, I find their contact sensors (the tall-ish thin ones) to be quite unreliable. I live in a modest home with plenty of Zigbee devices nearby acting as repeaters, and the contact sensors often stop reporting at random. I’ll pop one off the door, click re-pair on my coordinator and then hit the reset switch on the sensor; back online.

I like them because they can use rechargeable AAA batteries, but if I still have to touch them every few weeks to re-pair them, I’d rather switch to a different brand that is more reliable, even if it uses less ideal battery formats.

That said, the newish INSPELNING plugs in the EU market are fantastic. They report reliably, can handle larger loads, and cost about €10. For that price, it’s hard to complain that they are a bit larger than other options.


Super quick feedback - opening that link on my phone shows me two options next to each other, seemingly with the same name / description (followed by …) and the same price tag. I had to turn my phone sideways to see that there is a Windows and a Mac version.

I think you can afford the extra characters to show the whole page in portrait mode. (iPhone 16 Pro, Safari)

https://imgur.com/a/aTxO3sp


I will change the description. Thank you!


Could you provide a bit more context here? I’m looking at the math example (https://github.com/ohmjs/ohm/blob/main/examples/math/index.h...) and would like to learn a bit more.


Maybe have a look at "PEG: Ambiguity, precision and confusion": https://jeffreykegler.github.io/Ocean-of-Awareness-blog/indi...


If you’re young, new to data science and hoping to get a job in it after some time, then I absolutely recommend learning Python instead of Julia.

But if you are just interested in learning a new language and trying it in data science OR are not currently looking to enter the data science job market, then by all means: Julia is great and in many ways superior to Python for data science.

It’s just that «everyone» is doing data science in Python, and if you’re new to DS, then you should also know Python (but by all means learn Julia too!).


Look, this article is absolutely excellent, and answers your questions. Please read the article before commenting this sort of thing.

As someone who has had to use geopandas a lot, having something which is up to an order of magnitude faster is a real dream come true.


Out of curiosity, what makes a rust library easier to use? Could you expand on that?


He means that he wants our Rust library to be as easy to use as our Python lib. Which I understand, as our focus has been mostly on Python.

It is where most of our userbase is, and it is very hard for us to have a stable Rust API: we have a lot of internal moving parts which Rust users typically want access to (as they like to be closer to the metal), but which have no stability guarantees from us.

In Python, we are able to abstract and provide a stable API.


I understand the user pool comment, but I don’t understand why you wouldn’t be able to have a Rust layer that’s the same as the Python one API-wise.

I say this as a user of neither - just that I don’t see any inherent validity to that statement.

If you are saying Rust consumers want something lower level than you’re willing to make stable, just give them a higher level one and tell them to be happy with it because it matches your design philosophy.


The issue with Rust is that, as a strict language with no function overloading (except via traits) or keyword arguments, things get very verbose. For instance, in Python you can treat a string as a list of columns, as in `df.select('date')`, whereas in Rust you need to write `df.select([col('date')])`. Let's say you want to map a function over three columns; it's going to look something like this:

```
df.with_column(
    map_multiple(
        |columns| {
            let col1 = columns[0].i32()?;
            let col2 = columns[1].str()?;
            let col3 = columns[2].f64()?;
            col1.into_iter()
                .zip(col2)
                .zip(col3)
                .map(|((x1, x2), x3)| {
                    let (x1, x2, x3) = (x1?, x2?, x3?);
                    Some(func(x1, x2, x3))
                })
                .collect::<StringChunked>()
                .into_column()
        },
        [col("a"), col("b"), col("c")],
        GetOutput::from_type(DataType::String),
    )
    .alias("new_col"),
);
```

Not much polars can do about that in Rust, that's just what the language requires. But in Python it would look something like

```
df.with_columns(
    pl.struct("a", "b", "c")
    .map_elements(
        lambda row: func(row["a"], row["b"], row["c"]),
        return_dtype=pl.String
    )
    .alias("new_col")
)
```

Obviously the performance is nowhere close to comparable because you're calling a python function for each row, but this should give a sense of how much cleaner Python tends to be.


> Not much polars can do about that in Rust

I'm ignorant about the exact situation in Polars, but it seems like this is the same problem that web frameworks have to handle to enable registering arbitrary functions, and they generally do it with a FromRequest trait and macros that implement it for functions of up to N arguments. I'm curious if there were attempts that failed for something like FromDataframe to enable at least |c: Col<i32>("a"), c2: Col<f64>("b")| {...}

https://github.com/tokio-rs/axum/blob/86868de80e0b3716d9ef39...

https://github.com/tokio-rs/axum/blob/86868de80e0b3716d9ef39...


You'd still have problems.

1. There are no variadic functions so you need to take a tuple: `|(Col<i32>("a"), Col<f64>("b"))|`

2. Turbofish! `|(Col::<i32>("a"), Col::<f64>("b"))|`. This is already getting quite verbose.

3. This needs to be general over all expressions (such as `col("a").str.to_lowercase()`, `col("b") * 2`, etc), so while you could pass a type such as Col if it were IntoExpr, its conversion into an expression would immediately drop the generic type information because Expr doesn't store that (at least not in a generic parameter; the type of the underlying series is always discovered at runtime). So you can't really skip those `.i32()?` calls.
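
To make point 3 concrete, here is a tiny self-contained mock (these are not polars' real definitions, just stand-ins I made up) of where the type parameter would get dropped:

```
use std::marker::PhantomData;

// Hypothetical typed column reference -- NOT a real polars type.
struct Col<T> {
    name: &'static str,
    _ty: PhantomData<T>,
}

// Stand-in for an untyped Expr: it remembers the name, not the dtype.
struct Expr {
    name: &'static str,
}

impl<T> From<Col<T>> for Expr {
    fn from(c: Col<T>) -> Expr {
        // T is erased here; nothing downstream can recover it at compile
        // time, so the runtime `.i32()?` checks are still needed.
        Expr { name: c.name }
    }
}

fn main() {
    let typed: Col<i32> = Col { name: "a", _ty: PhantomData };
    let erased: Expr = typed.into();
    println!("expr over column {}", erased.name);
}
```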

Polars definitely made the right choice here — if Expr had a generic parameter, then you couldn't store Expr of different output types in arrays because they wouldn't all have the same type. You'd have to use tuples, which would lead to abysmal ergonomics compared to a Vec (can't append or remove without a macro; need a macro to implement functions for tuples up to length N for some gargantuan N). In addition to the ergonomics, Rust’s monomorphization would make compile times absolutely explode if every combination of input Exprs’ dtypes required compiling a separate version of each function, such as `with_columns()`, which currently is only compiled separately for different container types.

The reason web frameworks can do this is because of `$( $ty: FromRequestParts<S> + Send, )*`. All of the tuple elements share the generic parameter `S`, which would not be the case in Polars — or, if it were, would make `map` too limited to be useful.
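
A rough self-contained sketch of that trick (FromParts and Handler here are made-up stand-ins, not axum's actual traits): every tuple element is bound by the same `S`, and the extra `Args` parameter on the trait is what keeps the per-arity impls from overlapping.

```
trait FromParts<S>: Sized {
    fn from_parts(state: &S) -> Self;
}

// The extra `Args` parameter gives each arity its own, non-overlapping impl.
trait Handler<Args, S> {
    fn invoke(&self, state: &S);
}

macro_rules! impl_handler {
    ( $( $ty:ident ),* ) => {
        impl<S, F, $( $ty: FromParts<S>, )*> Handler<($( $ty, )*), S> for F
        where
            F: Fn( $( $ty ),* ),
        {
            fn invoke(&self, state: &S) {
                (self)( $( <$ty as FromParts<S>>::from_parts(state) ),* );
            }
        }
    };
}

impl_handler!(T1);
impl_handler!(T1, T2);

// A toy extractor: pulls the length out of a String "state".
struct Len(usize);

impl FromParts<String> for Len {
    fn from_parts(state: &String) -> Self {
        Len(state.len())
    }
}

fn main() {
    let handler = |Len(n): Len| println!("state length = {n}");
    handler.invoke(&String::from("hello"));
}
```

The analogous thing in Polars would need every input expression to share one compile-time dtype, which is exactly what `map` can't assume.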


Thanks for the insight!


Ah, of course. Slightly ambiguous English tricked me there. Thank you Ritchie!


I apologize for that, English isn't my first language. Glad it was explained so well!


I interpret your question as «given that I am doing many conversions between temperature units, because that makes it easier to write correct code, I worry that my code will be slow because of all those conversions».

My response is: these conversions are unlikely to be the slow step in your code, don’t worry about it.

I do agree though, that it would be nice if the compiler could simplify the math to remove the conversions between units. I don’t know of any languages that can do that.
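
As a rough sketch of the kind of conversion I mean (in Rust for concreteness; names like `from_f_to_k` are just illustrative), a unit conversion wrapped in newtypes is a couple of arithmetic operations that the compiler will normally inline:

```
#[derive(Clone, Copy)]
struct Fahrenheit(f64);

#[derive(Clone, Copy)]
struct Kelvin(f64);

#[inline]
fn from_f_to_k(t: Fahrenheit) -> Kelvin {
    // (F - 32) * 5/9 gives Celsius; add 273.15 for Kelvin.
    Kelvin((t.0 - 32.0) * 5.0 / 9.0 + 273.15)
}

fn main() {
    let boiling = from_f_to_k(Fahrenheit(212.0));
    println!("{:.2} K", boiling.0); // 373.15 K
}
```

If calls like this ever did dominate, a profiler would show it immediately, so I would not worry until it does.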


That's exactly the problem: in the software I have in mind, the conversions are actually very slow, and I can't easily change the content of the functions that process the data. They are very mathematical, and it would take a lot of time to rewrite everything.

For example (it's not my case, but) it's like having to convert between two image representations (a matrix multiply for each pixel) every time.

I'm scared that this kind of 'automatic conversion' slowness will be extremely difficult to debug and to monitor.


Why would it be difficult to monitor the slowness? Wouldn’t a million function calls to the from_F_to_K function be very noticeable when profiling?

On your example about swapping between image representations: let’s say you’re doing an FFT to transform between real and reciprocal representations of an image - you probably have to do that transformation in order to do the work you need doing in reciprocal space. There’s no getting around it. Or am I misunderstanding?

Please don’t take my response as criticism, I’m genuinely interested here, and enjoying the discussion.


I have many functions written by many scientists in a single piece of software over many years; some expect one data format, others another. It's not always the same function that is called, but all the functions could have been written using a single data format. However, they chose the data format when writing each function based on the application at hand at that moment and on how the selected data structure could speed up their algorithms.

When I tried to refactor using types, this kind of problem became obvious, and it forced more conversions than intended.

So I'm really curious because, apart from rewriting everything, I don't see how to avoid this problem. It's more natural for some applications to have data format 1 and for others data format 2, and forcing one over the other would make the application slow.

The problem arises only in 'hybrid' pipelines, when new scientists need to use existing functions, some of them in the first data format and the others in the second.

As a simple example, you can write rotations in a piece of software in many ways: some will use matrix multiplication, some Euler angles, some quaternions, some geometric algebra. Which one works best depends on the application at hand, as it maps better to the mental model of the current application. For example, geometric algebra is often better for thinking about a problem, but sometimes Euler angles are what a physical sensor outputs. So some scientists will use the first, and others the second. (Of course, those kinds of conversions are quite trivial and we don't care that much, but suppose each conversion is very expensive for one reason or another.)

I didn't find it a criticism :)


If I understood the problem correctly, you should try calculating each format of the data once and reusing it. Something like:

    type ID struct {
        AsString   string
        AsInt      int
        AsWhatever whatever
    }

    func NewID() ID {
        return ID{
            AsString:   calculateAsString(),
            AsInt:      calculateAsInt(),
            AsWhatever: calculateAsWhatever(),
        }
    }
This does assume every representation will always be used, but if that's not the case, it's a matter of using some kind of generic only-once executor, like Go's sync.Once.
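
For the only-once variant, a minimal sketch in Rust (field and function names are made up; this isn't meant as your actual data): each representation is computed lazily the first time it is asked for, then cached, so unused formats cost nothing.

```
use std::sync::OnceLock;

struct Id {
    raw: u64,
    as_string: OnceLock<String>,
}

impl Id {
    fn new(raw: u64) -> Self {
        Id { raw, as_string: OnceLock::new() }
    }

    fn as_string(&self) -> &str {
        // Computed on first use, reused afterwards.
        self.as_string.get_or_init(|| self.raw.to_string())
    }
}

fn main() {
    let id = Id::new(42);
    println!("{}", id.as_string()); // computes and caches
    println!("{}", id.as_string()); // cached
}
```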


But the data changes very often, in place, as functions are called on it.

I agree that would be a good solution (even though my data is huge), but it assumes the data doesn't change, or doesn't change that much.


Relevant xkcd: https://xkcd.com/2205/


You have a typo: In your last sentence you effectively wrote «from int to float» twice in contradicting ways. «To float from int than (…) from int to float».


There was an error made when I went back to edit what I wrote...


Could you expand on this? It sounds a bit preposterous to save text, as JSON, inside an image - and then expect it to be immediately usable… as an image?


Not OP, but PNG (and most image/video formats) allows metadata, and most allow arbitrary fields. Good parsers know to ignore or safely skip over fields that they are not familiar with.

https://dev.exiv2.org/projects/exiv2/wiki/The_Metadata_in_PN...

This is similar to HTTP request headers, if you're familiar with those. There is a set of standard headers (User-Agent, ETag, etc.), but nobody is stopping you from inventing x-tomtom and sending that along with an HTTP request. And on the receiving end, you can parse and make use of it. Same thing with PNG here.
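
For a sense of how little machinery this needs, here is a rough sketch that splices a JSON payload into an existing PNG as a tEXt chunk. The file names and the "source" keyword are made up, it uses the crc32fast crate, and it assumes a well-formed PNG whose final 12 bytes are the IEND chunk (strictly, tEXt expects Latin-1 text; iTXt is the UTF-8 variant).

```
use crc32fast::Hasher;

fn text_chunk(keyword: &str, text: &str) -> Vec<u8> {
    let mut data = Vec::new();
    data.extend_from_slice(keyword.as_bytes());
    data.push(0); // NUL separator between keyword and text
    data.extend_from_slice(text.as_bytes());

    let mut chunk = Vec::new();
    chunk.extend_from_slice(&(data.len() as u32).to_be_bytes()); // length
    chunk.extend_from_slice(b"tEXt");                            // chunk type
    chunk.extend_from_slice(&data);                              // payload

    // The CRC covers the chunk type and data, not the length field.
    let mut hasher = Hasher::new();
    hasher.update(b"tEXt");
    hasher.update(&data);
    chunk.extend_from_slice(&hasher.finalize().to_be_bytes());
    chunk
}

fn main() {
    let png = std::fs::read("drawing.png").unwrap();
    let iend_start = png.len() - 12; // IEND: length (4) + type (4) + CRC (4)
    let mut out = png[..iend_start].to_vec();
    out.extend_from_slice(&text_chunk("source", r#"{"shapes":[]}"#));
    out.extend_from_slice(&png[iend_start..]);
    std::fs::write("drawing_with_data.png", out).unwrap();
}
```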


They're not saving text, they're saving an idea - a "map" or a "CAD model" or a "video game skin" or whatever.

Yes, a hypothetical user's sprinkler layout "map" or whatever they're working on is actually composed of a few rectangles that represent their house, and a spline representing the garden border, and a circle representing the tree in the front yard, and a bunch of line segments that draw the pipes between the sprinkler heads. Yes, each of those geometric elements can be concisely defined by JSON text that defines the X and Y location, the length/width/diameter/spline coordinates or whatever, the color, etc. of the objects on the map. And yes, OP has a rendering engine that can turn that JSON back into an image.

But when the user thinks about the map, they want to think about the image. If a landscaping customer is viewing a dashboard of all their open projects, OP doesn't want to have to run the rendering engine a dozen times to re-draw the projects each time the page loads just to show a bunch of icons on the screen. They just want to load a bunch of PNGs. You could store two objects on disk/in the database, one being the icon and another being the JSON, but why store two things when you could store one?


They save the text as JSON in comments, but the file itself is a PNG, so you can use it as an image (like previewing it), since viewers ignore the comments. However, the OP’s editor can load the file back, parse the comments, get the original data, and continue editing. Just one file to maintain. Quite clever actually.


This is useful for code that renders images (e.g. data-visualization tools). The image is the primary artifact of interest, but maybe it was generated from data represented in JSON format. By embedding the source data (invisibly) in the image, you can extract it later to modify and re-generate.


No, GP meant they add the JSON text to the metadata of the image as a comment.


Check what draw.io does when you download a PNG.

