ycui1986's comments | Hacker News

Everyone uses cellphones that transmit on the same frequencies, and they don't seem to cause interference. Once enough lidar enters real-world use, there will be regulation to make them work with each other.

Completely different problem domains. A mobile phone interacts with a fixed point (i.e., a cell tower) that coordinates and manages traffic across cell phones to minimize interference. LIDAR is like wifi: a commons that can be polluted at will by arbitrary actors.

LIDAR has much more in common with ordinary radar (it is in the name, after all) and is similarly susceptible to interference.


No, LIDAR is relatively trivial to render immune to interference from other LIDARs. Look at how dozens of GPS satellites share the same frequency without stepping on each others' toes, for instance: https://en.wikipedia.org/wiki/Gold_code

Like GPS, LIDAR can be jammed or spoofed by intentional actors, of course. That part's not so easy to hand-wave away, but someone who wants to screw with road traffic will certainly have easier ways to do it.
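
For a flavor of how that code-division sharing works, here is a minimal Python sketch of building Gold-code-style sequences from two LFSRs. The tap positions are illustrative, not the actual GPS C/A-code polynomials; the point is only that distinct codes correlate weakly with each other while each correlates strongly with itself.

    import numpy as np

    def lfsr(taps, length):
        """Fibonacci LFSR over GF(2); taps are 1-indexed register positions."""
        reg = [1] * max(taps)                    # any nonzero initial state
        out = []
        for _ in range(length):
            out.append(reg[-1])                  # output the last stage
            fb = 0
            for t in taps:
                fb ^= reg[t - 1]                 # feedback = XOR of tapped stages
            reg = [fb] + reg[:-1]                # shift and insert feedback
        return np.array(out)

    N = 2**5 - 1                                 # 31-chip sequences for the demo
    m1 = lfsr([5, 3], N)                         # two degree-5 m-sequences
    m2 = lfsr([5, 4, 3, 2], N)                   # (illustrative tap choices)

    def gold(shift):                             # each shift of m2 gives a code
        return 1 - 2 * (m1 ^ np.roll(m2, shift))     # map {0,1} -> {+1,-1}

    a, b = gold(3), gold(7)
    xcorr = [int(np.dot(a, np.roll(b, k))) for k in range(N)]
    print("autocorrelation peak:", int(np.dot(a, a)))            # 31
    print("max |cross-correlation|:", max(abs(v) for v in xcorr))  # well below 31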


> No, LIDAR is relatively trivial to render immune to interference from other LIDARs.

For rotating pulsed lidar, this really isn't the case. It's possible, but certainly not trivial. The challenge is that eye safety is determined by the energy in a pulse, but detection range is determined by the power of a pulse, driving towards the minimum pulse width for a given lens size. This width is under 10 ns, and closer to 2-4 ns for more modern systems. With laser diode currents in the tens-of-amps range, producing a Gaussian pulse this narrow is already a challenging inductance-minimization problem -- think GaN, thin PCBs, wire-bonded LDs, etc. to get loop area down. And an inductance-limited pulse is inherently Gaussian. Playing any anti-interference games means being able to modulate the pulse more finely than that, without increasing the effective pulse width enough to make you uncompetitive on range. This is hard.
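
Rough arithmetic behind that energy-vs-power trade-off (the energy budget below is a hypothetical number, not any product's spec):

    # Illustrative numbers only: a fixed eye-safe per-pulse energy budget, and
    # the peak power you get for different pulse widths. For a roughly
    # rectangular pulse, P_peak ~= E / t_pulse (a Gaussian differs by a constant).
    E_pulse = 100e-9                       # assumed pulse energy budget: 100 nJ
    for t_pulse in (10e-9, 4e-9, 2e-9):    # pulse widths from the comment above
        p_peak = E_pulse / t_pulse
        print(f"{t_pulse * 1e9:4.0f} ns pulse -> ~{p_peak:5.0f} W peak")
    # Shorter pulses buy more peak power (and thus return signal) for the same
    # eye-safety energy, which is why drivers push toward a few nanoseconds.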


I think we may have had this discussion before, but from an engineering perspective, I don't buy it. For coding, the number of pulses per second is what matters, not power.

Large numbers of bits per unit of time are what it takes to make two sequences correlate (or not), and large numbers of bits per unit of time are not a problem in this business. Signal power limits imposed by eye safety requirements will kick in long after noise limits imposed by Shannon-Hartley.
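
For reference, the Shannon-Hartley limit in question is C = B * log2(1 + S/N). A quick sense of scale, with bandwidth and SNR values that are purely hypothetical:

    import math

    B = 100e6                                 # assumed receiver bandwidth: 100 MHz
    for snr_db in (0, 10, 20):                # assumed signal-to-noise ratios
        snr = 10 ** (snr_db / 10)
        c = B * math.log2(1 + snr)            # Shannon-Hartley capacity
        print(f"SNR {snr_db:2d} dB -> ~{c / 1e6:5.0f} Mbit/s")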


> For coding, the number of pulses per second is what matters, not power.

I haven't seen a system that does anti-interference across multiple pulses, as opposed to by shaping individual pulses. (I've seen systems that introduce random jitter across multiple pulses to de-correlate interference, but that's a bit different.) The issue is you really do get a hell of a lot of data out of a single pulse, and for interesting objects (thin poles, power lines) there's not a lot of correlation between adjacent pulses -- you can't always assume properties across multiple pulses without having to throw away data from single data-carrying pulses.

Edit: Another way of saying this -- your revisit rate to a specific point of interference is around 20 Hz. That's just not a lot of bits per unit time.
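
Rough arithmetic on that, under the (hypothetical) assumption that each revisit of the interfering spot yields about one usable chip of a spreading code:

    revisit_hz = 20                           # revisit rate from the comment above
    for code_len in (31, 127, 1023):          # hypothetical spreading-code lengths
        t = code_len / revisit_hz             # seconds to accumulate one full code
        print(f"{code_len:4d}-chip code -> ~{t:5.1f} s to integrate")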

> Signal power limits imposed by eye safety requirements will kick in long after noise limits imposed by Shannon-Hartley.

I can believe this is true for FMCW lidar, but I know it to be untrue for pulsed lidar. Perhaps we're discussing different systems?


> I haven't seen a system that does anti-interference across multiple pulses...

My naive assumption would be that they would do exactly that. In fact, offhand, I don't know how else I'd go about it. When emitting pulses every X ns, I might envision using a long LFSR whose low-order bit specifies whether to skip the next X-ns time slot or not. Every car gets its own lidar seed, just like it gets its own key fob seed now.

Then, when listening for returned pulses, the receiver would correlate against the same sequence. Echoes from fixed objects would be represented by a constant lag, while those from moving ones would be "Doppler-shifted" in time and show up at varying lags.

So yes, you'd lose some energy due to dead time that you'd otherwise fill with a constant pulse train, but the processing gain from the correlator would presumably make up for that and then some. Why wouldn't existing systems do something like this?
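
A minimal simulation of that idea, with every parameter hypothetical: an LFSR decides which X-ns slots get a pulse, the receiver sees its own echo delayed plus another car's differently coded train plus noise, and correlating against its own pattern still recovers the echo lag.

    import numpy as np

    def lfsr_bits(seed, taps, n):
        """Fibonacci LFSR; the low-order bit of the state gates each slot."""
        state, width, out = seed, max(taps), []
        for _ in range(n):
            out.append(state & 1)                      # 1 = fire in this slot
            fb = 0
            for t in taps:
                fb ^= (state >> (t - 1)) & 1
            state = (state >> 1) | (fb << (width - 1))
        return np.array(out, dtype=float)

    N = 4096                                           # slots simulated
    mine  = lfsr_bits(0x1F, (16, 15, 13, 4), N)        # this car's firing pattern
    other = lfsr_bits(0x2B, (16, 14, 13, 11), N)       # another car, another code

    true_lag = 137                                     # echo delay, in slots
    rx = np.roll(mine, true_lag) + np.roll(other, 55) + 0.5 * np.random.rand(N)

    # Correlate received slot energy against our own transmit pattern; the echo
    # from a fixed object shows up as a clear peak at a constant lag.
    corr = [np.dot(rx, np.roll(mine, k)) for k in range(N)]
    print("recovered lag:", int(np.argmax(corr)), "| true lag:", true_lag)

(The two cars in this sketch use different tap sets rather than just different seeds of one register; shifted copies of a single m-sequence correlate strongly with each other, which is exactly what code families like the Gold codes mentioned upthread are designed to avoid.)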

I've never designed a lidar, but I can't believe there's anything to the multiple-access problem that wasn't already well-known in the 1970s. What else needs to be invented, other than implementation and integration details?

Edit re: the 20 Hz constraint: that's one area where our assumptions probably diverge. The output might be 20 Hz, but internally, why wouldn't you be working with millions of individual pulses per frame? Lasers are freaking fast and so are photodiodes, given synchronous detection.


I suggest looking at a rotating lidar with an infrared scope... it's super, super informative and a lot of fun. Worth just camping out in SF or Mountain View and looking at all the different patterns on the wall as different lidar-equipped cars drive by.

A typical long-range rotating pulsed lidar rotates at ~20 Hz, has 32-64 vertical channels (with spacing not necessarily uniform), and fires each channel's laser at around 20 kHz. This gives vertical channel spacing on the order of 1°, and horizontal channel spacing on the order of 0.3°. The perception folks assure me that having horizontal data orders of magnitude denser than vertical data doesn't really add value to them; and going to a higher pulse rate runs into the issue of self-interference between channels, which is much more annoying to deal with than interference from other lidars.

If you want to take that 20 kHz to 200 kHz, you first run into the fact that there can now be 10 pulses in flight at the same time... and that you're trying to detect low-photon-count events with an APD or SPAD outputting nanoamps within a few inches of a laser driver generating nanosecond pulses at tens of amps. That's a lot of additional noise! And even then, you have a 0.03° spacing between pulses, which means that successive pulses don't even overlap at max range with a typical spot diameter of 1"-2" -- so depending on the surfaces you're hitting and their continuity as seen by you, you still can't really say anything about the expected time alignment of adjacent pulses. Taking this to 2 MHz would let you guarantee some overlap for a handful of pulses, but only some... and that's still not a lot of samples to correlate. And of course your laser power usage and thermal challenges just went up two orders of magnitude...
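
Back-of-the-envelope numbers behind those figures; the channel count and maximum range in the last step are assumptions, chosen only to show how you land in the ballpark of ten pulses in flight:

    C_LIGHT = 299_792_458.0                   # m/s

    rot_hz, per_ch_hz = 20, 20e3              # rotation and per-channel fire rate
    print("horizontal spacing:", 360 * rot_hz / per_ch_hz, "deg")     # 0.36
    print("at 200 kHz:", 360 * rot_hz / 200e3, "deg")                 # 0.036

    # Assume 32 channels time-multiplexed onto one receiver and ~200 m max range:
    channels, max_range = 32, 200.0
    firing_period = 1 / (200e3 * channels)    # time between successive firings
    round_trip = 2 * max_range / C_LIGHT      # out-and-back time of flight
    print("pulses in flight: ~%.1f" % (round_trip / firing_period))   # ~8.5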


A used NI GPIB-USB adapter costs $100 on eBay. You don't need $1000.


See the knock-offs comment.


Very impressive. Better than anything on the market from either NI or Keysight.


From the picture, the compressor and generator are located inside the dome, and the dome is filled with CO2. Maintenance people would have to carry oxygen tanks, or they'd die.


Do they just do the maintenance when it's empty?


I think it has something to do with CO2 being relatively easy to bring into a supercritical state, which isn't the case for nitrogen or other common gases.


Pretty much this.

You can liquefy CO2 at a higher temperature than N2.


You can do it easily with something like propane, or other larger molecules. But CO2 is non-flammable, largely non-toxic, and easily available.


No mention of storage overhead? How much energy is wasted in each charging and discharging cycle?



Cheap microcontrollers use RC oscillators. If those only drifted a few seconds a day, that would be an achievement by itself.

RC oscillators are poor enough that early USB communication would fail if the controller was running on an RC clock.
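
For scale (the tolerance figures are typical ballpark values, not tied to any particular part), here's what fractional frequency error translates to in timekeeping drift:

    SECONDS_PER_DAY = 24 * 3600

    # Worst-case drift per day = fractional frequency error * seconds per day.
    # Internal RC oscillators are commonly in the ±1% class over voltage and
    # temperature; a cheap crystal is more like ±20 ppm.
    for name, tol in (("RC oscillator, ±1%", 1e-2),
                      ("trimmed RC, ±0.1%", 1e-3),
                      ("crystal, ±20 ppm", 20e-6)):
        print(f"{name:<20} up to {tol * SECONDS_PER_DAY:7.1f} s/day")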


Atomic clocks are not that expensive; they come in different grades. A module-level atomic clock costs only $3,500.

The NIST hydrogen clock is very expensive and sophisticated.


A plug-in hybrid with a small engine for charging purposes probably makes the most sense at the moment. Full electric is sexy, but the range anxiety is real.


From a theoretical perspective, if the X-Y plane can be addressed with a 16-bit DAC controlling laser deflection, then seeking any data within a 4 GB address space would have a typical latency of 300 µs with the latest laser scanning technology.

I am not aware of any laser scanning technology that reaches 16-bit accuracy with no moving parts. So, fundamentally, this is a storage technology with mechanical addressing.

A laser can be scanned by an acoustic wave (acousto-optic deflection), but that approach by itself lacks the beam-pointing accuracy, and the ultrasonic drive frequency limits how fast it can deflect the beam.
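
The arithmetic behind those numbers (one byte per addressable spot is my assumption, chosen only because it matches the 4 GB figure):

    spots = 2**16 * 2**16                     # 16-bit X times 16-bit Y addressing
    print("addressable spots:", spots)        # 4,294,967,296
    print("capacity at 1 byte/spot:", spots // 2**30, "GiB")   # 4

    seek = 300e-6                             # 300 us seek latency from above
    print("random seeks per second: ~", round(1 / seek))       # ~3333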

