mogery's comments | Hacker News

awww!


lol no


oh hey! i worked on this a while back!

really cool project. runs unity games too. author is a very talented guy


> author is a very talented guy

Almost an understatement, he's done amazing projects throughout the years:

https://github.com/ptitSeb?tab=repositories


i suck at chess, but this is lovely!


Same! I like how it's a little easier to plan out moves and set traps with this game...


How is adoption a chicken-and-egg argument? RCS is, by its design, slower to update than iMessage.


Because iOS has a large enough market share that carriers don't prioritize adoption ("It doesn't benefit our customers"), but a lack of adoption means Apple can say "Why would we waste our time on something that's not well supported?"


Really, using a Teensy 4.0 to make scrolling faster? That seems like serious overkill to me. You could make a small computer out of a Teensy 4; it's quite excessive to use it as a display-out.


[shrug]

I will happily use an entire Arduino clone to produce a periodic pulse when it could be done with an NE555 timer. One will take me approximately 2 minutes, the other more than 2 hours.

When your build quantity == one, the concept of overkill becomes less meaningful.


I recall some Keysight "educational" video where the guy says, "This is my million-dollar cellphone charger" and plugs his cellphone into a USB port on a fancy oscilloscope.


That's true. If I had been able to get the hardware scrolling to work on the ILI9341 no doubt it could have easily run on a lower-end Teensy. Maybe someone more clever than me can get it to work. It sucks to resort to brute force.


I'm a little surprised there's not a better LCD controller for this kind of application, even a little iCE40 should be enough to implement a character display and smooth scrolling on both axes.


Vast overkill in this respect is historically accurate; the VT100 was built around an 8085, for example.


The 8008 was invented for a terminal application -- the Datapoint 2200. So not only were VDTs effectively microcomputers since the 70s, but Intel's main processor line, up to the present day, owes its existence to VDT applications.


Yeah, but the Datapoint 2200 was kind of a high-end niche thing, I think. I've never seen one in real life. (And I don't think anyone ever actually built a terminal around an 8008.) The VT100, by contrast, was ubiquitous. It also came out in the 01970s: 01970 for the 2200, 01978 for the VT100. Early-01970s terminals like the ADM3A or the Tek 4014 more typically were not microcomputers at all, just discrete logic.

The VT52, with a much richer escape sequence language than the ADM3A, was kind of on the borderline: https://vt100.net/docs/vt52-mm/chapter4.html http://xahlee.info/kbd/iold51593/EK-VT52-MM-002_maint_Jul78_... It had a "microprogram" in ROM with conditional jumps, but lacked capabilities like arithmetic, bitwise operations, and subroutines that could be called from more than one place; many of the instructions are things like "start printer" and "jump if UART has received a character".


A couple of years ago I did a VT52 emulator with an FPGA, no processor at all (not even a soft processor). Just regular RAM and sequential logic. You can use a simple processor for some commands, but scrolling is best handled in hardware, I think:

https://github.com/AndresNavarro82/vt52-fpga


Super sweet!

I didn't read the whole manual I linked above, but I think the way the original VT52 handled scrolling was that it incremented a register which it used for the index of the starting line. The microprogram was responsible for poking each new character into the character generator at the right moment 80 times for each of 240 scan lines, so it looked at this register each time it started drawing the screen to see where to start this process.

(Incidentally, if you remember BSD talk/otalk/ntalk/ytalk, you might remember that it divides the screen into two windows and avoids ever scrolling them by wrapping the conversation around from the top to the bottom of the window in a sort of similar way; I suspect that this is because some terminals, like the ADM3A, didn't support scrolling part of the screen, and redrawing half the screen at 2400 baud would have been slow.)
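The start-of-screen-register trick described above can be sketched as a ring buffer. This is a hypothetical illustration of the idea, not the actual VT52 microprogram:

```python
# Hypothetical sketch of VT52-style scrolling: instead of moving 24 lines
# of text, increment a "top line" register and index the line buffer
# modulo the screen height.
ROWS, COLS = 24, 80

class ScrollBuffer:
    def __init__(self):
        self.lines = [" " * COLS for _ in range(ROWS)]
        self.top = 0  # the register the VT52 increments

    def row(self, n):
        # Row n on screen is row (top + n) mod ROWS in the buffer.
        return self.lines[(self.top + n) % ROWS]

    def scroll_up(self):
        # The old top row becomes the new, blanked bottom row.
        self.lines[self.top] = " " * COLS
        self.top = (self.top + 1) % ROWS

buf = ScrollBuffer()
buf.lines[0] = "hello".ljust(COLS)
buf.scroll_up()
assert buf.row(ROWS - 1) == " " * COLS  # old top is now the blank bottom
```

The character generator then reads out rows starting from `top` on every refresh, so a scroll costs one register increment instead of a 24x80 memory move.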

Your timing is a little more challenging to meet: although your horizontal resolution is almost the same, instead of 240 scan lines 60 times a second (a new scan line every 69 microseconds and a new character every 870 ns), you have 384 scan lines 70 times a second (a new scan line every 37 us and a new character every 470 ns, which I guess means 58 ns per pixel). But you're running on an FPGA where a lot of circuits can be clocked at 50 MHz, right? 20 ns.
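For what it's worth, the figures above check out; here is a quick sanity-check of the arithmetic (80 characters per scan line; 8 pixels per character is an assumption):

```python
# Sanity check of the terminal timing figures quoted above.
def timings_ns(scan_lines, refresh_hz, chars_per_line=80):
    # Time per scan line, and time per character cell, in nanoseconds.
    line_ns = 1e9 / (scan_lines * refresh_hz)
    return line_ns, line_ns / chars_per_line

vt52_line, vt52_char = timings_ns(240, 60)   # ~69444 ns/line, ~868 ns/char
fpga_line, fpga_char = timings_ns(384, 70)   # ~37202 ns/line, ~465 ns/char
pixel_ns = fpga_char / 8                     # ~58 ns per pixel (8 px/char)
```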

I wonder if your design would have been simpler with a processor; you say half of the 2000 LUTs (iCE40 4-LUTs I guess?) were occupied with USB, but that still leaves 1000 LUTs for the terminal logic, roughly equivalent to 1000 7400-series TTL chips, which is a lot bigger than the historical VT52. I'm thinking the VT52 used its "microprogram" approach in order to reduce that complexity; now that we have faster hardware we could probably go further in that direction if we wanted.

Also 1000 4-LUTs is five times the size of SERV, though I'm guessing SERV itself probably isn't fast enough to do the character-feeding job the VT52 microprogram did. A bit-serial 7-bit processor might be the ticket.

— ⁂ —

I was chatting with a friend today about a project of his and wondering whether an FPGA-implemented terminal might be useful for a certain weird project, though not specifically a VT52. It's a super low power portable computer, and right now he's using a Raspberry Pi for prototyping.

You can maintain a display on an e-ink screen without any power, and Sharp's memory-in-pixel LCDs need about 0.05 milliwatts per 0.1 megapixels to maintain the display (and are several orders of magnitude lower-power to actually update). But a Raspberry Pi Zero takes more like 650 milliwatts to run, normally takes about 40 seconds to boot, and has no real sleep mode. With buildroot evidently you can get the boot time down to a few seconds https://www.furkantokac.com/rpi3-fast-boot-less-than-2-secon... but that's still a long time to wait for a response to a keystroke. The Pi 3 and Pi 4 evidently do have a working sleep mode, but it uses a lot of power (like, 50 milliwatts or something) and imposes a latency of 100 ms or so.
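To put numbers on the gap, here is a back-of-the-envelope comparison. The per-device power figures are the ones quoted above; the battery capacity is an assumption chosen purely for illustration:

```python
# Rough power-budget sketch. Only lcd_hold_mw, pi_zero_mw, and
# pi_sleep_mw come from the comment; the battery size is assumed.
BATTERY_MWH = 4000.0     # assumed: roughly an 1100 mAh cell at 3.7 V

lcd_hold_mw = 0.05       # memory-in-pixel LCD holding ~0.1 Mpx
pi_zero_mw = 650.0       # Raspberry Pi Zero, running
pi_sleep_mw = 50.0       # Pi 3/4 sleep-mode figure

hold_days = BATTERY_MWH / lcd_hold_mw / 24    # years of just holding an image
run_hours = BATTERY_MWH / pi_zero_mw          # well under a day of Pi uptime
sleep_days = BATTERY_MWH / pi_sleep_mw / 24   # a few days even asleep
```

The three results differ by orders of magnitude, which is the whole argument for keeping the Pi off except when a "request" actually arrives.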

So I was thinking that one way to design the system would be as the moral equivalent of a web-browser + webserver, where some kind of tiny, low-power front-end processor handles most of the moment-to-moment interaction, then powers up a Raspberry Pi to boot Linux whenever it has a "webserver request" to handle. (Even while Linux is booting the UI doesn't need to become unresponsive! My browser doesn't hang until the web server responds!) For the front-end processor I suggested an ATTiny, because that's what he has on hand, but that may be a little too tiny, and anyway Ambiq's chips use less power and can handle faster compute. However, an FPGA opens up a lot of other possible alternatives.

Probably HTML and JS are not the right way to script the front-end processor, but clearly something as functional as an IBM 3270 is achievable. With the benefit of 50 years of hindsight, though, not to mention 8000-LUT FPGAs running at 50 MHz with tens of kilobytes of RAM, I think we can do a lot better. Like, you can have fragment shaders, though they probably have to be more limited than GLSL shaders because you don't have space for a framebuffer.


> ...but I think the way the original VT52 handled scrolling was that it incremented a register which it used for the index of the starting line

That's exactly right, pretty cheap but has some limitations like you point out here:

> (Incidentally, if you remember BSD talk/otalk/ntalk/ytalk, you might remember that it divides the screen into two windows and avoids ever scrolling them by wrapping the conversation around from the top to the bottom of the window in a sort of similar way; I suspect that this is because some terminals, like the ADM3A, didn't support scrolling part of the screen, and redrawing half the screen at 2400 baud would have been slow.)

I remember using them around '96, although we had the luxury of 10 Mbit Ethernet (over coax) and some VT100-compatible terminal emulators running on amber-screen XT machines (on DOS). As you mention, neither the ADM3A nor the VT52 had partial scrolling. The VT100 did have that, plus a lot of extra bells and whistles.

> But you're running on an FPGA where a lot of circuits can be clocked at 50 MHz, right? 20 ns.

I run them around 50 MHz just for the USB serial port, but all the terminal logic works at half that, right around 25 MHz, which happens to be the pixel clock at these resolutions, so you can't go any slower than that.

> but that still leaves 1000 LUTs for the terminal logic, roughly equivalent to 1000 7400-series TTL chips

I don't think that's a fair equivalence: most 7400-series TTL chips would be several 4-input LUTs, even simple gates like a quad 2-input OR, and way more for things like shift registers and counters, of which the terminals have plenty. I also had to deal with PS/2 decoding. On an ADM you can also save a lot of decoding logic by taking advantage of the bit-paired keyboard.


Sorry I hadn't responded until now!

It wouldn't be much more complexity to make the VT52's increment sequence a linked list instead of a counter, another 24 bytes of RAM or maybe just 120 bits. The escape sequence dispatch microcode is probably more space.
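A hypothetical software sketch of that linked-list idea: each of the 24 rows stores a pointer to the next row in display order (24 entries of 5 bits each, hence the 120 bits), so a scroll becomes a pointer splice rather than a text move:

```python
# Hypothetical sketch: display order as a linked list of buffer rows,
# so a full-screen scroll relinks pointers instead of moving 24*80 chars.
ROWS = 24

nxt = list(range(1, ROWS)) + [None]  # nxt[i] = buffer row shown below row i
top = 0                              # buffer row currently shown at the top

def display_order():
    order, r = [], top
    while r is not None:
        order.append(r)
        r = nxt[r]
    return order

def scroll_up():
    """The old top row becomes the (to-be-blanked) bottom row."""
    global top
    old_top = top
    bottom = display_order()[-1]
    top = nxt[old_top]
    nxt[bottom], nxt[old_top] = old_top, None

scroll_up()
assert display_order() == list(range(1, ROWS)) + [0]
```

The same splice works on any sub-range of rows, which is what a plain increment register can't do.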

I hadn't thought about the complexity of PS/2 decoding.

My thought with the 4-LUT equivalency is that if you take some arbitrary 4-input combinational logic function and build it out of things like ANDs, ORs, NANDs, and inverters, you're going to need about 4 gates on average, which is about one 7400-series chip. I agree that things like bidirectional shift registers and up-down counters are hungrier. And maybe I'm underestimating what fraction of the VT52 consisted of such things.


Computers are getting cheaper faster than we can figure out what to do with them.

Overkill is fine.


Yeah, I've realized that the currently used API just lets you request a key for the public Spotify Web API, and I can just get the user's collection and so on from there.


To be fair, any other platform could easily outdo Spotify in this regard, if they publicly documented their internal API. :D


The thing is, Spotify had libspotify.

It opened up a whole bevy of open and useful Spotify clients that worked amazingly, some of which still do to this day. Mopidy, as well as a handful of amazing MPD-speaking daemons, got me through college. The only catch was that it required a premium account and handing a third-party client your authentication data. They had some issues with Facebook authentication since it was OIDC, but setting a username and password on your account was a simple solution.

The Spotify team has killed libspotify in favor of a JavaScript browser library and mobile SDKs (iOS/Obj-C and Android/Kotlin) that use the browser to authenticate over OpenID. It can only be a Connect target, not query the full API, and it depends heavily on the browser or native APIs to play the media.


Spotify exists to deliver music, other apps exist to lock users to their corresponding platforms.


How does Tidal do this? My understanding is Tidal is a music platform like Spotify.


I haven't had any issues on the RE end, mostly because I didn't need to do a whole lot of it, as the librespot people have already paved the way. But I've still gotta figure out the new protocol the Spotify client uses for playlists and such, so... there are bound to be issues up ahead.

Most of my issues stemmed from a lack of proper documentation.

One time I screwed up the packet format and was pulling my hair out when the only response I got was "Invalid username/password" from the Spotify API (or something along those lines; it's been a while).

The other times were more related to the Switch. Figuring out how networking and audio works, mostly. Hunting through the shit documentation and the source code of libnx to find out what I need to do. My audio implementation was either not playing anything or crashing for a long time in the beginning. I still have no clue what I did to solve it, which is unfortunate.


Spotify is actually fine with librespot existing (funnily enough, one of the main reasons the old API is still around is that librespot uses it, lol). So, I doubt it will ever get DMCA'd.

