
Both seem to have worked at/with Dynamicland previously, though not mentioned on this site.

https://omar.website/posts/notes-from-dynamicland-geokit/

https://cwervo.com/projects/dynamicland-experiments/



The moment I saw papercraft, computer vision, and a projector, I thought this had to have some relation.

I’ve never been able to resolve a clear position for myself on Dynamicland. I’ve long admired Bret Victor’s work, and I have only the fondest appreciation for the project’s philosophy and the enthusiasm with which Victor writes about it.

The only problem is that I’ve never been able to figure out even the first thing about how it works. It’s completely incomprehensible to me, and I just don’t know how to square the project's ideals of human-centered, community-based computing with its seemingly-impenetrable alternate universe of dot stickers and projected images.


It's a bit unfortunate that people can't look past the projections and the dot frames/QR codes. Those are just a means to an end, which is trying to simulate a world where all objects have the ability to compute and can be easily reprogrammed on the fly.

Imagine a future 20 years from now where color e-ink is as cheap and ubiquitous as wood-pulp paper, and microchips are so small and cheap they can be embedded in everything. Dynamicland seems to be a peek into what living in that world could be like.
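If it helps to see past the dots and projectors: these systems are basically reactive rule engines running over whatever the camera currently sees. Each tagged page asserts "claims" into a shared database, rules contributed by other pages react to those claims, and the projector draws the resulting "wishes" back onto the table, many times per second. Here's a toy sketch of that loop in Python (illustrative only; the rule syntax and all the names are made up for this comment, not the actual Realtalk or Folk API):

    # Illustrative sketch of a Dynamicland/Folk-style reactive loop.
    # Every name here is hypothetical, not the real Realtalk or Folk API.
    claims = set()    # facts asserted this frame, e.g. ("page", 7, "is-labelled", "hello")
    wishes = []       # requests for the projector, e.g. ("draw-text", 7, "hello")
    rules = []        # (pattern, handler) pairs contributed by program pages

    def when(pattern):
        """Register a handler that fires for every claim matching pattern
        (None acts as a wildcard)."""
        def register(handler):
            rules.append((pattern, handler))
            return handler
        return register

    @when(("page", None, "is-labelled", None))
    def draw_label(_, page_id, __, text):
        # Some page claimed a label; wish for the projector to draw it on that page.
        wishes.append(("draw-text", page_id, text))

    def frame(detected_pages):
        """One camera -> rules -> projector cycle, repeated many times a second."""
        claims.clear()
        wishes.clear()
        for page_id, text in detected_pages:       # pages found by the vision system
            claims.add(("page", page_id, "is-labelled", text))
        for pattern, handler in rules:             # match every rule against every claim
            for claim in claims:
                if len(claim) == len(pattern) and all(
                    p is None or p == c for p, c in zip(pattern, claim)
                ):
                    handler(*claim)
        return wishes                              # handed off to the projector

    # Example: two tagged pages on the table this frame.
    print(frame([(7, "hello"), (12, "y = x^2")]))

The "reprogrammed on the fly" part is that editing a page's program and dropping it back on the table just swaps its rules into this loop on the next frame.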


Came here to say the same - Bret Victor is doing similar things[1].

I think the point of these projects is to find an alternative approach to interfacing with technology. Why not combine paper and computers?

We assume that interfacing with technology is limited to keyboards, mice, and fingers, but there is no reason to limit ourselves to these approaches.

Anyone using punchcards would be amazed by keyboards and so we will be amazed by interfaces that are beyond our imagination.

[1] https://dynamicland.org/


> Anyone using punchcards would be amazed by keyboards and so we will be amazed by interfaces that are beyond our imagination.

Typewriters proper existed alongside punchcards for many decades before being incorporated into computer interfaces as keyboards. The fact that they predated computer keyboards may be why they became the default so quickly. While keyboards are amazing, they certainly weren't beyond imagination.

I guess you can say that QR codes, projectors, and cameras predate this Folk computer idea as well. But they are also far less intuitive. Using a typewriter well requires basic literacy and a few new functions (carriage return, line feed, shift, etc.). Graduating from a typewriter to a keyboard requires learning some additional functionality.

What current devices are teaching the basic functionality needed to jumpstart adaptation to this Folk computer interface?


> Anyone using punchcards would be amazed by keyboards

As someone who uses punchcards regularly, I don't understand what you mean. People used a keyboard to punch cards since the 1930s or earlier. You type on a keyboard and the keypunch puts the holes in the card.


Which make and model of keypunch do people use these days? I thought that IBM had discontinued the Model 029 ages ago (along with all the other unit record equipment).


I use the IBM 026 keypunch at the Computer History Museum. This is for historical things, not production use :-)


OK, is it then safe to say no one uses punch cards in production ;)

I just assumed that punch cards had gone the way of the dinosaurs, but I'm always pleased to change my assumptions.


I was gonna say, this sounds a lot like continuing the research of Dynamicland, which itself built on research from CDG, VPRI, and other groups. Why is it so hard to find a consistent funder of basic research like PARC in the 70s?


PARC, though awesome for society, was a massive commercial failure for Xerox.

As a tech nerd, I love the lore of PARC and went to visit it as soon as I moved to the Bay Area.

However, I assume it would be taught as a lesson of what not to do in a business context:

Investing in pure research often yields innovations that conflict with existing business lines or are simply too far out for management to see as useful.

That said, many companies have research arms. Microsoft, Walmart, IBM, Meta, Google…


Uh no. Xerox made billions from the laser printer. And yes they missed capturing all the profit but it was crazy profitable. https://www.forbes.com/sites/gregsatell/2015/03/21/how-parc-...


Hmm, thanks for the interesting article. Probably knew that at one time but forgot lol.

Still, it supports my point: research that reaches too far from the company's core business is difficult to recognize as a success. The laser printer was a better printer, an improvement to Xerox's existing scan-and-print business.

Bell Labs also had a lot of commercial success.


when a business has an r&d department it's inherently suicidal to consider any current activity as 'core biz', especially in a competitive market. your research department's objective is to move the puck into new locations, hopefully evading your competitors, and your job as the mother institution is to skate there. settling into any so-called core business is contra good business thinking. starting an r&d department isn't.


Makes you wonder how the mythos that Xerox dropped the ball with PARC, or that it was net unprofitable, became so pervasive in our culture. Probably the early scrappy 'garage engineering' mythology surrounding Apple plays into it?


Lesson: Don't develop innovations that are too advanced for management to understand, because they will pass on them and let the rest of the world eat their lunch.


I think the actual lesson learned was to develop these research departments as incubators where the inventors are expected, and trained, to become entrepreneurs who spin out companies that the parent company owns part of.

It would have been easier to just make some of the inventors upper-level managers and executives of the parent company.

Given the cost of the original Xerox computer, Apple still may have eaten their lunch. Apple itself spent a lot of effort getting the Macintosh, expensive as it was, as cheap as it was.


not really. from my study (of both xerox and other companies with similar fate), it’s success that killed them. when you have a hugely successful product that people can’t seem to live without, and businesses can’t operate without, it’s extraordinarily difficult to plan and execute its demise in favor of something new. your business executives, skilled in the arts of racing to bottom lines, won’t side with you. they’d rather go down with the ship than keep jumping every 5 years or so.

edit: goes without saying that the best time to take on any largely successful company is approximately a decade after they've become successful. that is to say, we're on the cusp of a better payments api (ie the stripe of this decade), better email (37signals' hey), better e-commerce, etc.


From what I had gathered, today's VC incubators and academic-affiliated incubators work on the inventor -> entrepreneur model (or at least entrepreneur -> inventor model).


It's astonishing (and sad) how few opportunities exist for basic research.


Donald Braben's book touches on what's changed pretty well, I felt (published by Stripe Press too): https://www.amazon.com/Scientific-Freedom-Civilization-Donal...


Ah nice tip. Will put it on the reading list!


Honestly and respectfully, would you be clamoring to fund this?

It's disturbing how little actually came out of Dynamicland, and now there's a schism where the founders are pasting QR codes to hands and have as their 3rd bullet point "GPU FFI shaders"?

Does any of that scream fundamental UX research, or the future of computing?

To me, it's just another tired retread of messianic thinking that tails off into nowhere as the hero complex devolves into a series of half-baked ideas designed to scratch an individual's daily itch, rather than serve a central purpose.

Google had stuff like this internally for quite some time, though much later than most people would guess.

The thing is, there just isn't some vastly superior paradigm sitting out there to fix computing with. The industry is mature enough that if something truly good and helpful exists, even in parts, it gets implemented quickly.

One ray of light might be that the rate of change is fast enough that there are likely to be gaps emerging over the next decade.

But they're not going to be found in this sort of fashion.


Is computing good enough as a literacy that anyone can use it to model complex ideas? Computers are simulators and can model almost anything; sadly, most people just use them to model old media (images, movies, text documents). As an example, can your [insert elderly relative] program a computer the same way he/she can string together multiple words to express a complicated idea?



