Hacker News: C--'s comments

Not really a computer but some distributed systems already work that way. I've read somewhere that Netflix (might have been Amazon?) is designed in such a way that it's capable of satisfying certain requirements even when services are down.


Are we talking about a "fallback db servers take over when the main ones drop", or "static assets server morphs into temporary db server if the main one drops"? Because there is a massive difference.


With virtualization/containerization that's not an unreasonable scenario. One of VMware's selling points is that a sufficiently expensive vSphere cloud can expand and contract over its available hardware depending on load or time of day. With iLO integration, it can even power off the hardware it's not using and power it back on when needed. If one server goes down, its VMs can be redistributed over the remaining hypervisors, leading to potential resource contention but maintaining availability.

So, sort of.


There are crazy cases of plasticity like that, but this is more of a "massive nosql caching infrastructure is down, everything else goes into overdrive to still mostly function correctly" situation.


So when your legs stop working and you start crawling around the floor, pulling the rest of your body with your hands, is that plasticity too?

It's easy to continue working at increased cost when you don't have a cache. That's... that's what a cache is for: it's not critical to the infrastructure, it's there to reduce costs.

Not to rain on anyone's parade, of course; I'm just pointing out that I'd like to see more cases of actual plasticity, like an email server that puts all its jobs on hold while it temporarily takes over for a database server that just stopped responding.


I'm not saying servers are plastic in any meaningful way. I'm saying that the role of the cerebellum is to assist functions that exist elsewhere, so missing it is not the same as missing a primary functional unit.

The motor cortex is still doing its original job, it just has to work harder.


Caching can be critical if the volume is high enough. There are physical limitations of computing appliances that make sites like Google or Facebook actually impossible to run without extensive caching and indexing layers, not just prohibitively expensive.


I thought that was obvious. OP was making a point about things running without cache at the cost of increased load... I'm sure Google Search couldn't reliably achieve that right now, but we were not talking about Google Search.


Right. I figured I wasn’t telling you anything new, but thought it worth making explicit for the benefit of others.


If one designed their down-scaling properly, the caching layer would be shut off as the system contracted, since it isn't part of solving the domain problem.



I feel they could have stopped with C++03. C++11 introduced very few truly necessary features, and this is just going too far. It isn't solving any problems; in reality it's creating new ones, since introducing syntactic redundancies ultimately leads to everyone having their own preference, which in turn is a source of inconsistent, dirty code bases and bugs.

They should have taken a hint from all of those coding standards coming from companies such as Google, which basically constrain the programmer to a small subset of the language features.


One rule of thumb I use for code is: Can you step through it in a debugger, at the source level, and have insight into what is going on? Because with really, um, "clever" code, you'll probably need to. And so will your cow-orkers. And if the code doesn't work or has limitations -- quite likely with creeping "clever" machinations like this one -- you'll get Spoken To.

(Yeah, this depends on how good your debugger is. Visual Studio and Xcode are pretty decent.)


This is the only metric worth paying attention to, in my experience. Whatever you do, your product will end up with defects, you'll need to get rid of them, and towards the end of the project there will always be a huge pile of them. (And, thanks to natural selection, only the hard ones are left. All the easy problems are long extinct.) So why not optimize for this stage? Even if you don't end up turning a time/effort profit overall - which is unlikely - you'll at least benefit from being under far less stress during what's always a fraught period.

And for whatever reason, C++ debuggers and debug info formats don't seem to be doing a great job of keeping up with the language. Even basic stuff like nested function calls (as easily found with any smart pointer/iterator or array class...) tends not to work very well, to say nothing of single-stepping into std::function, watching STL types, or using iterators or smart pointers in the watch window. So this necessarily means being a bit conservative about which features you use.

(You might have to go through the shipping-a-product process a couple of times before you really internalize this. But once you've learned it, it really does stick.)


> C++11 certainly introduced very few necessary features

I have to ask whether you've actually done any significant amount of programming in C++, both C++11 and pre-C++11.


Honestly, it seems like the people who complain loudest about C++ have barely used it. C++11 is fantastic. C++14 and beyond are quite exciting as well.

I'd avoid Alexandrescu-style template trickery unless you really truly need it, though. You can write good C++ that more or less resembles Java, and that's fine.


> You can write good C++ that more or less resembles Java

That doesn't sound like good C++. Making everything a pointer, or implementing polymorphism through pointers, makes the code harder to reason about and creates tighter coupling between interfaces and implementations.


Not to get too serious about nicks here, but his/her handle is "C--", after all.


> I feel they could have stopped with c++03.

You can still do this kind of horror with C++03. I'd know, I have!

Tests: http://sourceforge.net/p/libindustry/code/HEAD/tree/branches...

Code: http://sourceforge.net/p/libindustry/code/HEAD/tree/branches...

This was written shortly before I decided to stop fighting the language and simply use a better one for all of my hobby projects.


It would be interesting to know if C++ support is on the table.


I wonder the same. Since they are supporting C, it seems like a small amount of effort would get them C++ support.


Just asking those with more knowledge than me: how is this any different from a Raspberry Pi or a BeagleBone Black?


The Raspberry Pi is supposed to be a computer you can use like a regular one. Rex seems to have a dual role: it's designed to control a robot (lots of pins and extension boards) and to act as a general-purpose computer. So it sits somewhere between an Arduino and a Raspberry Pi.


The BeagleBone Black also sits in that range: it is an ARM-based general-purpose computer, but it also has two real-time programmable microcontrollers (the PRUs) onboard. Which isn't to say that this Rex project has no value; if it is produced, it would offer a much easier out-of-the-box experience for someone who just wants to plug in some garden-variety I2C motors and get going than the Black does. But this sort of combined computer/microcontroller design isn't unique to it.


IO expansion and power handling is one differentiator from the Raspberry Pi. The Raspberry Pi has a limited GPIO breakout (useful for many applications, but not tailored for robotics). And, besides not being designed to drive peripherals, the RPi's power system has some issues[1] that even make hosting USB devices a pain.

It does seem like they could target the Beagle Bone Black and focus on the software, but they probably wanted more control over the power and peripheral connect strategy than the BBB provides.

The processor they chose includes an integrated DSP, which could be useful for sensor data processing, or even motor control. The original BeagleBoard offered an OMAP with a DSP, but TI is discontinuing the OMAP line[2], and the newer BBB uses a Sitara (Cortex-A8) processor without the DSP. I didn't see the Rex's processor choice listed on the Kickstarter page, but I'd guess it's something like TI's DM3730[3].

In summary, they seem to want to design a board specifically for robotics rather than building on an existing general-purpose platform. This gives them more control over the power and peripheral strategy and lets them achieve a higher level of integration. It could also simplify the software by limiting the number of busses and "tacked on" peripherals.

The founders come from Carnegie Mellon University, which has a first class robotics program. Also check out the CMUcam[4] project--an open source, computer vision module. The Pixy (CMUcam5) was funded[5] in September and looks really cool.

Edit: Check out the Rex page on the Alphalem site[6] too. It says that the "Rex can supply up to 20A directly from a 6-12V Ni-MH battery to connected devices. That's more than enough to power a couple servo drivers, Arduinos, rangefinders, and full servo load on an 18-servo hexapod robot!"

[1]: http://www.raspberrypi.org/phpBB3/viewtopic.php?f=63&t=23205

[2]: http://www.eetimes.com/document.asp?doc_id=1262580

[3]: http://www.ti.com/product/dm3730

[4]: http://www.cmucam.org

[5]: http://www.kickstarter.com/projects/254449872/pixy-cmucam5-a...

[6]: http://alphalem.com/pages/rex


I'm kinda curious about the "20A" part - the PCB traces required to not melt the board while passing that much current would be pretty beefy...

