Hacker News | soravux's comments

In my university, both machine (and deep) learning and FPGA + chip design are specialties of the electrical engineering department. Earlier this year, we launched an industry-oriented AI master's degree. Taking your extra classes in chip design would give you a solid foundation for this kind of work.


Would you kindly provide a link to that course at your university? Thanks :)


The master's program in AI I was referring to is only available in French (Laval University, Québec City).

The chip design expertise is provided by most electrical engineering departments, with courses usually named VLSI design, FPGA/ASIC development or microelectronics.

If you apply for a master's degree (in AI, for example), you can often mix and match specialty classes and ask for those chip design courses to be added to your curriculum.

If you are a hands-on person curious about the matter, you can buy an FPGA (~$50 for an entry-level board) and follow a Verilog or VHDL tutorial online. Put simply, an FPGA is a chip that can be "rewired" at will, which makes it very useful for learning or for prototyping before building a production chip.
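To make the "rewiring" idea concrete, here is a toy software model (purely illustrative; real FPGAs are configured with Verilog/VHDL, not Python) of a lookup table (LUT), the basic configurable building block of an FPGA:

```python
# Toy model of a 2-input FPGA lookup table (LUT).
# "Rewiring" the chip amounts to loading a different truth table
# into the same physical block.
def make_lut(truth_table):
    # truth_table lists the output for inputs (0,0), (0,1), (1,0), (1,1)
    return lambda a, b: truth_table[(a << 1) | b]

xor_gate = make_lut([0, 1, 1, 0])  # configure the LUT as XOR
and_gate = make_lut([0, 0, 0, 1])  # "reconfigure": same block, new table

print(xor_gate(1, 0), and_gate(1, 0))  # → 1 0
```

A real FPGA contains thousands of such LUTs plus a programmable routing fabric connecting them, which is why the same silicon can implement wildly different circuits.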


Thanks for the info :)

I hope it will be available in English soon.


As a complement, the question of colorization was recently revisited using deep learning methods [1]. All very interesting work!

[1] Zhang, Richard, Phillip Isola, and Alexei A. Efros. "Colorful Image Colorization." European Conference on Computer Vision (2016). http://richzhang.github.io/colorization/


You could always optimize it by combining both methods in the process somehow.


Interesting topic. We did something similar in the past, using the bytecode instead of the AST to accelerate genetic algorithm evolution:

http://multigrad.blogspot.com/2014/07/low-level-frenzy.html http://multigrad.blogspot.com/2014/06/fun-with-python-byteco...
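For context, a point mutation on raw CPython bytecode (as opposed to rewriting an AST) can be sketched like this. This is my own minimal illustration, not the code from those posts; note that actually *executing* random mutants can hard-crash CPython, since it has no bytecode verifier:

```python
import random
import types

def target(x):
    return x + 1

def mutate(code, rng):
    # Point mutation: flip one byte of the raw bytecode.
    raw = bytearray(code.co_code)
    raw[rng.randrange(len(raw))] = rng.randrange(256)
    return code.replace(co_code=bytes(raw))

def run(code, arg):
    # Most random mutants are invalid, so guard every execution
    # (a try/except still won't catch a hard interpreter crash).
    try:
        return types.FunctionType(code, {})(arg)
    except Exception:
        return None

mutant = mutate(target.__code__, random.Random(0))
print(run(target.__code__, 3))  # the unmutated individual still works: 4
```

Mutating bytes is much cheaper than re-parsing and re-compiling a mutated AST, which is where the speedup comes from.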

I'm curious: what was the original use case for pyast64? It feels like we could apply it to genetic programming fairly straightforwardly.


Sorry, I didn't have a use case -- I did it purely for fun. However, I got the idea when I was trying to optimize some Python bytecode.


Human vision works well because our brain has an incredible quantity of priors to guide it, that is, similar past experiences that explain most of what we are seeing. When your eyes see something, only a small amount of information is passed to your brain (motion, for example). Your brain "fills in" the missing pieces with what it's used to. That's why we don't see the blind spot created by the optic nerve's entry point in the eye, and why we often miss things that are hiding in plain sight without motion.

Illusions arise when those visual priors are deceived. Our brain expects something and makes us see it that way, but that's not what is actually happening (perhaps because the image was engineered to defeat those expectations, as with the images this thread links to). The mental model we have of a standard "sight" doesn't capture those examples well; our brain was never trained to work them out, I'd guess because there was no advantage in doing so (in terms of evolution, or of learning as a kid). Our brain is only trained to extract information efficiently from "plausible images" (lit by sun-like light, taken on Earth, etc.); feed it random noise and it will try to explain it with things it knows (which is called pareidolia).

In machine learning vision, we re-learn, usually from scratch (or by fine-tuning), at each experiment. This generates (or modifies) the learned priors. Think of the priors as the "default" image (in terms of a complex internal representation, not pixels) that helps you think about the problem at hand. For a motion detection/tracking problem, the optimal default representation will be different from the one most useful for classification or segmentation.

What I want to say with those examples is that machine learning computer vision is prone to illusions, that is, images that defeat (fall too far away from, or are poorly explained by) its internal representation space and/or default representation. Moreover, each algorithm (be it a neural network, an SVM, or anything, really) has a different internal representation, so different images will act as illusions for each of them. An illusion for one model won't necessarily be an illusion for another.

The thing is, we are far from mastering advanced machine learning, in the sense that we don't have optimality proofs for the capacity, architecture, and filters of a deep neural network on a given task, for example. There's a lot of recent research on those illusions, such as adversarial examples and adversarial networks. It seems to indicate that these illusions are quite unlike human visual illusions and are instead due to the mathematical nature of machine learning: adding small noise (sometimes with a lower magnitude than the smallest value representable by standard image formats!) to a correctly classified image can result in a wrong yet very confident prediction. The most prominent viral example of this was a school bus that, after some small noise was added to the image, was classified with high certainty as an ostrich. Other examples can be found in the introduction of [1].

[1] https://openai.com/blog/adversarial-example-research/
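A toy numerical illustration of why this happens (hypothetical numbers, and a bare linear classifier standing in for a deep network): each coordinate is perturbed by a tiny amount, but the effect accumulates over thousands of dimensions and flips the decision.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                                   # number of "pixels"
w = rng.normal(size=d)                       # toy linear classifier: sign(w @ x)
# An input that is confidently on the positive side of the boundary:
x = rng.uniform(-0.1, 0.1, size=d) + 0.01 * np.sign(w)

eps = 0.02                                   # tiny per-pixel perturbation
x_adv = x - eps * np.sign(w)                 # FGSM-style step against the gradient sign

print(w @ x > 0, w @ x_adv > 0)  # classification flips: True False
```

The score shifts by eps times the L1 norm of the weights, which grows with the dimension; that is Goodfellow et al.'s linearity explanation for why imperceptible perturbations suffice in high-dimensional image space.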


Neat project! Just curious: why use jp2a (spelled j2pa in the article) over alternatives such as libcaca (img2txt) or aalib?


That is the exact goal of the post. I arrived at the algorithm through evolution from random initialization. I enforced absolutely no heuristic to make it converge to this equation.


I know, but can we prove there's no heuristic on that algorithm? Because if the answer is "yes", this should be general enough to find other optimizations. It's fun to think about.


The best answer I can provide is: the code is there, and there are no heuristics in it. Every time you run it, you get different results (because of the random initialization seeds and the stochastic nature of evolutionary algorithms). It may find the (a - (x >> 1)) equation on a specific execution, or it may not. Over the runs I made, this equation (or a similar one) was the most popular, and nothing came close to it. In fact, it finds a lot of other optimizations, either less accurate or way more complex. I remember getting equations with tens and tens of operations and mediocre accuracies.
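To make the setup concrete, here is a minimal sketch (my own simplification, not the post's actual code) of the fitness function such an evolutionary algorithm optimizes: a candidate integer expression is scored by how well, after reinterpreting the bits as a float, it approximates 1/sqrt(x). The famous magic-constant expression scores far better than a do-nothing baseline, with no structural hints given to the search.

```python
import struct

def f2i(f):  # reinterpret a float32's bits as a uint32
    return struct.unpack('<I', struct.pack('<f', f))[0]

def i2f(i):  # reinterpret a uint32 as a float32
    return struct.unpack('<f', struct.pack('<I', i & 0xFFFFFFFF))[0]

SAMPLES = [0.25, 1.0, 2.0, 10.0, 100.0]

def fitness(expr):
    # Lower is better: total error of the bit-level guess vs 1/sqrt(x).
    try:
        return sum(abs(i2f(expr(f2i(v))) - v ** -0.5) for v in SAMPLES)
    except Exception:
        return float('inf')  # broken individuals get the worst score

def magic(i):
    return 0x5f3759df - (i >> 1)  # the evolved structure

def identity(i):
    return i                      # do-nothing baseline

print(fitness(magic) < fitness(identity))  # → True
```

An EA then just generates random expression trees over {+, -, >>, constants, x}, scores them with this function, and breeds the fittest; nothing in the fitness function points at the (a - (x >> 1)) shape.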


This is really interesting. I'll be sure to include these references in my next blog post on the subject. The goal of the post was to find the equation from scratch, though. As you can see, optimizing the constant was less the focus of the post.


Not to mention your blog engine/site is broken. Please fix the fact that we can't click the right scrollbar because of your fancy-schmancy JavaScript side menu.

Put it on the left-hand side, or don't overlay it over the scrollbar.

Damn #hipstercoders


The placement of any site elements on top of the scrollbar is indeed annoying. Is that a Blogspot thing, or is it due to a customization?


It's Blogspot's "Dynamic Views". Complete dreck. I can only assume it's the default when you create a blog nowadays, as I have no idea why anybody (least of all developers) would opt into that crap.


Someone at blogspot needs a smack on the head. The site takes several seconds to load, which is absolutely ludicrous for displaying plain text, and when it finally does it's unusable as you've noted.


Sadly, it's a commonly used theme, and it also displays nothing at all if you don't enable JavaScript.


[deleted]


Requiring tons of JavaScript and seconds of load time to display plain text? Yeah, that is pretty crazy.


Everything is the Blogspot default. I only added the Gist source-code embed and the LaTeX math JavaScript. If anyone is offering a new design, I would gladly take it... as long as I can concentrate on the content and not the container.


Eureqa seems like wonderful software for this specific application. If only I could spare $2,500 for a one-year license...


It has a 30-day free trial, and you can use it for free if you limit yourself to a few hundred data points (not ideal, but workable).

It's not perfect for this application, though, because it doesn't support bitwise operations or using execution time as a fitness function.


There are cheaper ones such as GeneXproTools.


It's JavaScript that loads the Gist into the page. You may have NoScript or similar preventing the loader from fetching the code. The first snippet is the one from the Wikipedia page (fast inverse square root) and the second is this one: https://gist.github.com/soravux/9673839


You are right. I reviewed the USB Condom on my blog, discussing the shortcomings of their implementation: http://multigrad.blogspot.ca/2013/09/review-of-usb-condoms.h...

