Hacker News | new | past | comments | ask | show | jobs | submit | joestringer's comments | login

The packet-processing BPF programs are less tightly bound to Linux kernel APIs than you might think. Even in Linux, there has been motivation to make the APIs more generic to support different kernel hooks for packets, in particular XDP which doesn't operate on the standard internal packet buffer representation (skbuff).


Microsoft doesn't support XDP, do they? (And you can only use XDP on Linux in certain circumstances.) The cls_bpf stuff is all pretty heavily tied to skbs.

Also, even in an XDP program, you're still likely to use a bunch of perf stuff, which is again pretty Linux-specific.


Even looking at the original BPF which focused on filtering packets as they are forwarded to userspace (think tcpdump)[1] and looking at the extensions that eBPF provides on top to hook into various subsystems[2,3], it's clear that this is going far beyond the use cases originally envisioned. I'd love to see an eBPF paper to follow up / contrast with the '93 USENIX BPF paper.

[1]: https://www.tcpdump.org/papers/bpf-usenix93.pdf

[2]: https://ebpf.io/what-is-ebpf#hook-overview

[3]: http://www.brendangregg.com/BPF/bpf_performance_tools_book.p...


FWIW: I just wrote a long-ish post on the history from BPF (and before BPF) to eBPF and XDP:

https://fly.io/blog/bpf-xdp-packet-filters-and-udp/

An interesting fact is that packet filtering as a problem domain has been dominated by in-kernel virtual machines going back into the 1980s; it's an idea that comes all the way from Xerox.


Need to know what type of water the people at Xerox Palo Alto were drinking.

They pioneered many groundbreaking, game-changing advances in computing, including (but not limited to) the windowing desktop environment, the integrated programming/structural editor with Cedar/Tioga, SQL (the team moved to Oracle), Ethernet networking, the laser printer, VLSI, and the Jupiter operational transform for distributed computing (a precursor to CRDTs). Each of these technologies is now an industry of its own.


I kinda feel like Dealers of Lightning should be required reading at this point[1], both for the breadth of invention and how they squandered it.

[1] https://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer...


Unfortunately we are still quite far from the safe computing platforms they were using at Xerox (Interlisp-D, Smalltalk, Mesa and Mesa/Cedar).

The best we have gotten so far are the hybrids .NET/Windows, JME, Android Java/Linux, Chrome/Linux, Swift/iOS/macOS.


The shift from BPF to eBPF was less of an evolutionary step than the name might indicate. The overlap with the name BPF is primarily due to the requirement for eBPF to be a superset of BPF, in order to avoid having to maintain two virtual machines long-term. This was one of the conditions for eBPF to be merged, and in that context the name eBPF made sense.


Disagree (see sibling post). Classic BPF could have been translated into any virtual machine design they came up with (because classic BPF is incredibly simple). When McCanne came up with the same design in 1998, his team called it "BPF+", for the same reason eBPF is called eBPF --- because it is pretty much an evolution of the earlier idea.


I'm not going to argue with you. You can read up on the initial naming and framing in slides from the Netconf and Plumbers conferences, as well as the LKML archives.


Remember when Microsoft claimed to invent various computing technologies, even though they had been around since the 70s or earlier?

That’s the type of history you’re articulating here.


To be clear: the dispute over the history of BPF/eBPF is not interesting, and I don't want to litigate it any more than they do.

I'm just here to say that eBPF and BPF are in fact pretty closely related. The eBPF design is uncannily similar to Begel, McCanne, and Graham's BPF+ design[1]; in particular, the BPF+ paper spends a fair amount of time describing an SSA-based compiler for a RISC-y register ISA, and eBPF... just uses (at this point) LLVM for a RISC-y register ISA.

Most notably, the fundamental execution integrity model has, until pretty recently, remained the same --- forward jumps only, limited program size. And that's to me the defining feature of the architecture.

The lineage isn't important to me, so much as the sort of continuous unbroken line from BPF to eBPF, regardless of what LKML says.

[1]: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.597...


This seems to be the wrong way around.

For both traditional kernel modules and eBPF programs, you compile the code ahead of time. For kernel modules, if you have a bug, you load it into the kernel and the kernel hard crashes at runtime. For eBPF programs, the kernel will reject the program before you inject it.

In practice, to deploy eBPF programs, you end up adding the kernel verification step to your CI/dev workflow, so that by the time you ship your programs you know they will safely load and safely run in real environments.


With the Secure Shell extension, you can make it significantly more useful. Of course, you still need a box to SSH into.

https://chrome.google.com/webstore/detail/secure-shell/pnhec...


What if you cloned the repo before the license was added? Do you then get code in the public domain?


No, the default state of software is copyrighted and not redistributable. You need a specific license to give you the right to modify and/or redistribute software (though I think copyright should be changed to allow private modification for certain purposes).


crosh is being deprecated in favour of a newer ssh client:

https://chrome.google.com/webstore/detail/secure-shell/pnhec...


Anyone else familiar with "Arbitrary Free Protection"? 'cause it's only turning up one search result for me.


It's actually quite scary to read the comments on TFA and see that indeed, people did know about this breach.


The first step to making software do what humans do, is to analyse what humans do.



