The End-to-End argument for distributed systems

The following paper is interesting:

I found it in the following talk about NATS:


This paper is simply amazing. It helps you change your mindset about how to approach distributed computing.

Some quotes from the end-to-end paper:

This paper presents a design principle that helps guide placement of functions among the modules of a distributed computer system. The principle, called the end-to-end argument, suggests that functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level. Examples discussed in the paper include bit error recovery, security using encryption, duplicate message suppression, recovery from system crashes, and delivery acknowledgement. Low level mechanisms to support these functions are justified only as performance enhancements.

Thus the amount of effort to put into reliability measures within the data communication system is seen to be an engineering tradeoff based on performance, rather than a requirement for correctness.

The arguments that are used in support of reduced instruction set computer (RISC) architecture are similar to end-to-end arguments. The RISC argument is that the client of the architecture will get better performance by implementing exactly the instructions needed from primitive tools; any attempt by the computer designer to anticipate the client’s requirements for an esoteric feature will probably miss the target slightly and the client will end up re-implementing that feature anyway.
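To make the first quote concrete, here is a minimal sketch (my own illustration, not from the paper) of an end-to-end integrity check for a file transfer. The file names are hypothetical. TCP and link-level checksums may catch most corruption along the way, but only this application-level comparison establishes correctness; the lower-level checks are, as the paper says, performance enhancements.

```go
// Sketch: end-to-end integrity check after a file transfer.
// The lower layers may already checksum the data in flight, but only the
// application can confirm that what arrived is what was sent.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// fileDigest returns the SHA-256 hex digest of a file's contents.
func fileDigest(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	// Hypothetical paths: the sender's original and the receiver's copy
	// after it has passed through the network, disks, and buffers.
	sent, err := fileDigest("original.dat")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	received, err := fileDigest("received.dat")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if sent != received {
		// End-to-end check failed: the transfer has to be retried at the
		// application level, no matter how reliable the links claimed to be.
		fmt.Println("mismatch: transfer must be retried")
		os.Exit(1)
	}
	fmt.Println("end-to-end check passed")
}
```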

Could the Unix/Linux practice of keeping policy out of the kernel and in user space be another example of the end-to-end argument?

I am sure Linux is quite disciplined about its syscalls, but the BSDs aren't so much.
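As a small illustration of that mechanism/policy split (my own sketch, not anyone's canonical example): the kernel exposes setpriority(2) as a mechanism, but takes no position on which processes deserve which priority. That policy decision lives entirely in user space, for instance in a tiny tool like this one.

```go
// Sketch: a user-space "policy" tool built on a kernel "mechanism".
// The kernel provides setpriority(2); deciding which pid gets which
// niceness is left to programs like this.
package main

import (
	"fmt"
	"os"
	"strconv"
	"syscall"
)

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: renice-sketch <pid> <niceness>")
		os.Exit(2)
	}
	pid, err := strconv.Atoi(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "bad pid:", err)
		os.Exit(2)
	}
	nice, err := strconv.Atoi(os.Args[2])
	if err != nil {
		fmt.Fprintln(os.Stderr, "bad niceness:", err)
		os.Exit(2)
	}

	// Policy decision, made here in user space: this process gets this
	// niceness. The kernel only carries it out.
	if err := syscall.Setpriority(syscall.PRIO_PROCESS, pid, nice); err != nil {
		fmt.Fprintln(os.Stderr, "setpriority:", err)
		os.Exit(1)
	}
	fmt.Printf("pid %d reniced to %d\n", pid, nice)
}
```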