Let's answer the question "Why can't we write perfect software?". We are all haunted by the notion of producing digital junk in the form of software that is always riddled with bugs of one form or another. I think it is important to appreciate the fact that "humans cannot write perfect code". But don't take my word for it; let me provide a proof.
Theorem 1: "Humans cannot always write perfect code"
To prove this ->
- a. I will explain the "Empirical model of probability".
- b. Derive some "Results from the empirical model".
- c. Explain the "Converse of the results from the empirical model" (the converse of the conclusions drawn in step b).
- d. Prove Theorem 1 from steps a, b, c (mostly c).
- e. Answer the critics of the above reasoning.
a. Empirical model of probability.
------------------------------------------
Ever wondered why probability is studied? Is probability the study and analysis of random processes/experiments? The study of anything that is truly random is, by definition, futile. For example, if you are going to be picked randomly for security screening at an airport, then any study of ways to avoid that screening is quite dumb. Another intuitive example: knowing the probability of getting heads in a coin toss does not help you win any particular coin toss.
Yet probability exists as a field of mathematics and is quite successful at it, so something is flawed in the conclusions drawn in the paragraph above. To understand the flaw we need to understand the empirical model of probability. The conclusion that "probability is the study of random experiments" was simply wrong.
Probability is the study of the "long-term stability of outcomes of a random experiment", and it turns out that this long-term stability is quite deterministic, not random. What is long-term stability of results? It means that if you run a gazillion coin tosses and measure how many of them come up heads, the fraction tends close to 1/2. I deliberately say "tends close to 1/2, or 0.5": the fraction gets closer and closer to 0.5 as the number of experiments grows from a gazillion to a trazillion, and so on.
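Here is a minimal sketch of that long-term stability, assuming nothing more than Python's standard random module and a simulated fair coin (the toss counts and the 0.5 bias are just illustrative choices):

    import random

    def heads_fraction(tosses, seed=1):
        # Toss a simulated fair coin `tosses` times; return the fraction of heads.
        rng = random.Random(seed)
        heads = sum(rng.random() < 0.5 for _ in range(tosses))
        return heads / tosses

    # The fraction is noisy for small runs and creeps toward 0.5 as the runs grow.
    for n in (100, 10_000, 1_000_000):
        print(n, heads_fraction(n, seed=n))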
Good! So now we know what probability is all about, and for the record, it does not help you win any lottery tickets or coin tosses. Sad but true: most of what I was taught about probability was all about winning lotteries, card decks and tosses.
(Which just goes to show that the explanation of any mathematical model is worthless if it is not backed up by an empirical model.)
b. Results from the empirical model.
------------------------------------------
The take-home points from the above paragraphs are:
b.1. Probability is not the analysis of something random.
b.2. For a probability to make "MORE" sense you need a countably infinite number of runs of the random experiment.
b.2.1. The conditions of the experiment have to be deterministic and the same on every run (otherwise it makes no sense).
I will use the converse of b.2 and b.2.1 to prove our Theorem 1.
c. Converse of the results from the empirical model.
---------------------------------------------------------
b.2. For a probability to make "MORE" sense you need a countably infinite number of runs of the random experiment.
b.2.1. The conditions of the experiment have to be deterministic and the same on every run (otherwise it makes no sense).
Converse of b.2 and b.2.1:
c.1. If you are getting predictable (converging) results from the gazillion runs of a random experiment, it means that the conditions under which the experiment runs are stable and deterministic.
(Well, think about it -> it is quite intuitive; don't overcomplicate it.) The contrapositive is just as useful: if the conditions keep changing, the results need not converge at all, as the sketch below shows. This ends the backdrop required for the proof.
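Here is a minimal sketch of that contrapositive, assuming a toy experiment whose conditions keep shifting; the 0.1/0.9 biases and the ten-fold regime lengths are invented purely for illustration. The coin tossed under stable conditions settles near 0.5, while the coin tossed under shifting conditions never settles at all:

    import math
    import random

    def running_fractions(n, bias_at, seed=1):
        # Running fraction of heads after each of n tosses, where bias_at(i)
        # gives the coin's bias (the "experimental conditions") on toss i.
        rng = random.Random(seed)
        heads, fractions = 0, []
        for i in range(1, n + 1):
            heads += rng.random() < bias_at(i)
            fractions.append(heads / i)
        return fractions

    def drifting_bias(i):
        # Unstable conditions: the bias flips between 0.1 and 0.9, and each
        # regime lasts ten times longer than the previous one, so the running
        # fraction keeps swinging (roughly between 0.17 and 0.83) forever.
        regime = math.floor(math.log10(i)) if i >= 10 else 0
        return 0.1 if regime % 2 == 0 else 0.9

    stable = running_fractions(10**6, lambda i: 0.5)    # same coin on every toss
    unstable = running_fractions(10**6, drifting_bias)  # conditions keep changing
    for n in (10**2, 10**3, 10**4, 10**5, 10**6):
        print(n, round(stable[n - 1], 3), round(unstable[n - 1], 3))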
d. Proof of Theorem 1 from steps a, b, c.
-----------------------------------------
Now let's try to apply the empirical model to software development.
Software D.E.V.E.L.O.P.M.E.N.T.
The "development" part in software development is a random experiment. You for example cannot control many aspects of the development process like.
d.1. The algorithmic understanding level of the programmer.
d.2. The understanding of the programming model used to implement an algorithm.
d.3. Sometime solutions are based on random algorithms.
I could go on and on and on
So even a gazillion runs of a software development process cannot yield a meaningful probabilistic measure of software quality. (This follows from the contrapositive of c.1: the conditions are not stable, so b.2 and b.2.1 fail and no long-term stability of outcomes can be expected.) We cannot have probabilistic measures of software quality, because the outcomes come from random experiments run under ever-changing experimental settings. So, at the very least, we cannot state a probability of writing bug-free code.
Hence proved.
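To make the point concrete, here is a deliberately toy model; the eras, release counts and the 0.30/0.05 "bug-free chances" are invented for illustration and are not measurements of any real project. Each simulated "release" succeeds or fails under conditions (d.1, d.2, d.3) that differ between eras, so the bug-free fraction observed in one era says nothing stable about the next:

    import random

    def bug_free_fraction(releases, bug_free_chance, seed):
        # Each simulated release is bug-free with probability bug_free_chance;
        # that chance stands in for the per-run conditions d.1, d.2 and d.3.
        rng = random.Random(seed)
        hits = sum(rng.random() < bug_free_chance for _ in range(releases))
        return hits / releases

    # Era 1: a hypothetical team comfortable with its algorithms and tooling.
    era1 = bug_free_fraction(10_000, bug_free_chance=0.30, seed=1)
    # Era 2: on paper the "same" experiment, but the conditions have silently
    # changed (new hires, new framework, a randomized algorithm at the core).
    era2 = bug_free_fraction(10_000, bug_free_chance=0.05, seed=2)

    print("bug-free fraction, era 1:", round(era1, 3))
    print("bug-free fraction, era 2:", round(era2, 3))
    # The two eras disagree, so neither fraction is "the" probability of a
    # bug-free release: there is no stable long-term frequency to point at.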
e. Answers to critics of the above reasoning.
-----------------------------------------------
It is always nice to BLAST AWAY the simple critiques of a proof.
Q. Are you dumb, to write such a proof?
A. I cannot conclude anything about my own dumbness, so I cannot answer :). It is all about what you think. Unfortunately, the interpretation of all sciences becomes quite subjective after graduation ;).
Q. Well, my "hello world" runs perfectly, so to hell with your proof?
A. For one, you don't appreciate the number of lines of code that had to execute to get your moronic "Hello world" up and running.
Starting from:
1. Helloworld.cpp
2. Compiler.cpp Linker.cpp Assembler.cpp
3. Loader.cpp
4. XYZ.cpp
5. Microcode.vhdl
6. Solarflares.god
So you see, your helloworld is just a simple pimple on the arse of the universe. Most of the components involved in getting your helloworld up and running share one essential property: they have had something close to a countably infinite number of test runs, all the way from Compiler.cpp to Solarflares.god. So it is not much of an argument.
Please read "Reflections of trusting trust - Ken Thompson (its a 3 page blaster)" To see how you helloworld can fail. That with a perspective on security though.