

> The "n.1" restaurant in terms of monetary value is McDonalds. However, they have a goal in their mind: to be the n.1 whatever they do. They have also wasted huge amounts of money - it's documented everywhere, every time one of their projects gets shut down. The difference lies in the ability to stand up and get on their feet again - learn from failures. As long as the mentality is driven by the idea of the acquisition, not by ambition, things won't change - the most efforts will be spent in hiding the pile of s * that employees do, trying to draw the right numbers on the right graphs. Most of them want to survive, some want to get a lot of money fast, and who knows, maybe in 5-10 years sell the company for 1,2,10 mln USD. The current reality is that most companies are average or below-average with no great plans to further develop or to become the next IBM, NASA, Intel, or whatever. We think we are in a better business just because IT is something relatively new, which not everyone fully understands, so we think that we are the best in everything we do, because we bring innovation.


Yes, you must pay a lot to get the best cook, who will take all the time and all the staff he needs to prepare the best recipes, and so forth.

Do you want to be the No. 1 restaurant? Then you must pay, so that your business is about good food, not marketing - notice that the best restaurants don't even have good websites, because the food speaks for them. I'd add that IT is a cost in the same way that a baker is a cost for a bakery, a cook for a restaurant, or a mechanic for a repair shop. However, competition is only one factor, albeit a very important one: it doesn't matter if a competitor ships buggy RoR-based systems a few months earlier. The risks of complex preprocessor use can be mitigated by implementing the necessary macros in a structured way that keeps the complexity under control:
The Power of Ten – Rules for Developing Safety-Critical Code
The recommendations look good to me and (with one caveat) correspond to rules that I apply when writing C code with a high reliability requirement. It looks to me as though the recommendation that RankRed has titled "Rule No. 5 - Low Assertion Density" would be better described as "High Assertion Density" - the recommendation is for a minimum of two assertions per function (and functions are supposed to be short per rule 4). My caveat concerns "Rule No. 8 - Limited Use of Preprocessor", which bans all complex uses of the preprocessor. The problem is that it is common in C to encounter situations where the only way to avoid lots of code duplication is to store a table of facts in a macro definition and use the preprocessor to expand those facts into the relevant bits of code in each place where they are used. So in these situations you face a trade-off between the risks due to complex preprocessor use and the risks due to code duplication (namely, a maintainer might change a fact in one place where it is used but fail to change it in another). My experience is that the risks due to code duplication are very high, and so it's worth the risk of using complex preprocessor macros to avoid them.
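To make the table-of-facts pattern concrete, here is a minimal sketch of the common X-macro idiom. The error codes, messages, and the ERROR_TABLE macro are hypothetical, invented for this illustration; the point is that the table is written once and every expansion of it is generated by the preprocessor, so the enum and the message array cannot drift out of sync the way hand-duplicated code can.

```c
#include <stdio.h>

/* Hypothetical table of facts: each entry pairs an error code with a message.
 * The table is defined once; the preprocessor expands it wherever the facts
 * are needed, so adding or changing an entry here updates every use. */
#define ERROR_TABLE(X)                    \
    X(ERR_OK,      "success")             \
    X(ERR_IO,      "I/O failure")         \
    X(ERR_TIMEOUT, "operation timed out")

/* Expansion 1: generate the enum of error codes. */
#define AS_ENUM(name, msg) name,
enum error_code { ERROR_TABLE(AS_ENUM) ERR_COUNT };
#undef AS_ENUM

/* Expansion 2: generate the matching message strings, in the same order. */
#define AS_STRING(name, msg) msg,
static const char *error_messages[] = { ERROR_TABLE(AS_STRING) };
#undef AS_STRING

int main(void)
{
    /* Both expansions stay in sync automatically. */
    for (int i = 0; i < ERR_COUNT; i++)
        printf("%d: %s\n", i, error_messages[i]);
    return 0;
}
```

The cost is exactly the one the rule warns about: a maintainer must understand the macro machinery before touching the table. The benefit is that a fact lives in one place instead of two or more.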
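On the assertion-density point, here is a minimal sketch of what "at least two assertions per short function" might look like in practice; the copy_bytes helper is hypothetical and only serves to show a precondition/postcondition pair.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helper: copy n bytes, with the precondition and the
 * postcondition made explicit, giving the function two assertions. */
static void copy_bytes(unsigned char *dst, const unsigned char *src, size_t n)
{
    assert(dst != NULL && src != NULL);          /* precondition: valid buffers */
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
    assert(n == 0 || dst[n - 1] == src[n - 1]);  /* postcondition: data was copied */
}

int main(void)
{
    unsigned char a[4] = {1, 2, 3, 4}, b[4] = {0};
    copy_bytes(b, a, sizeof a);
    return 0;
}
```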
