Steven Den Beste has passed away.
Back when the interwebs were new and blogs had not yet become a thing, a few early ones stood out, especially in the more or less right-thinking universe. One was a quirky blog called USS Clueless, which tended to be far from clueless. USS Clueless, along with Instapundit and the blog that I’m not going to name, were the three go-to blogs of the early 21st century. The blog was written by Steven Den Beste and was almost always insightful.
Bill Whittle has a tribute here.
The server for USS Clueless is gone, but you can still see most of it on the Wayback Machine. Most of the politics are obsolete, but the ideas are not, and they still make for interesting reading. They are worth remembering in these troubled times.
Steve left political and general blogging because of all the trolls he attracted, but he kept an entertaining anime blog for about ten years or so.
Here’s one reason that he will be missed:
As a general rule, new hire engineers in software engineering tend to be nearly useless for the first 3-6 months after they come on board. Part of the reason why is that they have to go through a rather abrupt process of unlearning a lot of things they’d been taught at the university where they studied.
It’s not so much that the subject matter they studied was wrong, as that the university environment teaches habits and procedures which are diametrically opposite to those software engineers in embedded software actually use. Being a student teaches them virtually every possible lesson in how to be a failed engineer, and unless such students have been through an internship, they spend the first three months or more of their employment frustrated and confused and totally lost.
There are a lot of things wrong with how they did projects in school, but by far the most important was that they were graded in terms of the quality of their result (i.e. their assignments), and the more problems that the grading grad student found in their project, the worse grade they got. The university experience rewarded them for concealing problems.
In the industry, the exact opposite is the case. The most fundamental rule in engineering, even more basic than Murphy’s Law, is: Everyone fucks up.
Everyone makes mistakes. It’s a fact of life. It isn’t a cause for shame, it’s just reality. Just as engineers are in the business of producing successful designs which can be fabricated out of less-than-ideal components, the engineering process is designed to produce successful designs out of a team made up of engineers every one of which screws up routinely. The point of the process is not to prevent errors (because that’s impossible) but rather to try to detect them and correct them as early as possible.
There’s nothing wrong with making a mistake. It’s not that you want to be sloppy; everyone should try to do a good job, but we don’t flog people for making mistakes.
What’s wrong is not detecting the mistake until after you ship.
I’m quite the oldtimer in software engineering. I started working professionally in 1976 and I was part of the generation of programmers who collectively developed processes which helped to convert software into a reasonable engineering discipline. (It was a standing joke in the industry at the time that “Computer Science” would more properly be described as “Computer Craft”, because there was precious little scientific about it.) Before that point, software was a major product business but it wasn’t treated with the same kind of rigor and attention to quality as other kinds of engineering. Most commercial software prior to 1975 ran on medium sized or large mainframes. The hardware of such computers was expensive to fix once shipped, so the development engineers took a great deal of care to make sure that they got it right. However, software for those mainframes wasn’t developed to the same schedule, and it was routine in the industry to send updates to the software to customers on an ongoing basis. Since software could so easily be updated in the field, this permitted a certain amount of sloppiness.
It was the development of the microprocessor and the ROM which inspired the change. With cheap small computers and software burned into hardware, computer software could be used as part of larger products, and fixing software bugs was just as expensive as fixing any other kind of product flaw. To a far greater extent than any other programmers, those working on embedded software were compelled to develop procedures which would result in far greater quality than the industry norm, because the consequences of failure were far higher.
This first began to happen at some companies who sold extremely expensive products at very high price with a reputation for very high quality, such as Hewlett Packard or Tektronix. I worked at Tek, and Tek had routinely been spending quite large amounts of money on testing and product validation during the engineering cycle. It was natural that many of those procedures already developed for electronics and mechanical designs should be adapted to software, and they were. From the very first at Tek we were assigned independent test engineers at a ratio of about one for every five designers, for instance, and shipment required their approval. The test engineers worked for the manufacturing manager and were politically insulated from the engineering manager. These things did help, but not enough.
It became clear very early that software as an engineering discipline had unique characteristics and that it couldn’t be treated as just another form of EE. It was clear that the design methodology itself had to change.
In the kind of electronics being developed there at the time, it was usually pretty easy to modularize the system and define straightforward interfaces, and so you would assign one engineer to each module who would largely work alone, though they did do occasional large scale design reviews. When their designs were integrated after the first prototype build, then if they didn’t work together you could test what happened at the interface to see who wasn’t in compliance. For instance, on one project I participated in (the 7D02 logic analyzer) the physical implementation was a series of cards plugged into a backplane, and the interface was the wiring of each connector. Each card was assigned to a different engineer.
Software modularization goes way back, but the interfaces are vastly more complicated and much harder to manage, and modularization is hierarchical rather than level. On that project the EE’s had seven interfaces; we programmers had hundreds of them. Early software projects had been run along the lines of how our EE’s worked, with individual programmers being given large sections of code to own and develop, with integration taking place rather late between relatively mature bodies of code which virtually always didn’t work together cleanly. Even identifying where the problem was could be difficult, and fixing the problem could involve truly vast amounts of effort and a hell of a lot of backbiting, finger pointing and often outright hostility, as the schedule slipped and slipped and slipped.
The fundamental assumption behind that approach was wrong. It assumed that the primary job of software engineers was to develop code. It’s a natural assumption, but it isn’t correct. It turned out that as projects became increasingly large and complex, the primary job of software engineers was to manage interfaces. The largest number of bugs which strongly affected quality and timely shipment did not come from improper implementation of code within a module. They nearly always came from mismatched interfaces between modules.
I was involved in one of the very first development teams at Tektronix to try an entirely new way of developing software. It had first been tested by a research team in Tek Labs (the internal long term research group at Tek), and we learned it from them. To some extent the way they were doing it was impractical for real application, but after observing them a few times, we came away and streamlined it for our purposes, and the result was quite good. I learned a great deal from that experience, and on the next project (the 1240 logic analyzer) where I was one of the two most senior software guys, I insisted that we use an even more rigorous form of the process. What we were doing made top management extremely nervous, but our product manager was a software guy and he defended us, and on a 2.5 year project we brought the software in 3 weeks ahead of schedule, with a level of quality virtually unknown at Tek at the time.
But a year into the project we hadn’t written a line of code. The first project leader had been useless and had largely stopped participating in the process entirely (Sam and I ran it) and they eventually replaced him. The new guy came in, spent a couple of weeks talking to us and looking at what we were doing, and then went back to management and told them that we knew exactly what we were doing, were doing it exactly the right way, and that the best thing management could do is to leave us alone.
The process was referred to at the time as “egoless programming” and the core idea of the methodology was that we live and die as a team. Either our product works and is successful, in which case we are all winners, or it doesn’t and we are all losers. It does no good on a failed project to point fingers and prove who was to blame; in egoless programming everyone is to blame. The only criterion which matters is whether the project succeeds.
When we lose the people who influence our lives, there is always a true and great loss. This is another one in what has been a very bad year. It’s fitting that the last Chizumatic post was titled: “Board up the windows, ’cause they’s a storm coming!”
The top figure is some of the cheesecake that Steve liked so much.
Update: the full USS Clueless archive.
Update 2: a tribute from National Review.