A Brief Moment of Fame

These things change rapidly. But at this moment, a search on Amazon.com for UML across all categories yields this result:

Result #20: Ulterior Motive Lounge: UML, 80s Flicks, and Bunny Slippers by Martin L. Shoemaker (Kindle Edition – Dec 19, 2011), Kindle eBook, $9.99. (The Books category alone lists 4,206 matching items.)

A search for UML under Books finds this:

Result #16: Ulterior Motive Lounge: UML, 80s Flicks, and Bunny Slippers by Martin L. Shoemaker (Dec 19, 2011), Kindle Edition, $9.99.

And a search for UML in the Kindle store finds this:

Result #11: Ulterior Motive Lounge: UML, 80s Flicks, and Bunny Slippers by Martin L. Shoemaker (Kindle Edition – Dec 19, 2011), Kindle eBook, $9.99.

And these are the results in the specific categories where the book is listed:

Amazon Best Sellers Rank: #70,014 Paid in Kindle Store

That will all change tomorrow, surely. But it feels great today!

An Argument for Requirements Analysts

Productivity vs Defect Removal

An attempt to trade quality for cost or schedule actually results in increased cost and a longer schedule.

Steve McConnell,
Professional Software Development

What has long been known in other businesses is true for software development as well: if you cut corners for shorter schedules or lower costs, you will get longer schedules, higher costs, and higher defect rates; but if you take the right measures to lower defect rates, you get shorter schedules and lower costs as well. As Crosby wrote, “Quality is free.” But it’s free only in terms of ROI, meaning the investment must be made first; and it’s only free if you first define what you mean by “quality”.

Fortunately, Crosby provided the appropriate definition as well: quality is conformance to requirements. That can be a concrete, quantifiable definition; but in some way it just moves the problem down the road, leaving us to define requirements: not just the term, but the specific requirements of our projects. It leaves us with this inescapable truth:

If we don’t know our requirements, we can’t have quality.

…we must define quality as “conformance to requirements” if we are to manage it.

Philip B. Crosby,
Quality is Free: The Art of Making Quality Certain

Analysis Capability and Impact on Schedule

Data from Boehm et al., Software Cost Estimation with COCOMO II

Recent surveys have found that the most frequent causes of software project failure have to do with requirements problems – requirements that define the wrong system, that are too ambiguous to support detailed implementation, or that change frequently and wreak havoc on the system design.

Steve McConnell,
Professional Software Development

Poorly defined requirements are endemic to the software development industry. Boehm’s research on factors that affect development schedules and costs shows that:

Excellent requirements analysts can reduce a project’s schedule by almost 30%, while inadequate analysis can increase the schedule by over 40%.

Again and again, schedule pressures lead teams to start developing before they have sufficient requirements definition.

Even though requirements analysis is a key skill, the topic isn’t as “hot” as new technologies and tools that are promoted by vendors and conferences and magazines. And many development teams feel swamped just trying to keep up with technologies and tools.

Martin L. Shoemaker

Influences on Schedule

Data from Boehm et al., Software Cost Estimation with COCOMO II

Among all personnel factors, Analyst Capability has the widest Influence Range (a multiplier range of 2: worst case divided by best case). Teams may tout their application experience as a strength, but application experience has an Influence Range of only 1.51. Application and platform experience combined have an Influence Range of 2.11. Teams would never throw out their domain knowledge and develop for an entirely new platform; yet poor requirements practices have almost the same Influence Range.
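
To make that arithmetic concrete, here’s a small Python sketch of how an Influence Range is computed from worst-case and best-case effort multipliers. The multiplier values below are approximate figures of the kind reported for the COCOMO II post-architecture model, included only to reproduce the ratios quoted above; check Boehm et al. for the authoritative tables.

```python
# Illustrative COCOMO II-style effort multipliers: (worst case, best case).
# Approximate values chosen to reproduce the Influence Ranges quoted above;
# see Boehm et al. for the real tables.
MULTIPLIERS = {
    "ACAP (analyst capability)":     (1.42, 0.71),
    "APEX (application experience)": (1.22, 0.81),
    "PLEX (platform experience)":    (1.19, 0.85),
}

def influence_range(worst: float, best: float) -> float:
    """Worst-case effort multiplier divided by best-case multiplier."""
    return worst / best

for name, (worst, best) in MULTIPLIERS.items():
    print(f"{name}: {influence_range(worst, best):.2f}")

# Combined application + platform experience:
apex = MULTIPLIERS["APEX (application experience)"]
plex = MULTIPLIERS["PLEX (platform experience)"]
combined = influence_range(apex[0] * plex[0], apex[1] * plex[1])
print(f"APEX + PLEX combined: {combined:.2f}")

# Prints roughly: ACAP 2.00, APEX 1.51, PLEX 1.40, combined 2.11.
```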

These teams aren’t foolish, yet they foolishly let a critical aspect of their process get out of control on project after project. A look at their team rosters may give a clue as to why: while there are many roles on the rosters, there may be none with requirements as a primary responsibility. Marketing and sales have requirements responsibilities, but many conflicting responsibilities as well. Lead engineers are supposed to verify requirements; but they are also too busy, and are commonly focused on solutions, not requirements. Designers and developers also focus more on how than on what. Traditionally in software development, analysts have primary responsibility for and are evaluated on the correctness of requirements.

The role of requirements analysts is
to define the problem in a verifiable form,
so that teams may recognize a valid solution.

And next you must ask: who owns that responsibility in your organization? If the answer is “no one” or “I don’t know”, there’s a ripe opportunity to cut your schedules by 30% to maybe even 50%, all while improving your quality.

Code is not enough. It’s all about requirements; and that’s all about communication.

The UML Learning Path

The UML Learning Path

No, I’m not going to name any of the devs who inspired this post. They wouldn’t know who I am, anyway.

But it takes an extremely high degree of arrogance to go from “I don’t see a way to use this” to “This has no value, no matter who says they’re getting value out of it. So I’ll dismiss it, and I’ll mock them.” Either arrogance or, more likely, insecurity: “I don’t understand this; so since those people think it’s important, either they understand something I don’t, or they’re fools. I’ll mock them, so everyone thinks they’re fools. That will make me look smart.”

And that insecurity manifests in a lot of places on a lot of topics, not just UML: Agile Development, Orchestrated Development, CMMI, Test Driven Development, C#, Java, Ruby, Linux, .NET… Any time you move from “I don’t see it” to “It’s worthless”, look around: if other developers are putting those tools to productive use, then it’s not worthless. It just doesn’t help you. So do you call it worthless, and imply they’re fools? Or do you openly mock them, demonstrating that you’re a fool?

Or do you follow the only exit path in this diagram? There is only one, after all. Once you get UML, you’ve gotten it for good. You may not use it all the time, but you’ll understand when and why you should use it. But the only exit path is the middle: you recognize that UML (or Agile, or Orchestrated, or…) is providing some value on some projects, so it’s not worthless; but you just can’t see the value. You remain open-minded.

Business Actors

On Twitter, @ClearSpringBA asked:

@UMLguy to show a “parent” actor over subsidiaries, do I use the generalization feature in UML? (doing an actor-UC diagram, new to it)

Wordy cuss that I am, I answered multiple times:

@ClearSpringBA Are subsidiaries subordinates or special cases? For ex, Supervisor is special case of Employee; Emps are subordinates of Supv

@ClearSpringBA For special case, genralization arrow from Supv to Emp. “Supv is an Emp with more responsibilities.”

Her questions back:

@UMLGuy thanks for all the info. subsidiaries are parent companies, can do everything on behalf of a sub.

UMLGuy so, i would draw the arrow toward the parent company? the arrow with the “big head”, generalization arrow? prob wrong terms!

I decided this had gotten complex enough that words weren’t working; so we went to email. Since this is general enough not to show any business info from her client, I thought I would share my response, in case anyone else finds it useful:

Here’s a simple diagram of business relationships:

Businesses as Actors

You can read it as follows:

  1. A Business is an Organization. Triangle arrow (“generalization” or “inheritance”) can be interpreted as “is a”.
  2. A Parent Company is a Business.
  3. A Parent Company has zero or more Subsidiaries, which are Businesses. (They might also be Parent Companies themselves, since a Parent Company is a Business.) The plain arrow can be interpreted as “has” or “contains” or “uses”. 0..* means any number, possibly 0. 1..* would mean any number, NOT 0. That could mean, for example, that a Parent Company MUST have at least 1 Subsidiary.
  4. A Business has 0 or 1 Parent, which is a Parent Company. The “topmost” Business has no Parent. All the others have 1.
  5. A Business has zero or more Partner Businesses.

I hope that clarifies things, and gets you thinking about new ideas. Please let me know if you have more questions.

And I hope it helps someone else, too!
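
If it helps to see those same relationships in code rather than in a diagram, here’s a rough Python sketch. The class names come straight from the diagram; the attribute names and the sample companies are my own inventions for illustration, not anything from her client’s domain.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import List, Optional

# Generalization ("is a"): a Business is an Organization,
# and a Parent Company is a Business.
@dataclass
class Organization:
    name: str

@dataclass
class Business(Organization):
    # Association ("has"): a Business has 0 or 1 Parent...
    parent: Optional[ParentCompany] = None
    # ...and zero or more Partner Businesses.
    partners: List[Business] = field(default_factory=list)

@dataclass
class ParentCompany(Business):
    # A Parent Company has zero or more Subsidiaries, which are Businesses
    # (and may themselves be Parent Companies, since a Parent Company is a Business).
    subsidiaries: List[Business] = field(default_factory=list)

# Hypothetical example: one parent, one subsidiary, one partner.
parent = ParentCompany(name="Acme Holdings")
subsidiary = Business(name="Acme Widgets", parent=parent)
parent.subsidiaries.append(subsidiary)
subsidiary.partners.append(Business(name="Partner Co"))
```

The inheritance hierarchy carries the “is a” arrows; the attributes carry the “has” arrows, with a list standing in for 0..* and an optional value standing in for 0..1.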

Quality is NOT Free

A business classic tells us that Quality Is Free. The title is intentionally provocative: no, quality isn’t free, it just pays for itself. But first, you have to pay for it.

And that, unfortunately, is where we fail in the quality game so often. Corporations seem addicted to the practice of compartmentalized budgeting, or what I think of as “bucket budgeting”: you’ve got a bunch of different buckets you pour money into at the start of the fiscal period; and each bucket can only be spent on a particular purpose. When that bucket is empty, you can’t accomplish that purpose any more. Oh, the company may have some reserve you can tap; but you’re going to get frowned at for exhausting your budget.

I understand bucket budgeting as a planning tool. I think it makes perfect sense, for the same reason you should make estimates out of small elements. In fact, it should be the exact same reason, because these budgets should be estimates, not shackles. You’ll correct over time.

And that’s where the failure comes in. Somehow, some way, in too many organizations, those buckets are shackles. Your bucket is what you can spend, what you will spend — regardless of the bottom-line impact of your spending. Even if every $1 spent out of your bucket brings $1.20 into the company, you only get to spend what’s in your bucket. This isn’t a new complaint, of course; and smart managers certainly keep an eye out for ways that spending money can save or generate more than they spend. But less bold managers don’t like to rock boats. They live within their buckets, because overspending their bucket gets them a bad review. It takes courage to stand up and make a case for more money in your bucket, unless you have a very clear, simple chain between the money you spend and the money that comes back in.

And the quality equation is particularly susceptible to bucket budget shackles. Quality does pay for itself, but it seldom shows up anywhere near the buckets where the costs came from. The cost of quality is measured in extra labor and time up front on preventing defects, along with extra labor and time on the back end detecting and correcting defects. It’s also training time, which takes time away from “productive” work. It’s also management time and communications effort in getting everyone — execs, workers, and customers — to understand that seemingly slower time is actually faster in the long run.

The benefits of quality, meanwhile, are in reduced support costs, reduced rework costs, and increased customer satisfaction and loyalty. These do affect the bottom line; but they don’t put money in the buckets of the managers who have to pay for the quality in the first place.

A common sarcastic reaction I hear among the workforce is “Quality is free, so management thinks they can have it without paying for it.” And sadly, this reaction is often justified. But I don’t think most managers are really that clueless. I do think many managers are shackled by bucket budgeting, and unwilling to buck the system for something that won’t have an effect on their budgets. The effect may be the same as cluelessness; but if we don’t understand the proper cause, we can’t devise the proper correction.

And no, I don’t know what that correction is. I mean, to me, the answer is simple: start treating budgets as estimates, not shackles; but I don’t think little old me is going to change corporate culture that drastically just by saying so.

Final note: this isn’t inspired by anything at my current client. Really, it’s not: I’ve been complaining about bucket budgeting for over a decade. But it’s true that my client is currently entering a phase where they’re trying to invest in quality in pursuit of benefits in the long run, and I want to do my part for that. There are forces that will push against that effort, and forces that will push for it. I’m doing a little writing to help me clarify how I can help push in the right direction.

It’s All About Communication

Note: This was originally chapter 13 of my book, UML Applied: A .NET Perspective from Apress. My editor and my tech reviewer read it; and both said, “We like it; but what does it have to do with UML?” So I came up with a more on-topic closing chapter. But I still like it. It sums up my view of the software development profession quite nicely. So I thought I would share it here.

Somewhere, deep in our prehistory…

A man tells a story, a story of a hunt. Perhaps it’s a hunt he’s planning. Or perhaps it’s a hunt that just took place, and he’s telling the tale. In telling his story, he draws Figure 13-1:

Figure 13-1: The first recorded UML diagram?

Regardless of whether he’s telling the past or planning the future, he has built a model of “How we kill the prey.”

From these simple pictorial models, we moved to pictograms… and cuneiform… and eventually to alphabets and written words. With words, we can model much more abstract concepts than our hunter ancestor could — such as “abstract concepts”.

Elsewhere in our prehistory…

A man looks at a set of things that are important to him — perhaps it’s livestock, perhaps it’s weapons — and can see at a glance that all the elements of the set are there.

But soon, the man grows more fortunate. He gains more livestock, and more weapons. And as he gains more, at some point he can’t tell at a glance that all the items are there. Remember our 7±2 rule: past a certain point, the elements of the set blur together.

So the man finds a way to keep track of the items. He recognizes that he can associate the individual fingers of his hands with the individual animals or weapons. And he knows his fingers, and how many he should have, and which finger corresponds to the last item; and so he knows how many items he should have. He can match the fingers to the items; and when the match is correct, he knows all the items are there; and when the match is incorrect, he knows there are missing items (or extra fingers).

And thus, with his fingers he has built a model, a symbolic representation of the items that concerned him. And when he grew wealthier yet, and the number of items grew beyond the limits of his fingers, he and his intellectual heirs invented new models: counting sticks… and tally marks… and Roman numerals… and Arabic numerals…

And all of these counting methods were models or symbols of things in the physical world. But along the way, we developed more abstract concepts: operations that could be performed on the numbers. We can add to or subtract from a number, without adding to or subtracting from the items themselves. And thus, we can have numbers that don’t represent any physical items, but simply represent the abstract concept of “count” itself. And along the way, we invented arithmetic… and zero, which represents an absence of things… and rational numbers that don’t represent any count of things, but rather the ratio between things… and mathematics… and negative numbers, which represent a dearth of things… and irrational numbers, which represent the numbers between ratios, numbers which cannot be associated with a physical count of things at all, yet which often represent underlying aspects of physical reality (π, e)… and algebra… and imaginary and complex numbers, which represent vector quantities which cannot be graphed on a number line without extending that line into a new dimension… and varying numerical bases… and mathematical logic… and accounting… and trigonometry… and statistics… and probability… and higher mathematics, where we question or even dispense with the matter of number itself…

And all of these advances in mathematics are powerful models, letting us express concepts only distantly related to “How many goats do I have?” — concepts that underlie everything around us, from our understanding of the laws of physics and chemistry, to our understanding of rhythm and meter and tone in music, to our engineering skills that produced the laptop on which I type this sentence.

Somewhere inside my laptop…

There is a tiny zone of silicon, with a tiny doping of impurities which render that zone a semiconductor. And at any given moment, that zone is in a charged state or an uncharged state, depending on the flow of electricity through my laptop and through components of the system and through paths shaped by wires and by billions of other tiny zones of semiconducting silicon. And we look upon the charged state and the uncharged state, and we choose to call the charged state “1” and the uncharged state “0”. This is simply an arbitrary choice, not a physical reality. It’s a symbol we could define any way we chose.

And thus, the state of that tiny zone of doped silicon is a model of a model: a physical representation of the concepts of “0” or “1”. The tables are turned: instead of our symbols representing reality, we manipulate reality to represent our symbols. And then, our technology lets us string these physical symbols into sets to represent larger and larger numbers — and with IEEE floating point format, numbers beyond simple counting. Yet despite the size and power of these number formats, they are still at base models of models: physical symbols to represent the abstract concepts of numbers. They are silicon “fingers” (and we even call them “digits”, just as we do fingers).

Somewhere in the field of psychology…

Psychologists study how our brains work, how we think and how we communicate what we think. And one underlying mechanism stands out: creating and manipulating symbols and models.[1] We have symbols and models in our heads that represent to us concepts in the real world and concepts in the abstract (such as “concepts in the abstract”). And we manipulate these symbols, and we create new symbols for new concepts. And in fact, if we do not have symbols for something, we cannot think about that something. These symbols, these concepts, form our mental models of the real world, and of ideas. From these, we can build larger, more useful models, such as theories and skills and bodies of knowledge. Armed with these larger models, we can understand and manipulate the world around us. And however imprecisely, we can communicate and share these symbols.

Elsewhere inside my laptop…

As I sit with this laptop in my lap, and I press this key — this very key, “q” — a contact closes under the keyboard, and a gate opens, switching from a “0” state to a “1” state. This allows a signal to be sent to the CPU, and from there to be routed to a particular chunk of software. That software looks at the signal, and translates it to a specific numerical code — 113 — that we have chosen to associate with the letter “q”. Like our physical symbols for “0” and “1”, this is an arbitrary symbol we have chosen to have the precise meaning, “q”. This doesn’t mean that 113 “is” the letter “q”. It means that 113 represents the letter “q”, and we have agreed to this precise representation, so that when I press this key, you see something that represents to you, “q”. Ultimately, we have manipulated physical things that are symbols and representations — models — and by sharing these models, we can exchange ideas. The models have become so powerful and so precise and so flexible that we no longer see the models at work. We see the thoughts themselves.
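
You can check that agreed-upon code in almost any programming language; here’s a quick Python look, assuming the usual ASCII/Unicode code points:

```python
# The code point we have agreed will represent the letter "q" (ASCII/Unicode).
code = ord("q")      # 113
letter = chr(code)   # "q"
print(code, letter)  # 113 q
```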

Somewhere inside my head…

I have a set of symbols inside my head: a set of thoughts, a set of ideas, a set of concepts that collectively can be called a model. This model represents to me a set of equations and mechanisms by which one can simulate a spacecraft and control that simulation on a voyage from the Earth to the Moon. I can enjoy the beauty of these symbols, and I can enjoy the concepts, and I can play with them, and I can write them out. And I can see that these symbols do truly represent (to the accuracy that I care, since I’m not a rocket scientist) the process of traveling to the Moon and back.

Somewhere inside your head…

And if I teach you the same symbols, and if we share enough common symbols and models, you can do the same thing. This is the purpose of the tools we call “language”: to allow me to convey the things that I know in such a way that you know them as well, and can apply them, and can build upon them and communicate your new ideas back to me and to others. As I quoted him in Chapter 3, Stephen King is right: writing is telepathy.

Once more inside my laptop…

But now, with software, we have a new construct, a new kind of tool. It has a character like traditional machines: regular, precise application and replication of mechanical and physical laws toward a particular purpose. But it has a character like language. Being made entirely of malleable physical symbols which can represent whatever we want them to represent, it is more than a traditional machine: it is an Idea Machine, a Concept Machine, a Symbol Machine. And since those are the building blocks of how we know and how we think, it is a Knowledge Machine and a Skill Machine. I can take the “Lunar travel” symbols from my head, and I can express them not to you, but to software that resides on your computer. And if I express them sufficiently thoroughly and correctly and precisely, you too can go from the Earth to the Moon, but without having to learn the symbols and models that I know. Instead, you need only learn a much smaller set of symbols for telling the software to do what I know how to do.

The Point (At Last!)

As software developers, our job has never been about technology. Our job has always been about expressing symbols and ideas and knowledge and methods in precise physical symbols so that other users can reuse and apply those symbols and ideas and knowledge and methods without having to master them themselves — or if they do master the symbols, without having to go through all the rigor and discipline and effort of applying them for themselves. All they have to master is the effort of telling the software to do what we have already mastered.

And the other side of our job has been to master that which we don’t know. We may be skilled at precise expression of ideas in physical symbols, but that doesn’t make us skilled at Lunar launches and color science and financial forecasting and the myriad other symbols and ideas which users want expressed in software. So we have to learn how to elicit those symbols and models from those who have them, but who lack our skills in precise expression through software. In a sense, we’re translators between mental symbols and physical symbols, adding in our own mental symbols to provide an intermediary layer to allow users to make use of the physical symbols.

In other words — and you had to know that this was coming — it’s all about communication.

And there’s a bit of irony for you. Our field — where our entire job is precise communications — is infamous for practitioners who are perceived to be short on communications skills. And in fact, nothing could be farther from the truth: to have any success in software at all, you must be very, very good at precise communication, at taking an abstract set of symbols in your head and precisely communicating them to the software in such a way that the software “understands” what you understand.

So if software is all about communication, how has it come to be seen solely as a technology job? Indeed, how has it come to be dominated by technology fanatics who (in many cases) do have something of a problem with unstructured, imprecise communications? Why do some developers have such trouble with the nuance and the double meanings and the give-and-take and the vagaries of human communications?

Well, one reason is that for most of the history of software development, communicating with the technology has been hard. Very hard (though getting easier). It requires an exceptional degree of precision, and therefore it requires specialized skills not found in the general populace. As I mentioned in Chapter 2, you’re weird: you handle precise communications in ways that most other people do not. This tends to set us apart. And perhaps that setting apart attracts people who are better at technology than at communications. Perhaps the time spent with precise technological tasks even causes atrophy in our other communications skills, since people usually do better at that which they practice a lot, and worse at that which they neglect.

Another reason is that in the past, limitations of the hardware strictly constrained what we could and couldn’t communicate, and put a premium on mastering obscure technical tricks (and inventing new ones) to live within those limits. The software developer culture evolved in circumstances where technological prowess was essential in order to communicate at all.

And yet another reason: projects small enough to fit within those limited environments could often be mastered by a single person who could communicate his or her ideas well without needing recourse to communication with others. A large number of projects were small enough and simple enough to resolve primarily through technological prowess.

Well, as I tell my students, there are no simple problems left. Yes, this is a bit of an exaggeration; but as a practical matter, the simple problems have been solved. Hardware today has grown so powerful as to have no meaningful comparison to the hardware of a generation ago; and yet the software problems of today strain that hardware to its limits, and there’s always a backlog of problems we’d like to solve as soon as the hardware catches up. Today, no one hires a software developer to solve simple problems. They hire us for the hard problems… the really complicated problems… the problems they don’t entirely understand themselves… the problems for which requirements are hazy, and success cannot be clearly defined. And they’re counting on us to clear away that confusion, to help them define what their needs are and what success means. They’re counting on us to communicate with them because they don’t know how to communicate with us. We must communicate with them and for them, leading them through the communications process to lead to the precision that the software requires.

In other words, now and for the foreseeable future, I believe that communication will be the key software development skill. I don’t want to discount technology skills, because there are always new technological frontiers to master (like .NET today); but without communication, we’ll fail to apply the technology in ways that matter. We have to communicate with our users, because we need to know what they need, how to accomplish it, what skills we must master to accomplish it, and how to recognize when we’ve succeeded. We have to communicate with our users in another way, through our user interfaces that allow them to access and command our software. We have to communicate with each other, because the problems are too large to solve in isolation. We have to communicate with future maintenance developers who must carry on our work when we can’t. We have to communicate with mentors and educators and experts (perhaps in a disconnected fashion, through their lessons and books and examples and presentations) to learn the new domain skills and new technology skills we must master to address user needs. When we ourselves become mentors and educators, we have to communicate to our students and readers by writing our own lessons and books and examples and presentations. We have to communicate to testers, so that they know what and how to test, along with how to recognize success or failure. We have to communicate with documenters, so that they know what to explain and how they will communicate it to the users. We have to communicate with marketers and sales personnel (yes, even them) so that they understand what we can and can’t deliver. We have to communicate with managers and executives to explain what they can and can’t expect from the technology within constraints of time and resources.

And we have to precisely communicate with the software and the other technology, instructing it to carry out the knowledge and skills we have mastered. I sometimes call this “shipping our brains in boxes”. While the metaphor is a bit gruesome, the idea is key: that software installed on my customer’s machine is a proxy for my brain, sitting there and doing for that user what I could do myself if I were there – only doing much of it faster than I could do myself (and more accurately, because software is more precise and repeatable than humans usually are). And besides speed and accuracy, there’s leverage: that software proxy for my brain can be installed on hundreds or thousands or more computers, helping far more users than I could ever help in person. We’re not just working with technology, we’re communicating with it, teaching it the things we know and then replicating our knowledge across the user community.

So that’s my conclusion, one final time: it’s all about communication. And while UML isn’t the answer to your every communication need (for instance, I thought about writing this entire chapter solely in UML diagrams; but the abstract nature of diagrams just wouldn’t express the precise thoughts I wanted to convey), I hope that I’ve shown it to be a powerful set of graphical tools for conveying complex ideas and relationships, so that you can more easily communicate the skills and ideas of your users and experts to the software, where your users can apply them. Today, many UML tools will also produce the structure of your code directly from the models. In the future, UML will become a programming language, a tool for communicating with the technology directly. (Some UML tools like Compuware OptimalJ and I-Logix Rhapsody already do this in certain niches.)

So UML is a valuable way to express yourself, both to people and to machines. It’s a new technology skill that will help you to apply all of your other technology skills, once you master it. Practice UML and apply it, not just by yourself, but as a means to communicate with your fellow developers and your users and everyone else with whom you must communicate. I hope that with time, UML can do for you what it has done for me: speed and smooth your communications so that you can solve more complex problems more successfully.


[1] Marcia Wehr, Ph.D., Mastering Psychology, “THINKING AND LANGUAGE”, http://inst.santafe.cc.fl.us/~mwehr/StudyGM/12Ove_Th.htm. Dr. Wehr provides an excellent introduction to psychology, as well as a good basic description of software development: “When we face a mental challenge in which there is a goal to overcome obstacles, we are engaging in problem-solving. Generally we use a common method of problem-solving. We 1) identify the problem, 2) define the problem, 3) explore possible plans or strategies 4) choose a strategy 5) utilize resources to act on the chosen plan 6) monitor the problem-solving process and 7) evaluate the solution.” That sounds like the basic outline of all software development processes.

Doctors are the Stupidest Users!

Note to doctors: No, I don’t really mean that title. I use provocative titles to get attention and capture an attitude. What? What are you doing? You’re going to stick that thermometer where?

Doctors are the stupidest users. If you’ve ever had to write software for doctors, you’ve discovered this: the phrase “RTFM” was made for doctors. They just can’t be bothered to read even the simplest help docs. They can’t be bothered to learn even simple tools that a bright grade-schooler can master.

OK, that’s the programmer perspective. Now let’s look at it from the doctor’s perspective. On her desk is a small mountain of medical journals she needs to read to keep up with her specialty.

Next to those is a small mountain of textbooks for the new specialty she’s trying to learn.

Next to those is a small mountain of new regulations and guidelines she must comply with to maintain her license.

Next to those is a small mountain of insurance guidelines she’ll probably never have time to read but should if she wants to make sure she’s charging within guidelines.

Next to those is a small mountain of insurance paperwork that demands her scarce attention. She has no time for it; but if she doesn’t keep up with it, patients may be denied treatment that would’ve been approved if she had.

Next to those is a small mountain of accounting statements that she’d rather not bother with, but has to if she wants to pay her student loans and malpractice insurance.

And next to all of those is a large mountain of patient histories, test reports, specialist reports, and hospital reports she needs to keep straight in order to treat her patients.

And somewhere, buried under all of those papers, are a few pictures of the family she hopes to see once or twice this week.

And along comes this programmer person who says, “What’s the deal? This is easy! Just read this, and this, and then try this, and learn this, and look at this, and it’s easy! You’re a doctor. You’re supposed to be smart. This should be easy!”

After she schedules the programmer for an emergency barium enema, the doctor goes back to her work.

This is part of the reason (by no means the entire reason) that there’s a profession of Medical Technician. These people’s responsibility, in part, is to be the doctor’s user interface to programs that the doctor just has no time to master. They have time to specialize in arcane software. They usually have more technical experience.

But really: should it be this way? Not that I want to put Medical Technicians out of work; but shouldn’t the programmers spend more effort understanding the doctors and their needs, rather than requiring the doctors to understand programs and programmers?

For years, I’ve spoken and taught about a principle I call Conservation of Complexity. For any given task, there’s a minimum complexity required for that task. You can’t make it less complex. (But trust me, some idiot can always make it more complex…) If I’m automating that complexity, I take some of that work onto myself, and leave some for the user; but no matter what I do, I can’t reduce the complexity below the minimum. And being lazy by nature, I’ll want to do the minimum necessary to meet the requirements: minimum complexity for me, and let the user do the work.

But we can cheat! We can’t reduce the complexity of one instance of the task; but we can reduce the complexity of multiple instances, especially when those instances are performed by multiple users. In so doing, we can reduce the net complexity in the system.

Let’s say the user and I split the complexity, C: I get 0.5 C, and he gets 0.5 C. I pay my share once, when I write the software; each user pays his share every time he performs the task. Now if 100 users each do the task 100 times, we have:

C_net = 0.5 C + (100 × 100 × 0.5 C) = 5,000.5 C

C_avg = C_net / 10,000 = 5,000.5 C / 10,000 = 0.50005 C

So we’ve cut the complexity in half. That’s great, right? That’s why we write code, right?

But suppose some idiot — i.e., me — didn’t make that app very usable for the user. Oh, it’s easier than working by hand, or I won’t sell any copies; but say it’s 0.8 C. In that case:

C_avg = (0.5 C + 10,000 × 0.8 C) / 10,000 = 0.80005 C

So I took an easy way out. The system is still less complex on the whole. We’re still winning, right?

But now, let’s go the opposite way. Let’s say I put in the extra effort to reduce the user’s work to 0.2 C. And let’s assume this takes me a simply ghastly effort, 2 C. That means that:

C_avg = (2 C + 10,000 × 0.2 C) / 10,000 = 0.2002 C

By putting in twice the complexity of the raw task myself (2 C), I’ve brought the system-wide average down to roughly 80% less work per use than doing the task by hand, and about 75% less than the lazy version.
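
Here’s that same arithmetic as a tiny Python sketch, so you can play with the split and the number of uses yourself; the 0.5, 0.8, and 0.2 splits are the same made-up numbers as above.

```python
def average_complexity(dev_share: float, user_share: float,
                       users: int = 100, uses_per_user: int = 100) -> float:
    """Average complexity per use of the task, in units of C.

    The developer's share is paid once (writing the software); the user's
    share is paid on every one of the users * uses_per_user uses.
    """
    total_uses = users * uses_per_user
    c_net = dev_share + total_uses * user_share
    return c_net / total_uses

print(average_complexity(0.5, 0.5))  # 0.50005 -- the even split
print(average_complexity(0.5, 0.8))  # 0.80005 -- the lazy version
print(average_complexity(2.0, 0.2))  # 0.2002  -- the extra-effort version
```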

Now these numbers are just examples, of course. But I think they make my point: if all I worry about is getting code out the door, I may not work hard; but my users have to work a lot harder. Too often, we programmers see our perspective too clearly, and the other guy’s perspective too dimly. (In other words, we’re human.) But the smarter thing to do is to put forth the extra effort to reduce the user’s complexity. Because after all, there’s money to be made in reducing your user’s workload!

I’m reading Alan Cooper’s The Inmates are Running the Asylum; and with practically every page, I’m reminded of Conservation of Complexity. And of the doctors.

Also see Joel Spolsky’s discussion on mental models. He approaches the same idea: the programmer has a mental model of how the code works; the user has a mental model of how the code works; and great software results when the programmer’s mental model is very close to the user’s. It’s easier for a team of motivated programmers to change their handful of mental models than to change the mental models of all those users.

Quantity IS Quality

On a mailing list where I hang out, a participant recently said (paraphrased): “He believes that popularity proves quality. I believe that there is almost no correlation between quality and popularity.”

We hear this sort of thing all the time. There’s an implication among self-appointed elites that “the masses” — i.e., everyone who’s not them — just can’t recognize quality. It’s assumed that “popular” is proof that something is bad. You see this attitude in film snobs who insist that an Oscar nomination for The Return of the King is some sort of travesty, because the film is a popular fantasy and not some art house flick or some historical epic. To be fair: you see it in Lord of the Rings fans who for years have been telling others who didn’t like the books that they just didn’t appreciate great literature. And before the films, you could see it among the literati who snubbed Lord of the Rings because it’s a popular fantasy rather than a dreary, post-modern, self-referential, obscurantist yawn. You see it in opera buffs who assume the rest of us are subintelligent because we don’t share their passion for opera. You see it in young rebels who look down on the lives of the conformist “sheeple” and who demonstrate that they are individuals and not “sheeple” — by all dressing and talking and acting and piercing alike. And you even see it in gourmets who extol the virtues of French food over more pedestrian fare like food from McDonald’s.

But the truth is: they’re wrong, every single one of them. They proceed from two clearly false assumptions: that there is one clear, objective, inarguable standard of quality; and that of all human beings, they somehow have been born with/been granted/achieved the unique ability to pronounce what the standard is.

But the fact is just the opposite. If I can avoid butchering the Latin too poorly, de gustibus non est disputandum: in matters of taste, there can be no dispute. Or in the modern vernacular: there’s no accounting for taste. When someone tries to tell you that his tastes are objectively correct, he’s demonstrating how self-centered he is or how shallow his thinking is.

Does that mean there are no things that are objectively better than some other things? Can’t we all agree that Shakespeare is better than “The Simpsons”? Nope: I could gather up quite a debate on both sides of that issue; and the pro-Simpsons side would be every bit as educated and erudite as the pro-Shakespeare side.

Can’t we all agree that French food is better than McDonald’s? No, for multiple reasons: many people dislike new tastes, and prefer comfort and familiarity; not everyone likes the spices in French food; and if you grew up with French food every day, you might see it as “normal” and McDonald’s as a new experience, where novelty makes it attractive.

And so on, and so on, and so on. If you take any “objective” measure of general quality (as opposed to quality for a particular purpose, which may be assessed much more precisely) and examine it all the way down to its roots, you find personal tastes, past experiences, biases, and other responses that aren’t objective at all. There’s no objective measure of quality.

Except one. See, like many things that are immeasurable in the small, quality is measurable in the large, through statistics. No one person can absolutely proclaim that a certain thing is a quality product; but we can measure with reasonable precision how many people accept and endorse the quality of a product, by virtue of their purchases. In other words, the list writer I paraphrased has it exactly wrong: the closest thing we have to an objective measure of quality is popularity. If a significant number of people enjoy a product, then the odds that you will like it are higher. We’re all individuals, not ruled by statistics; but statistics are a useful piece of information to help you find products to try. Quantity purchased is a valid measure of quality. The market identifies products that the largest number of people accept as quality products.

And before anyone chimes in about Betamax, QWERTY keyboards, CDs vs. albums, Microsoft, or any other oft-cited “evidence” that the market can produce the “wrong” answer: go reread my post, because you still missed the point. Don’t force me to go haul out the evidence that shows the conventional wisdom is wrong in every one of these cases, or I’ll produce so much it crashes the server.

Concern vs. Worry vs. Obliviousness

| Concern | Worry | Obliviousness |
|---|---|---|
| “Has this happened yet?” | “Oh, no, what if this happens?” | “This could never happen.” |
| “How likely is this to happen?” | “Oh, no, what if this happens?” | “This could never happen.” |
| “How can we tell if this happens?” | “Oh, no, what if this happens?” | “See no evil, hear no evil, speak no evil…” |
| “Can we prevent this from happening?” | “Oh, no, what if this happens?” | “This could never happen.” |
| “What will be the impact if this happens?” | “Oh, no, what if this happens?” | “No problem…” |
| “Can we prepare for that impact?” | “Oh, no, what if this happens?” | “Cross that bridge when we come to it.” |
| “Can we manage that impact if it happens?” | “Oh, no, what if this happens?” | “Cross that bridge when we come to it.” |
| “Who will be responsible for watching and managing this?” | “Oh, no, who will we blame if this happens?” | “I didn’t do it!” |
| “We’d better do something about this!” | “Somebody’d better do something about this!” | “This could never happen.” |
| “There’s nothing to do about this right now, so let’s focus on what we can do.” | “Oh, no, what if this happens?” | “Looks like smooooooth sailing ahead.” |

Which column describes your project management techniques?

Disclaimer: This has nothing to do with any of my current projects or clients. The person who inspired this already knows who he is, and is firmly in column 1. Any other resemblance to actual persons or events is purely coincidental.