Code Simplicity

Simplicity and Strictness

As a general rule, the stricter your application is, the simpler it is to write.

For example, imagine a program that accepts only the numbers 1 and 2 as input and rejects everything else. Even a tiny variation in the input, like adding a space before or after “1”, would cause the program to throw an error. That would be very “strict” and extremely simple to write. All you’d have to do is check, “Did they enter exactly 1 or exactly 2? If not, throw an error.”
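Here’s a minimal sketch in Python of what that strict check might look like (the function name and error message are my own invention, not from any particular program):

    # Strict version: accept exactly "1" or "2" and reject everything else.
    def read_choice(raw):
        if raw in ("1", "2"):
            return int(raw)
        raise ValueError("Enter exactly 1 or 2.")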

In most situations, though, such a program would be so strict as to be impractical. If users don’t know the exact format you expect, or if they accidentally hit the spacebar or some other key when entering a number, the program will frustrate them by not “doing what they mean.”

That’s a case where there is a trade-off between simplicity (strictness) and usability. Not all cases of strictness have that trade-off, but many do. If I allow the user to type in 1, One, or “ 1” as input, that allows for a lot more user mistakes and makes life easier for them, but it also adds code and complexity to my program. Less-strict programs often take more code than strict ones, and that extra code is exactly where the complexity comes from.
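To see where that extra code comes from, here’s a sketch of the less-strict version under the same invented names: before it can even check the value, it has to strip whitespace and map spelled-out numbers to digits.

    # Less-strict version: tolerate surrounding spaces and spelled-out numbers.
    WORD_VALUES = {"one": 1, "two": 2}

    def read_choice_lenient(raw):
        cleaned = raw.strip().lower()
        if cleaned in ("1", "2"):
            return int(cleaned)
        if cleaned in WORD_VALUES:
            return WORD_VALUES[cleaned]
        raise ValueError("Enter 1 or 2.")

It’s still only a handful of lines, but every additional form of input you choose to accept adds another branch like this.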

(By the way, if you’re writing frameworks or languages for programmers, one of the best things you can do is make this type of “non-strictness” as simple as possible, to eliminate the trade-off between usability and complexity, and let them have the best of both worlds.)

Of course, on the other side of things, if I allowed the user to type in O1n1e1 and still accepted that as “1”, that would just add needless complexity to my code. We have to be more strict than that.

Strictness is mostly about what input you allow, like the examples above. I suppose in some applications (like, say, a SOAP library), you could have output strictness, too–output that always conforms to a particular, exact standard. But usually, it’s about what input you accept and what input causes an error.

Probably the best-known strictness disaster is HTML. It wasn’t designed to be very strict in the beginning, and as it grew over the years, processing it became a nightmare for the designers of web browsers. Of course, it was eventually standardized, but by that time most of the HTML out there was pretty horrific, and still is. And because it wasn’t strict from the beginning, nobody can now make it strict without breaking backward compatibility.

Some people argue that HTML is commonly used because it’s not strict. That the non-strictness of its design makes it popular. That if web browsers had always just thrown an error instead of accepting invalid HTML, somehow people would not have used HTML.

That is a patently ridiculous argument. Imagine a restaurant where the waiter could never say, “Oh, we don’t have that.” So I ask for a “fresh chicken salad”, and I get a live chicken, because that’s “the closest they have.” I would get pretty frustrated with that restaurant. Similarly, if I tell the web browser to do something, and instead of throwing an error it tries to guess what I meant, I get frustrated with the web browser. Now it can be pretty hard to figure out why my page “doesn’t look right.”

So why didn’t the browser just tell me I’d done something wrong, and make life easy for me? Well, because HTML is so un-strict that it’s impossible for the web browser to know that I have done something wrong! It just goes ahead and drops a live chicken on my table without any lettuce.

Granted, I know that at this point you can’t make HTML strict without “breaking the web.” My point is that we got into that situation because HTML wasn’t strict from the beginning. I’m not saying that it should suddenly become strict now, when it would be almost impossible. (Though there’s nothing wrong with slowly taking incremental steps in that direction.)

In general, I am strongly of the opinion that computers should never “guess” or “try to do their best” with input. That leads to a nightmare of complexity that can easily spiral out of control. The only good guessing is in things like Google’s spelling suggestions, where it gives you the option of doing something but doesn’t just go ahead and act on that guess for you. This is an important part of what I mean by strictness: input is either right or wrong; it’s never a “maybe.” If one input could possibly have two meanings, then you should either present the user with a choice or throw an error.
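To make that concrete with an invented example (not one from the post): a date entered as “01/02/2003” could mean January 2 or February 1, so instead of guessing, the sketch below refuses it and asks for an unambiguous format.

    from datetime import date

    # If the input could mean two different things, don't guess; throw an error.
    def parse_date(raw):
        if "/" in raw:
            raise ValueError("Ambiguous date format; please use YYYY-MM-DD.")
        year, month, day = (int(part) for part in raw.split("-"))
        return date(year, month, day)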

I could go on about this all day–the world of computers is full of things that should have been strict from the beginning, and became ridiculously complex because they weren’t.

Now, some applications are forced to be non-strict. For example, anything that takes voice commands has to be pretty un-strict about how people talk, or it just won’t work at all. But those sorts of applications are the exception. Keyboards are very accurate input devices, and mice are slightly less so but still pretty good. You can require input from those to be in a certain format, as long as you aren’t making life too difficult for the user.

Of course, it’s still important to strive for usability–after all, computers are here to help humans do things. But you don’t necessarily have to accept every input under the sun just to be usable. All that does is get you into a maze of complexity, and good luck finding your way out–they never strictly standardized on any way to write maps for the maze. 🙂

-Max
