Has I.T. Gone To Pot?

Why Fad Cycles and Snake Oil Dominate Decision Making

Draft 7/27/2001



Software development tools, and information technology in general, share several traits with the clothing fashion industry, as the sections below illustrate.

One would think that computing decisions would be rationally and carefully thought out, and tested by university and industry scientists. After all, computers are not clothes. However, when you look carefully, you don't see a lot of rationality. It seems rather capricious. Old technologies and techniques come back in style with new names and new packaging. Mainframes, once an embarrassment in the late 1980s, are now called "big servers". LISP and Smalltalk concepts keep ending up in new languages with different names and/or different combinations. Even XML looks a bit like the ancient JCL.

My dad never throws away a tie due to style changes. He says that if he keeps it long enough, it will come back in style. He is generally right. However, there is a small hitch. The "cycle" for each aspect does not necessarily match the cycle for other aspects. For example, wide widths may cycle every 10 years, while paisley may cycle every 15. Thus, a fat paisley tie may have to wait 30 years for both to coincide. (Incidentally, my dad has ties that old.) This is part of the reason why fads never come back in exactly the same way. They mix with other fads each time. If all the possible features have different cycles, then there is a nearly infinite number of feature combinations. This keeps clothing designers busy, and perhaps computer product vendors also.
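
To make the cycle arithmetic concrete, here is a rough sketch in Java (the 10- and 15-year cycle lengths are just the tie example's hypothetical numbers, not real fashion data); two independent cycles line up again only at their least common multiple:

  // Sketch: when do two independent style cycles coincide again?
  // Cycle lengths are the hypothetical numbers from the tie example.
  public class FadCycles {
      // Least common multiple via greatest common divisor.
      static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }
      static long lcm(long a, long b) { return a / gcd(a, b) * b; }

      public static void main(String[] args) {
          long wideWidthCycle = 10;  // years between "wide tie" revivals (assumed)
          long paisleyCycle   = 15;  // years between paisley revivals (assumed)
          // Both features are back in style together only every LCM(10, 15) years.
          System.out.println("Fat paisley ties return every "
                  + lcm(wideWidthCycle, paisleyCycle) + " years.");
          // prints: Fat paisley ties return every 30 years.
      }
  }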

Something becomes "in" and everybody rushes into it like lemmings. Everybody just follows the lemming in front of them, without worrying about the final destination. The head lemming may be smoking pot for all anybody knows. New languages and paradigms become the rage for little or no apparent reason, or ride piggyback on other fads. Object-oriented programming became The Thing overnight without any rational, open comparisons or studies. Then Java became the hottest language overnight without any public, rational comparison. Even many serious proponents of object orientation think that Java's version of object orientation is second-rate. (It is ironic that Smalltalk fans rode in on one fad, OOP, only to be stepped on by another: Java.)

Some will say, "That is just the way it is. Live with it. Stop asking WHY." However, it is not my nature to ignore "why". Either there is some kind of rationality behind it all, or it is random chaos, or a combination. I want to find the pattern(s).

It might be argued that the Darwinian marketplace is weeding out the bad and replacing it with the good. However, there is no evidence to support this. For one thing, such weeding would probably be a gradual process, taking decades instead of a few years, unless the new thing was significantly better than the old thing.

Looking at biology, some adaptations provide a clearly observable advantage. For example, of two similar species, one may develop a venomous stinger while the other's stinger only pokes. Thus, all other things being roughly equal, if the species with venom overtakes the species without, few would question that venom was a key factor, especially if prey are observed being brought down solely by the venom.

However, other advantages may be small and subtle, yet average out over the long run to be beneficial. For example, species X may average 2.19 surviving offspring and species Y may average 2.2 (all else being equal for example's sake). Although the difference is imperceptible to a single observer, over the long run it compounds and can make the difference between flourishing and extinction (a sketch after the equations below illustrates how fast it compounds). Observers may never know why species Y "won" because the observable survival differences may be nil. It may be the average difference across many traits, none of which sticks out as clearly as the venom does.

  s =  a + b + c + d + e + f + g

  // Poker vs. Venom
  s1 = 1 + 2 + 1 + 0 + 2 + 1 + 2
  s2 = 1 + 1 + 2 + 9 + 2 + 1 + 1

  // X vs. Y
  s3 = 5 + 6 + 5 + 5 + 6 + 5 + 6
  s4 = 5 + 6 + 6 + 5 + 5 + 6 + 6
The "s" equation roughly represents the survivability based on the net effect of traits in a species. s1 and s2 represent our venom example. It scores higher than s1 because one factor, "d" (venom) is significantly higher. In the second set (s3 and s4), although s4 scores higher, there is no one trait that "sticks out". Thus, it would be harder to tell why s4 out- competed s3 by observation alone. (The inherent sampling error may be greater than the actual differences, most likely.)

The trilobites once flourished in great numbers (both quantity of individuals and number of species). However, for reasons that remain unknown, they very slowly died out. Their time span was otherwise so vast that trilobites are often used to date rocks via species identification. Although climate catastrophes may account for some of the trilobites' problems, other groups bounced back just fine after the climate normalized.

There is no one feature, or few features, of trilobites that one can look at and say, "Oh, that is why they became extinct." There was a wide variety of trilobite species: large, small, prickly, slimy, thick-armored, with eyes, swimming, crawling, many segments, few segments, etc. They could even roll up into a tight ball like sow-bugs do. Whatever features they lacked, or inherent design flaws they possessed, the resulting competitive disadvantage was so subtle that it took hundreds of millions, perhaps billions, of generations to manifest itself.

If the "new thing" was like the Trilobite (except in reverse), then it would be hard to pinpoint its exact advantages, and it would take a long time to gain. However, if it was like the venom analogy, then it may gain fast, but the reason for its advantage(s) would be fairly clear. Instead, we are seeing neither. Things pop into to fame virtually overnight, but the exact reasons are dubious or obscure or too subtle to isolate by observation.

Let's say we have technology A and technology B. A is the current de-facto standard and B is the "new big thing". If B were significantly better than A, say by at least 50 percent (measured in productivity), then usually it would be easy to demonstrate that B is better than A with example source code, benchmarks, or whatever. It is hard for something to be significantly better without the benefits being easily demonstrable. However, such demonstrations, at least well-reasoned ones, are rare in practice.

Oftentimes the demonstrations are even rigged. For example, in Bertrand Meyer's well-respected book, Object-Oriented Software Construction, 2nd Ed., Meyer claims that OOP simplifies programming by eliminating the need to visit multiple duplicate case (switch) statements when a new variation is added to the case lists. However, he fails to disclose that if one adds a new operation, one may have to visit multiple subclasses in the OOP version but only one routine in the "traditional" version. His "visit hop counts" are misleading. Sub-classing favors new variations, but disfavors new operations. In other words, he presents the benefits of B but not the downsides, hiding them from the reader (perhaps inadvertently, to be fair). Many technical comparisons fall into this trap. If well-respected authors generate such poor comparative material, average authors most likely commit even more egregious comparison errors.
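
To make the trade-off concrete, here is a minimal sketch in Java (a hypothetical shape example, not code from Meyer's book). In the case-statement version, adding a new variant means editing every switch, but a new operation is just one new routine; in the subclass version, it is exactly the reverse:

  // Sketch: why subclassing favors new variants but disfavors new operations.
  // Hypothetical shape example; not taken from Meyer's book.
  public class ExpressionTradeoff {

      // --- "Traditional" version: each operation is one routine with a case list.
      // Adding a TRIANGLE variant means editing every switch; adding a new
      // operation (perimeter) means adding just one new routine.
      enum ShapeKind { CIRCLE, SQUARE }

      static double area(ShapeKind kind, double size) {
          switch (kind) {
              case CIRCLE: return Math.PI * size * size;
              default:     return size * size;            // SQUARE
          }
      }
      static double perimeter(ShapeKind kind, double size) {
          switch (kind) {
              case CIRCLE: return 2 * Math.PI * size;
              default:     return 4 * size;               // SQUARE
          }
      }

      // --- OOP version: each variant is one subclass.
      // Adding a Triangle class touches only one place; adding perimeter()
      // to the interface means visiting every existing subclass.
      interface Shape { double area(); }

      static class Circle implements Shape {
          final double radius;
          Circle(double radius) { this.radius = radius; }
          public double area() { return Math.PI * radius * radius; }
      }
      static class Square implements Shape {
          final double side;
          Square(double side) { this.side = side; }
          public double area() { return side * side; }
      }

      public static void main(String[] args) {
          System.out.println(area(ShapeKind.CIRCLE, 1.0));   // 3.14159...
          System.out.println(new Circle(1.0).area());        // 3.14159...
      }
  }

Neither layout is free; each merely picks which kind of future change will be cheap to make.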

So perhaps the benefits are minor, say a 5 percent increase in productivity. If this is the case, then the sudden rush to convert over to B is probably not justified. There is a "conversion hump": it takes time and effort both to learn B and to convert existing systems to B.

Another problem is knowing whether B really is better. If B is only 5 percent better, then it will be rather difficult to tell when comparing something complex. A car is certainly faster than a horse over long distances, but knowing whether a gasoline car is better than a steam car is harder. Thus, if you have invested in a fleet of steam cars, and gas cars become the rage, you will most likely want strong evidence before suddenly replacing your current fleet and retraining or replacing your current mechanics. A 5 percent difference will probably not sway you in the short term.

A rational business person would ask for evidence that gas is better than steam. If the evidence seems overwhelming, then it may make sense to start the conversion immediately. However, if the evidence is inconclusive or shows only minor differences, then the owner/manager, let's call her Ms. X, would probably be wise to wait a while before converting, or to convert gradually.

However, one could argue that gasoline was clearly the future, and that the earlier Ms. X converts, the better. But this assumes either hindsight or that current trends match future trends. One cannot say for sure that just because B is "in" now, it will be the best investment years down the road. For instance, PowerBuilder was "in" for a while. It looked like it was starting to replace Visual Basic as the prime development environment for medium and large companies. If one had followed that trend, one would have picked a bad horse. This is not to say that PowerBuilder is a bad product, but that it simply fell off the bandwagon. Thus, if its fad-ranking was a primary criterion for choosing it, that criterion later evaporated. As they say in the stock market, past performance is no guarantee of future performance.

So why do OOP, Java, XML, etc., suddenly come out of dark labs to become The Next Great Thing? Is it:

  1. Merit

  2. Band-wagon effect

  3. Orchestrated marketing

  4. A combination of the above

If it is merit, then we face the issues we already covered. If the merit is great, then the reason for the differences should be obvious. If the merit is minor, then change should be gradual. However, we are getting none of these. Thus, some other force must be at work.

The band-wagon effect can be compared to how stars are thought to form in the universe. Clouds of gas and particles may develop uneven areas due to various random environmental factors, such as colliding with another galaxy, a nearby nova, or the gravitational tidal forces of the galaxy's arms. Sometimes the uneven spots in the cloud are thick enough to generate a slight gravitational pull. This gravity not only starts to pull the lump itself closer together, but also brings in other nearby material. It is a kind of chain reaction, also known as "positive feedback". (Positive means "self-reinforcing" and not necessarily "good".) Slight gravity brings together more particles. More particles cause yet more gravity, which in turn brings in yet more particles. The end result is enough mass to ignite the blob into a star.

[Figure: star formation]

Technology fads appear to follow a similar pattern. Bob might hear a few colleagues talk about something called "XML". Thus, he searches the magazine rack for something that talks about XML. By buying a magazine about XML, he encourages magazine publishers to publish yet more about XML. (Let's assume that Bob is not the only person having this experience.) And when more magazines talk about XML, more people become curious about what all the talk is about.

It thus seems reasonable that something can become the subject of talk and attention without necessarily having a high amount of merit. (I am not suggesting that XML is useless, just that its hype-to-ability ratio is too high. See the links below for more.)

However, unlike star formation, there is another factor at play here. Humans can notice rising trends and try to act on them one way or another. Star formation is blind to trends, but humans are not. For example, a magazine might notice a trend and write a big cover story on it.

Further, the magazine may realize that it is best to hype the alleged benefits as well. Magazines thrive on change. If it were not for change, then people could just refer to their trusty ol' college textbooks. In other words, hyping benefits sells magazines. They know that sharks that don't keep moving will die. (Disclaimer: I never verified this shark cliche with a biologist.)

Magazine articles by their nature are usually short and condensed. The nitty-gritty details of comparing A to B often cannot be adequately addressed. So what magazines often do is create "technical cliches". I have observed many of these in popular OOP literature. They often take the form of shape or animal examples. One article said, "In OOP, you can reference (inherit from) a one-dollar bill and simply change its worth to make a five-dollar bill; whereas, in traditional approaches, you have to write the five-dollar bill from scratch." It makes a great word-byte, but it is rather misleading. Without hearing the other side of the story, one is more likely to believe such cliches. Promoting the upsides while failing to mention possible downsides is a common sales technique. You only see one side of the coin.
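
For illustration, here is roughly what that cliche looks like in code (a hypothetical sketch, not the article's actual example), along with the downside the cliche leaves out:

  // Sketch of the "inherit a one-dollar bill" cliche (hypothetical classes).
  public class BillCliche {

      static class OneDollarBill {
          int worth() { return 1; }
          String portrait() { return "Washington"; }
          String motto() { return "In God We Trust"; }
      }

      // The advertised upside: just override worth() and you are "done".
      static class FiveDollarBill extends OneDollarBill {
          @Override int worth() { return 5; }
          // The unmentioned downside: portrait() is still "Washington" unless
          // you remember to override it too. The subclass quietly inherits
          // behavior that is wrong for the new variation, and a reader must
          // chase the parent class to know what the five-dollar bill does.
      }

      public static void main(String[] args) {
          FiveDollarBill five = new FiveDollarBill();
          System.out.println(five.worth());      // 5
          System.out.println(five.portrait());   // "Washington" -- should be Lincoln
      }
  }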

Java hype has similar flaws. In fact, its claims to run on multiple machine brands and to be easier to program than C++ closely resemble the claims used to justify COBOL many decades earlier. (It is sometimes said that the greatest content reuse comes not from the programming department, but from the marketing department. What is really needed is cross-platform GUIs and network protocols (APIs), not yet another language.)

And why should anybody publish the downsides? If a magazine keeps saying that there is nothing new under the sun worth our attention, then it loses its very reason for existence. Magazines talk about change. Why read about new stuff if there is no worthy new stuff? I have never seen a magazine cover say anything like, "Nothing new! It's all just hype!" (The few exceptions may be OS-dedicated magazines, where Linux or Apple fans may bash Microsoft Windows or vice versa.) Further, it is harder to sell advertising in your magazine or web-site if you criticize the very products or technologies of the advertisers surrounding your articles.

Benefit hype might also sell other things. Microsoft, for instance, keeps looking for ways to get people to upgrade their Office applications. If their existing software does the job, then people have little or no reason to upgrade. One approach is to hype new features, such as the ability to convert to and from HTML and XML. People may need these conversions simply because other people use them, not because they are necessarily better than the "old" format.

Again, we see something similar to the star-formation pattern here: because some people use format X, others may also need format X just to talk to the first group. Person M may send out 20 copies of a document in format X. The 20 recipients may have to upgrade just to read documents from person M. Those 20 may in turn send out documents in the new format to yet many others, continuing the process. The process is not much different from pyramid marketing schemes (including the lone fat cat at the very top).
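
A toy model of that spread (hypothetical numbers: each new adopter forwards the format to 20 fresh recipients per round, and every recipient must upgrade to read it) shows how quickly the "need" compounds:

  // Sketch: pyramid-like spread of a document format (hypothetical numbers).
  // Each round, every new adopter sends the format to 20 fresh recipients,
  // who then must upgrade just to read it.
  public class FormatSpread {
      public static void main(String[] args) {
          long newAdopters = 1;       // person M
          long totalUpgrades = 1;
          for (int round = 1; round <= 5; round++) {
              newAdopters *= 20;              // 20 recipients per new adopter
              totalUpgrades += newAdopters;
              System.out.println("Round " + round + ": " + totalUpgrades
                      + " people now 'need' the new format.");
          }
          // Round 5 already tops 3.3 million -- none of whom chose the format
          // on its merits; they upgraded only to read what they were sent.
      }
  }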

One interesting theory is that Sun promoted and hyped Java because C++ was too efficient from a machine perspective. Thus, they promoted a language which was less efficient in order to sell bigger Sun boxes. Although we may never know their reasoning for sure, I would not put it past them. Intel is also rumored to have invested in content vendors whose products required stronger hardware.

I don't dispute that there are important new things to read about. However, the ratio of legitimate new things to over-hyped new things is probably low.

Escape From Legacy Land

Once B is perceived as destined to replace A, yet more positive-feedback forces seem to kick in to hasten the demise of A. "Legaphobia" kicks in: the fear of becoming obsolete. (Programmers sometimes like learning new things out of boredom or lack of challenge from using the same stuff for a while. They often don't care whether it is better, as long as it is different.) Programmers will start learning B and slow their A skills maintenance to compensate for the time needed. And vendors stop improving A-related products in preparation for B-related products. Time and resources are limited, so focusing on B reduces focus on A. Thus, A stops progressing. Its death becomes a self-fulfilling prophecy in many cases.

To justify their effort toward B, vendors start promoting it. Suddenly, ads and references to B-related products are all over the place. A-related products are moved down the list, and ads and articles about A dry up.

Dr. Norman Matloff, a professor of computer science at UC Davis, has this to say on the university web-site:

As explained in Sec. 5.9, an experienced programmer CANNOT [easily] get a job using a new skill by taking a course in that skill; employers demand actual work experience. So, how can one deal with this Catch-22 situation?

The answer is, sad to say, that you should engage in frequent job-hopping. Note that the timing is very delicate, with the windows of opportunity usually being very narrow, as seen below.

Suppose you are currently using programming language X, but you see that X is beginning to go out of fashion, and a new language (or OS or platform, etc.) Y is just beginning to come on the scene. The term "just beginning" is crucial here; it means that Y is so new that almost no one has work experience in it yet. At that point you should ask your current employer to assign you to a project which uses Y, and let you learn Y on the job. If your employer is not willing to do this, or does not have a project using Y, then find another employer who uses both X and Y, and thus who will be willing to hire you on the basis of your experience with X alone, since very few people have experience with Y yet.

Clearly, if you wait too long to make such a move, so that there are people with work experience in the skill, the move will be nearly impossible. As one analyst, Jay Whitehead, humorously told ZD-TV Radio, if your skill shows up as a book in the Dummies series, that skill is no longer marketable.

What if you do not manage to time this process quite correctly? You will then likely be in a very tough situation if you need to find a new programming job, say if you get laid off. The best strategy is to utilize your social network, including former coworkers whom you might know only slightly - anyone who knows the quality of your work. Call them and say, "You know that I'm a good programmer, someone who really gets the job done. I can learn any skill quickly. Please pass my resume to a hiring manager."

Necessary Evil?

Some may suggest that the chaotic fluctuations are necessary for progress. Since the merit of ideas cannot be readily quantified, the only remaining choices are stagnation or trial-and-error to find the better products or technologies. Given that stagnation is not the way of America, the remaining choice is trial-and-error.

I don't fully agree with that. In my opinion, the merits of new concepts, languages, and paradigms should be exposed to tough and open scrutiny before they can be heavily promoted as improvements. Of course, this cannot easily be enacted into law.

Thus, the only solution may be education about the hype process. If people realized that they are being bamboozled into forever riding the tech-hype treadmill, then they may be less likely to be suckered in. I should point out that this is not an evil plot by any one group, just a side-effect of active capitalism. Rather than throw the baby out with the bath water, let's clean up the bath water a bit.

Perhaps trade groups could be formed to protect the interests of, and do research for, companies that don't benefit from high churn. Perhaps some better metrics will be developed in the process, so that we don't have to rely on anecdotes from hype-dependent sources.

Related articles:
William G. Smith on Data Modeling
OOP Criticism
Java Criticism
XML Criticism
Metrics


© Copyright 2001 by Findy Services. All rights reserved.