Wednesday, March 28, 2007

Some more thoughts about intel.

Guess you could say this is an archive of sorts. Sometimes it can be difficult to track down particular points I've made on a forum, in this case MepisLovers, so what follows are three of the posts I made on this thread.

I'll back up what nlyric says. First, with AMD systems you have the option of using OpenBios/LinuxBios... you don't have that option with Intel systems.

Second, with AMD systems you generally get a performance boost when running in 64-bit mode. With Intel, you generally either take a performance hit or see no change at all.

With Intel systems you run into the problem of more recent Northbridge chipsets pushing Parallel-ATA support off to third-party chips (e.g. the well-known JMicron issue). With AMD, almost all existing chipsets work out of the box with Linux, ranging from Via, to SiS, to Nvidia, to ATi, to Ali.


The next post was made by another member of the forum

64-bit mode is bunk, a scam. You don't need 64-bit computing. Unless a program is a) optimized for 64-bit when developed (hint: they aren't) or b) has a need for insanely high numbers (scientific computing), then 64-bit is pointless. It takes a computer approximately 2x as long to process 64 bits as it does 32 bits. Now in the future we will need these high numbers, but that's quite a ways down the road.


And my direct response.

There is a difference between hardware and software emulation modes.

In the case of processors built to run 64-bit operations (e.g. Alpha, PPC, x86-64, Itanium with proper compilation), 64-bit operations take just as much time as 32-bit operations.

For certain operations like video editing or sound editing, running in 64 bits presents a decided advantage, not just in how much can be stored to and read from memory, but in the size of the information that can be processed in a single operation.

Now, in designs like Intel's Prescott where 64bit is supported as an emulation mode, yes, it does take longer to run 64bit operations than identical 32bit operations.

Although you do have a point. For the average user who just wants to surf the web, listen to a bit of music, maybe write a letter or two, 64bits is overkill. For that matter, 32bits is overkill... and probably 16bits as well.

And... no. Core2 Duo is not consistently beating AMD in performance. I've already been over that line of bunk before, multiple times. Core2 Duo does not scale with clockspeed nor front side bus, and Socket AM2 users who run DDR2 timings of 3,3,3,8 are not beaten by identically clocked Core2 processors.

Nor is Core2 Duo beating AMD in processor efficiency. Now, I haven't personally measured the power draw of two identically clocked Socket AM2 and Core2 Duo processors... but HP and Dell have... and their conclusion (as well as Google's) is that the AMD chips are more efficient in performance per watt.

The only vendor who presents otherwise is Apple Computer, which is one of the starting points if you are looking for rebuttals of energy-efficiency claims pitting Socket AM2 65-watt processors against Core2 Duos.

Not to mention the 35-watt processors, of which AMD has six... and Intel has none.


Adrian came back with this response to the claim that 64-bit is a scam:

Then why do my benchmark tests show a 20% overall increase in performance? There are problems with switching to 64-bit, but for some uses it might be well worth it. Besides, you can run 32-bit software on a 64-bit OS (with some caveats).


Then my final post

sorry to bump this thread back up to the top... I forgot a point to make...

Specifically speaking of x86-64, the architecture is designed to run x86 / IA32 code natively. What this means is that if you have a 32-bit instruction, it is routed through the processor directly; there is no emulation involved.

The advantage is that the code should work natively under both a 32-bit operating system and a 64-bit operating system. Theoretically, and as borne out in practice by Microsoft's x86-64 design group for Windows XP 64, running 32-bit applications on a 64-bit OS can even result in a performance boost.

The reason given is that the host OS uses registers and memory allocations that the 32-bit application does not touch. The result is that when you are running multiple 32-bit programs at once, the larger number of registers available minimizes the performance impact.

Another item that I somewhat mentioned in... this post:

Yes and no to whether or not x86-64 is built for Multi-Core systems.

When AMD was creating the Hammer architecture one of their design priorities was to enable seamless / glueless connection of the processors. So, yes, the x86-64 specification does account for running in a Multi-Core / Multi-Processor configuration.

I don't want to say that it was designed to run in SMP from the start since x86-64 does inherit x86 / IA32.
The point to be made here is that if the Linux kernel writers implement the x86-64 specification correctly, you should be able to run 32-bit programs at a faster rate than you can under a 32-bit kernel. The simple fact is, not all programs benefit from the jump from 32 bits to 64 bits, just the same as a lot of programs didn't benefit from the jump from 16 bits to 32 bits. Honestly... how many bits does Notepad or KATE need anyway? (I was tempted to say VI or Emacs, but that's just asking for trouble.)

One of the advantages Intel does have right now is that it has a workable Quad Core processor out. Technically, the processor is not a true Quad Core; it's just two Dual Core processors on one... substrate? I don't think that's the word... let's just call it packaging.

AMD took the route of having all of their multi-core processors connected by HyperTransport, making them true Dual and Quad Core designs. You'll also hear AMD reps say now that they probably should have met Intel's Dual Core + Dual Core with an AthlonX2 + AthlonX2 design. Okay, granted, HP, Dell, Sun, and everybody else involved in the server game is saying that AMD's upcoming Quad Core blows Intel's Kentsfield out of the water... as well as the upcoming "true" Quad Core from Intel... but that doesn't change the fact that right now I cannot buy a Quad Core from AMD.

The situation is a lot like the time period between the AthlonXP 3200 and the first Athlon64s. I wrote about it last year... here. Back then I stated this:

Just the same with Core2 and Athlon64. AMD has already been in this position before. Intel, while slow, is capable of competing. The Athlon64 would not always be the most powerful processor available; AMD never said otherwise. Just as the Pentium4 3.2GHz went beyond the AthlonXP 3200, something would go beyond the Athlon64.
The point is that Intel does hold the very top spot on performance... right now. If you have $1000+ to spend on the processor ALONE... go ahead. Get an Intel.

But... if you are stuck on a budget... why pay more... when you don't GET more?

* on the edit... the ending line kinda stuck out at me. Going back to the first post in this thread, I outlined where you would be GETTING less if you went with Intel, e.g.: the drop of direct PATA support from the most recent chipsets, the lack of any support for OpenBios/LinuxBios, and the performance hit (or no change) when running 64-bit code. Then there is the price disparity. I can pick up a 2.6GHz AMD AthlonX2 for only $200 from Newegg. The 2.66GHz Intel E6700 weighs in at $519, over twice as much. In the AMD's price range you have the 1.8GHz E6300 at $180 and the 2.13GHz E6400 at $219. If you are spending $200 on a processor, there isn't even a contest.

So... the ending below is probably more accurate.

But... if you are stuck on a budget... why pay more for Intel Products... when you GET Less?

Back up

It seems that the problem resulted from a company called RegisterFly. We have succeeded in working around the issues with RegisterFly and have reclaimed the home page. Yes, we are back, and hopefully we have all of the server-side issues dealt with.

Jason Frothingham

Wednesday, March 21, 2007 : oops it happened again

For those looking for the site and getting a placeholder address instead...

oops. Something went screwy on the server end. We are still trying to work with the hosting company involved to get everything back in place.

Tuesday, March 20, 2007

Supplemental : Development of Linux.

Supplemental: After the original post was published and I sat back to read over it again, something about the posting read wrong. Basically, as I read it, while I give some background data and scientific facts supporting Creationism and Horizontal Evolution, the arguments against Vertical Evolution are fairly weak. I don't clearly define who or what is promoting the Vertical Evolution stance as I do with the Creationism stance.

Part of this is because such definitions are ultimately outside the scope of the original article. The original scope of the article was to clearly define Creationism and Horizontal Evolution and point out that the development of Linux is a mixture of the two. On one hand there is the clear dictation of the superior coder designing the code, and on the other hand there is the Horizontal Evolution adaptation of the existing code and the further progression of the code to meet different needs.

The other part of the weakness behind the argument is the desire to try to stay within grounded scientific fact, rather than attempting to address absurdity. However, if the absurdity is not clearly addressed, the entire argument is weakened.

So, who exactly is behind Vertical Evolution and who is promoting it? If Vertical Evolution is such a farce, why is it so prominently taught today?

Again, we'll need to set the background data.

Once again I'll list the 3 laws of Thermodynamics as explained by David Morgan-Mar in his own humorous method here:

  1. The First Law of Thermodynamics states that energy is conserved. You can't create energy from nothing, nor can you destroy it. Since heat is a form of energy, you're stuck with it, unless you convert it into some other form of energy or move it away. Since you can't get something for nothing: You can't win.
  2. The Second Law of Thermodynamics states that heat will only flow from a region of higher temperature to one of lower temperature. To move it the other way, you need to supply some extra external energy to do the work. So moving energy in any useful direction that doesn't happen naturally requires you to put in additional energy. You can't even use the First Law and say "the total heat content is equal, so just move the heat from the cold place to the hot place" - it won't happen: You can't break even. However, the efficiency with which you move energy around is related to the temperature. The colder the better. If you could get to absolute zero, you could break even, just. But:
  3. The Third Law of Thermodynamics states that it's impossible to reach absolute zero. As you make a system approach absolute zero, the process that you are using to cool it down slows down. It sucks smaller and smaller amounts of heat away (and you're using tons of energy to power this über-refrigerator), and you can never suck away that last bit of heat to get it to absolute zero. Since you can't even get to the place where the Second Law lets you do things with 100% efficiency: You can't get out of the game.

Essentially, the Three Laws of Thermodynamics state that everything deteriorates over time. Now, defining this deterioration is almost an exercise in futility.

Judeo-Christian Creationists and some Muslims define the deterioration as Sin. Like the political-party question presented in the first article, this is a delicate subject that nobody really wants to address in public or fight over. Yet, as the political control behind many of the large media companies affects publications, so the argument of Sin affects the arguments of Creationists and Vertical Evolutionists.

A general agreement among Creationists is that physical reality, the world we live in, our very lives, are corrupt. There is something inherently physically and mentally wrong with everything that is in the world; something is irrevocably broken. Among Creationists there is, again, fragmentation. Sin is considered by some to be the reason behind the broken reality, typically by Judeo-Christian Creationists.

Deists also subscribe to the Three Laws of Thermodynamics in that reality is fundamentally broken. However, they may not be so quick to attribute it to Sin. Collectively, I don't think this class has an agreed-upon idea of what the cause of the fracture is, just that something is broken.

Those who propose Extra-Planar or Extra-Dimensional forces also sign off that something is fundamentally wrong with reality. However, their idea behind it is that whatever Extra-Planar force built this reality made an error in the creation. Again, considering their normal representation out in public, Extra-Planar proponents are not exactly too fond of talking about their ideas.

So, our first class of Vertical Evolution proponents are those who completely reject the idea of Sin. To these people, everything is inherently good. Society is corrupt, parents are corrupt, everything is corrupt but reality itself. Nature is pure and good, and Nature is perfect.

Hate to break it to them, but nobody had to teach me how to take things for myself. I had to be taught how to share. Babies are born with corrupted natures, end of story, get over it.

The fundamental rejection of Sin also means that they have to reject the very idea of a Creator as advocated by the Judeo-Christian Creationists.

So, why does this class of proponents reject the idea of Sin anyway? Well, for a large part, it depends on what people believe or experience as they are growing up. We have all heard the stories: some 4-year-old kid watches their grandparent, aunt, uncle, close friend of the family, or other relative die a slow and horrible death at the hands of cancer or some other disease. We see them sitting on Daddy's or Mommy's lap crying their eyes out... maybe another relative's, if it was their parent that was taken... They look at the authority figure and ask that question nobody wants to answer: Why did God allow this to happen?

Can you imagine trying to explain to a four year old the concept of Sin? Much less explaining that reality is corrupt? That human life is imperfect? That bad things DO happen? Nobody wants to do that. How can you tell a mere child that, hey, sorry, That's Life, get over it.

You Don't.

Now, I'm not saying that events like these are the only cause of people rejecting Sin; there are other reasons. But it is the clearest example that I can try to explain. As people grow up and ask the difficult questions, their experiences shape what they believe. Many recall the overly pious relative or neighbor who sounds like Billy Graham running an invitation every minute, and it turns them off. To them Sin just becomes another buzzword tossed around, instead of being a meaningful concept of what is wrong and what is right. The problem is, Sin is a noun and a verb. Sin is part of the world, and it is a corruptive influence. Sin is part of what makes a human, a human.

There are several different factors that go into making the personality of a human being, and sometimes those experiences cause people to Rebel and reject what they thought was true.

Again, a point I failed to elaborate on last night. Okay. The point of this is that people can grow up to be bitter against Authority, be bitter against what they see as Established Religion, be bitter against what they saw as Christianity, and so on and so forth. These are the people who look at the world and say "A loving God would not allow this to happen, so there is no God." Most of these people are fairly intelligent, and the idea of an Extra-Planar force is ludicrous to consider. Many of these people wind up rejecting God, or the notion of a God. Ultimately, the concept of Sin is rejected entirely. That means that explaining the Laws of Physics, the laws of Thermodynamics, is impossible. The Corruption in reality doesn't have a name, and doesn't have an explanation. The views of the Creationists then must be classified as impossible.

Now we fall into the realm of "once you have eliminated the impossible, whatever remains, however improbable, must be the truth." To those who reject the idea of Sin or a corrupted reality out of hand, they now view it as impossible. Ergo, whatever else has got to be true, no matter how ludicrous the other concept is.

In the view of Vertical Evolutionists, the lack of a fossil record or a living example of a linked species, the failure of dating methods like Carbon-14 beyond a few tens of thousands of years, or any number of scientific facts does not matter. The sheer statistical chance of matter coming together on its own and forming the necessary requirements for life, of the materials around that matter forming the appropriate material for creating life, and the likelihood of the matter also forming at exactly the right distance from a source of light with exactly the right atmosphere to support that light; none of those statistics matter. It doesn't matter how improbable the statistics are. From a numerical point of view, it could have happened, again ignoring the violation of the First Law of Thermodynamics: you can't get something for, or from, nothing.

Granted, by the same numerical chances, I would have already won a lottery of 6 billion dollars each day for the past 20 years, as would everybody in my neighborhood, city, state, planet.

You get the picture.

Okay, what about those Deists? Those who don't accept Sin as the reason for a fractured reality? Can Vertical Evolutionists come from that side?

Yes. The short answer is found in proponents of Intelligent Design mixed with Vertical Evolution. Collectively speaking, this class of proponents states that a higher power created the matter first, but that the method of creation up to our current point of living was implemented through Vertical Evolution. Again, this goes against the Three Laws of Thermodynamics.

Mostly these guys just want to make peace. They don't want to fight over the subject, and collectively just want everybody to go about their merry way. They are not willing to subscribe to a broken reality, but they'll subscribe to the possibility of a higher power managing everything.

Okay, fine, what about all of the scientific tests over the years that prove Evolution? What about the tests? Last time I checked most of them did a handy job of proving Horizontal Evolution, not Vertical Evolution. Charles Darwin did a great job of documenting adaptation in his research, and he hasn't been the only one to do so.

The problem with a lot of scientific research done today is the idea of a preconceived ending. Many researchers already know that Vertical Evolution is true. There is no doubt, so anything that indicates otherwise is inherently junk or irrelevant information.

Several years ago (9/16/1966 to be exact), Hogan's Heroes ran an episode called Hogan Gives a Birthday Party. The plot of the episode is that the Allied bombers were trying to take out a German refinery, but with 3 layers of ack-ack and all the Messerschmitts, successful bombing runs were just about impossible. Sergeant Schultz gives Colonel Hogan the idea to bomb the refinery with a German bomber. Hogan then sets out to snow-job Colonel Klink, the German Kommandant, into arranging for a German bomber to be within hijacking range.

Hogan sells Klink on the idea that the Luftwaffe pilots are superior to the Allied bombers, and feeds Klink the idea of proving this through scientific testing.

So, Klink sets up the test and gets in a replica of a Heinkel cockpit. Hogan correctly identifies the cockpit as coming from a German bomber. Klink then tells Hogan:

Klink: The aim of a research project, my dear Hogan, is not to discover new facts. We already know that the Luftwaffe personnel are superior. Here we are merely furnishing scientific proof.

Hogan: Surely we'll have a chance to get acquainted with this.

Klink: When you take the test there will be plenty of time.

Hogan: You know, every time I come face to face with this cruel German cunning, I always wonder why my side is winning the war.

Klink represents a lot of the scientific tests that are going around now and, if you want to make a parallel, a lot of the security tests pitting Windows versus Linux. The result is already known; all that needs to happen is for somebody to provide proof for the pre-established result.

Hogan, by comparison, realizes that the test is fundamentally flawed, that he is on the receiving end of the screw job, and that the guys running the test have no intention of making it fair and balanced.

If this is also reminding you of the news on CNN, NBC, ABC, CBS, and from Reuters and Associated Press, good.

I completely forgot to put the point in here that I was trying to make last night. Serves me right for writing so late.

Anyways, the point was going to be this: many of the researchers who claim an Earth age of millions of years already know that the Earth is millions of years old. It is beyond a shadow of any doubt that the planet and the universe are ancient. Ergo, all research tests created to establish the age of the Earth must therefore prove an older age. If the testing process, like Carbon-14 dating, breaks down, it isn't because the Earth and universe are young; there has to be something else gone screwy. Like Klink in the TV show, the result is already known; all that has to be furnished is the proof.

Most of those who fall under the Creationist banner approach the idea of the age of the Earth in a different manner. Generally those who become, or are, Creationists approach the age of the planet in the form of this question: we do not know what the age is, so what does the scientific evidence support? The average supporter of the Creationist viewpoint takes a look at the fossil layers and the relative stability of various half-life dating methods, and arrives at the independent conclusion that the Earth is young. Some view the process from a chemical viewpoint, others from a genetic-breakdown viewpoint, others still from a radiation viewpoint, but most who wind up supporting the Creationist viewpoint do so because that is what they found to be proven.

So, why do Vertical Evolutionists, and the political party, put such stock in proving a flawed theory? Why does it matter so much?

I honestly couldn't answer. I simply do not comprehend the behavior or beliefs of those who promote Vertical Evolution. Part of me is glad that I don't. I do not want to understand. And if I fail in trying to explain that behavior, perhaps it is a good thing that I am not capable of thinking like that.

It also is fairly hard to tie this supplemental back into considerations on the process of developing Linux. All it does is reinforce the arguments behind the definitions of the vocabulary being used, and what some of the aims and motivations might be.

But, at the same time, it provides a derailment to the original topic, and linking back into discussions about how the Linux Development process is achieved is near impossible.

Where Greg Kroah and Linus are wrong.

This post is about to step on more toes than probably any post I've made before.

The subject draws from this page on Kroah's site:

and the comment by Linus that "Linux is Evolution, not Intelligent Design."

My first thought was, Good thing that Linus is a computer programmer and not a Chemist, Biologist, or any other kind of scientist.

Now, the first thing we need to do is actually set the definitions. The problem is a lot of people involved in the Intelligent Design debates are using the same vocabulary, the same words, but the meanings behind the words are completely different.

Let's call Intelligent Design what it is: Creationism. It is the belief that a higher power created everything, from the atoms to the molecules, to the very life that we live.

Now, there is a lot of evidence to support Creationism. Proponents of Creationism cite the statistical chances of a single string of amino acids coming together in such a way as to create life. You'll also hear arguments about the lack of any fossil record indicating transitional species. Some will point to the problem of where everything came from: how did matter come to exist to begin with? Some Creationists will point out that while there are ample examples of Homo sapiens diverging from each other in skin tone, average height, average health, and facial features, and while such separation is found in birds, monkeys, and other animals, there has never been located a mid-species.

All dogs found so far are still dogs. All cats are still cats. There isn't a single "cog" to be found. No dog has bred into a cat, and no cat has ever bred into a dog.

Others will conveniently point out how most dating methods, such as Carbon-14 dating, just happen to stop working after the time period estimated for the world to have existed in the Jewish Torah and the Christian/Muslim Bibles.
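For reference, the mechanics of radiocarbon dating come down to the exponential decay law; with carbon-14's half-life of about 5,730 years, the remaining fraction drops below measurable levels after roughly ten half-lives, which is where the method's practical ceiling comes from:

```latex
N(t) = N_0\, e^{-\lambda t},
\qquad \lambda = \frac{\ln 2}{t_{1/2}},
\qquad t_{1/2} \approx 5{,}730\ \text{yr},
\qquad t = \frac{t_{1/2}}{\ln 2}\,\ln\!\frac{N_0}{N(t)}
```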

The problem behind Creationism isn't a lack of scientific backing or number crunching. The problem is this: what do people want to believe?

Many scientists, while signing off that Creationism is backed by hard science, are not willing to sign off that the Creator is the God of the Jews, the God of the Christians, and the Semi-God of the Muslims.

Most of this class of scientists are known as Deists. They believe there is a higher power, but that they either cannot know or cannot understand the higher power. Collectively, they are not comfortable with what they see as "Organized Religion." They'll cite examples from the reigns of the Catholic Church and the old English Anglican Church, in which science was oppressed. You'll also hear such lines as "Why would a loving, caring God allow this to happen?" They'll look at cancer or other diseases, and decide that whatever higher power there is, it can't be the same one that the Jews, Christians, and some Muslims pronounce as being in control.

Another popular class of scientists are those who believe in extra-planar forces. Their theory is that something created the world in which we live, and the universe, but it isn't, or wasn't, a God. It was some external force that we can't comprehend, or simply don't have the technology to understand. These guys are mostly represented out in public by the UFO whack job who claims midnight abductions and crop circles.

Because of the bad reputation that the whack jobs bring to those who believe in extra-planar forces, most of them aren't exactly in a hurry to spout their theories.

Intelligent Design, then, was created as a catch-all term. It was supposed to be a banner that all those who rejected anything but a higher power could gather under. It would not matter if you believed in Jesus/Yahweh, another un-named or unknown God, or no God at all.

Evolution, on the other hand, is harder to define and classify.

There are two types of Evolution.

Horizontal Evolution is based in fact. Humans who live in climates closer to the Equator have higher skin pigmentation, and therefore darker skin. Humans who live away from the Equator have lighter skin, almost a pale white. Humans who live in between have a yellow or brownish color overall.

Yes, Blacks from South America, Africa, Indonesia, and Persia.
Whites from Europe.
Asians, from Asia, Japan, and North American Indians.

Horizontal Evolution is also found in other animals. Birds such as finches can have longer beaks than cousins in a different location. Dogs bred near the Equator generally have less fur than, say, a Husky or a Saint Bernard working the Swiss Alps or Alaska.

The traits of horizontal evolution are marked by a change in what something is capable of doing, rather than what something is.

However, horizontal evolution does not result in a higher form being reached. A dog with a heavy fur coat, while perfect for snow covered wastelands, is going to be miserable in South Central Georgia. A finch with a long thin beak, will find it easy to snatch bugs out of bark on trees. The same finch, when asked to break open a nut or fruit, is going to be quite out of luck.

The same example is found in a shrimp, Rimicaris exoculata, which has developed a light-sensitive organ on its back. That was reported by Discover back in 2001. The shrimp did not improve as a whole. It, instead, changed to fit its environment.

Horizontal Evolution is also subject to the laws of physics, specifically the laws of Thermodynamics. David Morgan-Mar had a humorous look at the Three Laws of Thermodynamics here:

  1. The First Law of Thermodynamics states that energy is conserved. You can't create energy from nothing, nor can you destroy it. Since heat is a form of energy, you're stuck with it, unless you convert it into some other form of energy or move it away. Since you can't get something for nothing: You can't win.
  2. The Second Law of Thermodynamics states that heat will only flow from a region of higher temperature to one of lower temperature. To move it the other way, you need to supply some extra external energy to do the work. So moving energy in any useful direction that doesn't happen naturally requires you to put in additional energy. You can't even use the First Law and say "the total heat content is equal, so just move the heat from the cold place to the hot place" - it won't happen: You can't break even. However, the efficiency with which you move energy around is related to the temperature. The colder the better. If you could get to absolute zero, you could break even, just. But:
  3. The Third Law of Thermodynamics states that it's impossible to reach absolute zero. As you make a system approach absolute zero, the process that you are using to cool it down slows down. It sucks smaller and smaller amounts of heat away (and you're using tons of energy to power this über-refrigerator), and you can never suck away that last bit of heat to get it to absolute zero. Since you can't even get to the place where the Second Law lets you do things with 100% efficiency: You can't get out of the game.

Basically, with Horizontal Evolution, there is no net gain on abilities. Dogs that develop heavier coats become poorer suited to warmer environments. Dogs without heavy coats are poor choices for the Iditarod. Finches with long beaks are best suited to hunting bugs. Finches with heavy beaks are best suited to breaking things open. The best that can be hoped for in Horizontal Evolution is to get as close to Law #2 as possible, and attempt to break even.

Horizontal Evolution also accounts for mutations, be they by strictly genetic means, other biological means, chemical means, or radiation. Those fall under Law #3: you can't get out of the game.

The next type of Evolution is based in dreams. That is Vertical Evolution. Now, I personally admire anybody who seriously believes in Vertical Evolution. I really do. I wish I had the faith to believe that I came from nothing. I wish I had the faith to believe that, without a fossil record or any living examples of a transitional species, I came from a monkey. If only my faith were that strong to believe in total manure.

Now, it isn't in question that Vertical Evolution is patently false. Biochemist Michael Behe has several books out poking holes in the theory, and there is little reason for me to re-cover well-established ground.

Going back to the Three Laws of Thermodynamics, Vertical Evolution is in clear violation of Law #1: you can't get something for nothing.

Vertical Evolution is also in violation of Law #3: you can't get out of the game. Vertical Evolution assumes that at some point a higher nexus has been reached and there is a clear separation between what is and what was.

Vertical Evolution proponents are marked by promoting manure like the Big Bang Theory, telling people that humans descended from Monkeys, and so on. That monkey part grates on my nerves the most. Wouldn't it be "ascended" from Monkeys anyway? Last time I checked, a Monkey hadn't created a Space Shuttle or designed something as complex as an Intel Pentium Processor.

So, why do we hear a lot about Vertical Evolution and Horizontal Evolution, and why is Creationism openly mocked by the Mass Media?

I'll try to avoid pointing out the political party that by and large runs the Associated Press and Reuters, as well as news outlets like CNN, NBC, and ABC, but that actually has a large bearing on the perception in the non-scientist's eye of what is going on in the fields of Biology and Chemistry.

The central problem is, again, fragmentation. Everybody who believes in Vertical Evolution, BELIEVES in Vertical Evolution. The tenets behind Vertical Evolution don't allow much wiggle room for dissenters.

Vertical Evolutionists also have a distinct advantage in their choice of terminology.

Consider Intel for a second. Intel named processors based off of their Conroe design with the retail label, Core. So, whenever you talk about Multi-Core, Dual-Core, or Multi-Processor systems with Multiple Cores, you say Core. This forms a mental link between Intel and the word Core. Even if you talk about Dual Core Athlon64's, you still say, yeah, Core.

Vertical Evolutionists work in the same way, albeit over a much longer time period. Any time a person talks about Horizontal Evolution, they still have to say, Evolution. By using the vocabulary of an established series of scientific facts, the Vertical Evolutionists have an automatic vehicle with which to associate their ideas.

That is why it is so important to clearly define the meaning behind the words.

Creationists, however, are at a disadvantage. Creationists are fragmented, which is almost by their nature. Most Creationists are probably also Conservatives who strongly believe in doing things themselves. They don't depend on a Group to provide support. It is their nature to be independent, and to think independently. Most Creationists will have arrived at their conclusions by crunching the numbers themselves. They will have looked up the archaeological records, themselves. They will have run the Amino Acid calculations... yes, by themselves.

The result is that when it comes to presenting their facts to the public, Creationists don't have a unified front to talk through. Intelligent Design was supposed to be that front, but it went against the very nature of who the Creationists were. Most Creationists also take it for granted that their view is correct. Many just don't see a need to combat the Vertical Evolutionists. To the average Creationist, the Vertical Evolution idea is about as absurd as the claim that drinking your own urine is healthier than drinking purified water.

It doesn't even merit consideration, much less action. That, by and large, is now considered to have been a mistake by many Creationists. Rather than fighting early and removing the Vertical Evolutionists years ago, the Creationists are now having to fight an idea that has successfully hijacked established scientific fact.

That is why you probably haven't heard much from the Creationists, but, if you pick up an average college textbook, you'll read a lot of Vertical Evolution-ism.

Okay, so where exactly does Linux fit into the mix of these two scientifically established methods, and the farce?

Linux is a mixture of both Horizontal Evolution and Creationism.

On one hand, as Kroah and Linus state, Linux moves to fill a need. If somebody needs support for a RAID card or for a Multi-Processor system, Linux moves in and fills the requirement for that product.

However, the movement is not strictly horizontal evolution. A higher power, in this case the coder, the maintainer, or the guy who checks the code into the CVS, makes sure the code does what it is supposed to.

Nor is there a trade-off in the development. Just because a RAID Driver was created does not mean that a Sound Driver was removed, which would be the case in a strict environment of Horizontal Evolution.

That is also the mark of Creationism. Something was created. Now, the process might be implemented in the method of Horizontal Evolution, and in most cases Open Source development strikes a balance between the two.

But when it comes down to it, you can't escape the fact that Linus himself publishes the kernel everybody uses. Would not that make him the ultimate end point, and thus the originator of the Intelligent Design behind Linux?

So, in the end, Mr. Kroah and Linus are wrong. Linux is both Creationism and Horizontal Evolution.

Trying to pigeonhole the Open Source process in one section or the other just doesn't work. Nature doesn't work that way either.

Supplemental to this post is located here :

Monday, March 19, 2007

Extreme Tech's Turn in on the Stove Top

I normally don't read Extreme Tech. While their editors are fairly competent, they tend to suffer from the same problem AnandTech suffers from: they try too hard to please the guys buying ad space. Recently Extreme Tech put Phil Rogers from AMD/ATi into the "hot seat."

Well, the Seat wasn't all that hot, and as far as I'm concerned Extreme Tech and Phil Rogers both completely blew parts of the interview. I posted in the ZD forums calling foul on the interview, but I have a slight feeling that ZD's webmasters are not going to like being called irresponsible. So, here is the reply plus some more explanation.

Rogers and ExtremeTech blew it.

ExtremeTech: Part of what makes Vista interesting for gamers is of course DirectX 10.

Rogers: To begin with, DX10 is very far reaching and a radical departure from DX9.

Um. No. It isn't. Not by any stretch of anybody's imagination. From the game developers themselves, like Tim Sweeney or Gabe Newell, much of the talk about future shaders and DirectX 10 is that it doesn't actually do anything for games. All DirectX 10 does is run a few shader functions... FASTER... than DX9c. There isn't any specific shader function or ability in DX10 that is not in DX9c. From the code gurus on the Rage3D or Nzone forums, to the code writers behind OpenGL 2.1, the view is unanimous. Ask the guys actually writing the code what they can do in DX10 and not in DX9c, and the answer is always "nothing, but we can get a few more frames in DX10." Now, ask them about the differences between DX10 and OpenGL 2.1 and you'll get a long list of differences and what each does best.

ExtremeTech blew it in presuming that Vista is interesting for gamers. Um. No. It isn't. Not by a long stretch of the imagination. DX10 is downright boring. Now, if you want to talk about OpenGL after 2.1? Sure, you've got my attention. I'm more interested in code and games that I can run on Wii, Playstation3, Xbox, Xbox 360, Playstation 2, Playstation Portable, Gamecube, Linux x86-64, Linux x86, Linux PPC, Windows x86-64, Windows x86, BSD x86 (also includes Mac for ExtremeTech), BSD PPC (again, this includes Mac), Solaris x86, Solaris SPARC, and Solaris x86-64.

As a gamer, I am not by any means interested in a technology that only works for Vista, and would require creating another shader path in order to be useful on equipment I already own. ExtremeTech blew it in failing to hammer Rogers over ATi's OpenGL support outside of the Nintendo Gamecube and Nintendo Wii systems. As is, I'm not really interested in DirectX 9.c either, considering that would lock developers into either WinNT5 systems or Xbox 360.

Rogers blew it by saying that DX10 is/was far reaching and a departure from DX9c. It isn't, everybody knows it isn't, and ExtremeTech should have immediately jumped on that and reminded Rogers that all the guys actually coding the games and graphical interfaces agree that it isn't.

Now, okay, fine. I'll accept that Ziff Davis has a responsibility to its clients, but this interview was just disgusting.

Oh, and ExtremeTech, you may want to take a closer look at Vista's flopping. You should have been asking Rogers what AMD/ATi is planning to do in conjunction with Dell and HP moving to offer pre-loaded Linux systems. You should have been talking about how AMD/ATi is going to take advantage of Beryl in order to help move more units. You should have been talking about what AMD/ATi is doing with Khronos in order to help develop the OpenGL after OpenGL 2.1. None of these topics were addressed at all. Again, this was extremely irresponsible on the part of the Ziff Davis employees conducting the interview.

Part of the point in the response is that Game Developers who choose to use Microsoft's DirectX Technology when building their games shoehorn themselves onto a limited number of platforms. A game built on the Xbox using the official Microsoft Tools would use a version of DirectX 8.1. In order to put the same game on the Playstation2, or Gamecube, a developer would need to re-write the code behind the graphics engine from DX 8.1 to the PS2's engine, and to the GameCube engine.

However, if the same game were programmed using OpenGL as the backend display, then the game could be compiled and run without major modification on both the Xbox and Gamecube which used traditional CPU/GPU setups. Porting OpenGL code to the Playstation2 would also be an easy task given that it is a Linux Operating System Computer.

While there would have to be optimizations to the code to get the most performance out of the engine, using a single code base would allow the developer to spend more time optimizing the code for the respective system, and less time having to port or change the code for each system.

Now, I'm not saying that DirectX does not have its place in gaming today.

What I am saying is that DirectX only has a place for developers who only want to release their games on a limited number of Microsoft Platforms.

Now, call me crazy, call me insane, but I thought the whole point of selling a product is that you want to sell it to as many people as possible. The larger your potential market of buyers is, the more units you probably will sell.

Most accountants with Video game and software developers don't see that though. For example, Psychonauts. The game saw a release on the Playstation2, Xbox, and PC. The game, which received critical praise, did not sell in retail channels. However, Platforming adventure games on the Gamecube sold very well. I for one think the game probably would have sold much better on the Gamecube, as more Gamecube owners were interested in the type of game that Psychonauts is/was.

The same is also shown by the Wii and DS. Neither system caters to the hardcore market that buys a new graphics card every 3 months to get 5 more frames in Counter-Strike. However, both systems cater to the HardCore gamer who wants an epic game experience, and the casual gamer who couldn't care less if they save Hyrule from Gannondork.

Nintendo, finally, got the picture that everybody else had been missing. The larger your potential audience is, the larger sales you'll probably get.

Okay, so maybe Nintendo isn't getting 35% of the market that wants an Xbox 360. That doesn't matter to Nintendo. They now have 100% of the market that wants something casual and easy.

That is the fundamental problem with DirectX and Developers using DirectX. All they see is that limited subsection of the market that is already using the product, and their accountants continue to try to carve out slices of that market.

What would happen though if Valve Software released Half-Life 2 on Steam for Linux... tomorrow? Okay, fine. Valve may not have 50% of the First Person Story Shooter market on Microsoft products...

but they would suddenly have 100% of the market in Linux.

I'll dig more into Valve's need to distance themselves from Vista and Microsoft in another post, but I think the original point is made rather clearly now.

I personally think ATi did themselves a disservice by ignoring Linux and OpenGL for so long. Many users today still point to Nvidia as a Linux solution provider despite ATi's superior record over the past 12-15 months.

It will take a lot to remove the distrust and ill-will towards AMD/ATi. And Extreme Tech did nothing to help with that particular point.

Friday, March 16, 2007

Mac'N'Cheese, done nicely.

I have several things on tap in my mind: the Sonic and Excite Truck reviews for Gamenikki, telling Gabe Newell and Valve Software that they need to get to Linux ASAP, and figuring out how to tell people that my TV went south on me... oh wait, I already did that on the guide site...

Right now though I wanted to talk about food. I've gotten real used to eating Kroger Mac'n'Cheese since it, Butter, and milk are relatively cheap. While this isn't exactly Cooking with Stacy Vulture, here's how I get my Mac'n'Cheese to turn out nicely most of the time.

On average the Kroger brand Mac'n'Cheese requires about half a stick of butter, or a quarter cup. Spread over two or three meals, the amount really isn't so bad. Personally though, any stick margarine will do, so feel free to get the "ultra healthy although it really isn't" stuff if you want to.

Start the Mac'n'Cheese cooking as listed on the box: 1 and 3/4 cups of water and the Macaroni itself. Put it in the Microwave for 6 minutes. At 6 minutes, pull the Macaroni out and stir.

Put the Macaroni back in and set the timer again for six minutes. Now it's time to prepare the butter.

Take your half stick of butter and cut it into 8 parts, sorta like this :: |1 |2 |3 |4 |5 |6 |7 |8 |

If the butter is frozen, stop the microwave with about 1:30-1:35 left on the timer.

If the butter is not frozen, stop the Microwave with about 1:10-1:15 left on the timer.

Set the butter in a circle around the top of the Macaroni with one or two pieces in the middle (depends on the size of the bowl).

Let the Microwave run out its time.

Take the Macaroni out, stir the bowl.

Now add the powder and the Milk.

Stir again.

The idea is that the butter is both melted over the Macaroni, and absorbed by the Macaroni as well. This should make the Macaroni fluffier and more moist. It also ensures that most of the Macaroni gets some butter on it, which helps the Cheese powder obtain a more even coating over the Macaroni.

While this will not completely remove the powdered taste associated with cheap Mac'N'Cheese, it should make the meal a little bit more consistent.

Sunday, March 04, 2007

Call Center : crunching some numbers

Alternate Title : Tech Support with an Axe to Grind

Way back in October last year I ran a post about McAfee, Norton, and Microsoft's tiff over Vista Security. I followed that up with an expanded article containing the original content, but filling in the holes and gaps. This post is built in somewhat the same manner. Silly Ole Bear ran a post on MepisLovers about a Script Reader from his local ISP. Rather than just cold dumping the post into Blogger, like the Operating System Security thread, this is going to be a bit of a remix, fleshing out points within the posts I made.

The Setup

SilverBear started what would become the thread at this post:

And no doubt they are starting to get the same uncomfortable feeling that a novice exorcist gets when the demon starts laughing at him halfway through the ritual!

[Just click on the START button, type what I tell you and be GONE!!!! BE GONE!!!! Away with you, foul Customer of Darkness. . . oh crap. . . it isn't working. Maybe if I gargle with the holy water. . . ?]

From the other side of the spectrum (the support side), that can be a nightmare. The first part of any troubleshooting I would do with a customer who couldn't connect is an ipconfig, which needs the DOS box to open up. Many Original Equipment Manufacturers (OEMs) like Dell, HP, or Gateway would remove the MS-DOS, DOS, or Run lines from the Windows Start menu. It was not uncommon either to find a Windows XP or Windows 2000 machine modified to crash out either cmd.exe or command.com, most often by limiting usage rights to one of the executable file names.

Under Windows XP it also was not uncommon for many users to have the Control Panel's category view set to "Expand" through the Start Menu. The result to the end user is that their accounts could be locked out of tools needed to troubleshoot their hardware or software problems, with no easy recovery method, unless the tech on the line knew how to navigate through the Start Menu Properties options by heart.

Another common change is that the Address Bar is removed from the C:\ drive Internet Explorer View, and is locked. This is a large problem when the customer only has Internet Explorer for a browser, and IE is hijacked. Knowing how to get the Address Bar under the File Browser view is useful, and may re-enable temporary access.

A lot of Windows users go on and on about how the Windows "Desktop" is unified, about how it's always "the same," and so forth. Bull Manure. There is a difference between "Customizable" and "Driving you bonkers trying to unlock stuff."

However, SilverBear is correct in that the techs are reading from a script. When I left Sitel, one of the complaints I leveled against Sitel, Cox Communications, and Suddenlink Communications is that tech support wasn't tech support. "Fixing" the problem on a call, correctly diagnosing the problem, or even determining who the customer could contact for assistance, was less than 10% of the quality grade. Following the "Script" to the letter counted for far more of the "quality grade." If you didn't solve the problem at all, even if you didn't know what the problem was, you could still get a 90% plus on a quality grading. From my perspective, asking Quality Representatives what I should have done, could have done, or should do on a call met with blank stares. QA personnel were not interested in the customer's problem, or in how the customer's quality of service was presented. All QA expressed interest in was following the script.

Speaking for myself, I came from the school of training where if you were tech support, you were required to Duplicate, Replicate, and Simulate every single bloody problem the customer could have. And you, personally, had to figure out how to get out of it. If a client could bring up a C++ debugging error while running City Of Heroes under Cedega, it was your job to figure out how that error was produced to begin with, then figure out what steps needed to be taken to resolve the problem.

From my viewpoint, I consider what Sitel is/was doing to be dead wrong, and that viewpoint was one of the influences that affected my own decision to leave. The behavioral pattern is not limited to Sitel, Cox, and Suddenlink Communications. There are several other Internet providers, Computer Manufacturers, and Software Vendors who feel that Tech Support is best serviced by people reading from scripts with no practical computing experience, nor any practical training for support. The good news is, since they (the vendors) likely will never learn, that increases my chance of earning a paycheck providing the support that they (the vendors) are not willing to provide.

Is this Cost Effective?

SilverBear responded again to one of the points, and asked a question that I ducked in the thread (mainly because I'm pretty sure my first response was going to drive readers to tears)

I've looked at some of the stuff on your site, and it's very revealing to read some of your experience there. Wouldn't you summarize it by saying that it's just more "cost effective" for these companies to hire people who can read scripts that cover-- maybe, what?-- 80% of the problems, than to put techs who they'd have to pay more all the time to cover that small percentage of problems that fall through the cracks in the script ?

Unlike the MS or Mac or Linux choices that are available if one looks, in some areas if you want broadband internet, you get one choice, or if you're lucky, 2. Where I am, it's DSL through our local phone company or dial-up. Period
Fortunately for us, it's a small, locally-owned company that does try to do right by you if you have a problem.

In the village about 4 miles away you also can get RoadRunner through Time-Warner Cable service, as of late 2005.

But what would cause the average big boys to change their approach, absent competition?

Unfortunately, having 80% of the problems covered by people reading from a script is not cost effective, not by any stretch of the imagination.

The Sitel Break Down

Back in 2003 and 2004, our Sitel (Sitel Augusta) was reported to be one of the highest earning Sitel Units. According to internal reports, this was supposedly entirely from the "Cox Project." Some of the terminology included in the reports were things like "first call resolution" or "Correct Identification of Issues." Things were so good that Cox Communications had signed an unprecedented time-of-service contract. When I started at Sitel on the Cox Project we had something else nobody in Sitel had. We had Job Security. We knew that 5 years down the line we were still going to be getting Cox Communications paychecks. We were not just the best performing Sitel, we were the best performing High Speed Internet support in the business. Thanks to technicians at Sitel Augusta, Cox Communications was getting praised for its excellent support and knowledgeable technicians.

During 2005-2006, the reports changed as the "technicians" left. As I saw it, our Upper Management didn't just waffle, the upper management completely lost it. One of the Senior Managers told me directly that Sitel's goal was to have Sitel Augusta become the Premier Video support center for Cox Communications. Now, understand that the majority of people working under the Cox Project had answered ads looking for High Speed Internet and Computer Service Technicians. We were not Video Technicians. Most of the training that we had personally received on video consisted of "Send a hit / Roll a truck."

Okay, fine. I do know the difference between a Composite, Component, S-Video, DVI, HDMI, and DisplayPort plug. I know how to re-wire a Sun Microsystems workstation monitor to use a D-SUB plug. Attempting to explain the difference over the phone to a customer, who can't see what they are doing, is hard. I'm not saying that I was not capable of supporting Video Calls...

I'm saying that is not what I was hired to do. That is not what we were officially trained to do.

As I saw it, the Sitel Management chased after "small job" after "small job." Remote Dial Access Support, Voice over IP support, Video on Demand Support, Billing, and so on. Our management didn't have a clue what it wanted, or why it wanted it, and the result showed. The technicians who only a year or two before had been patting themselves on the back were leaving. Call Volume Queues climbed as employees jumped ship. When I left, we were doing so badly that our contract was sold to SuddenLink Communications. Our job security had vaporized because some Accountant had decided to be cost effective instead of Job Effective. Whoever was running the show apparently decided that it would be better to have people sitting in chairs reading from scripts than to properly train technicians to answer calls and fix problems.

There are several fundamental problems with the "Script" system that make it a money loser, and ultimately cost ineffective.

Problem #1 with Script System: Consumer Interaction

The first is consumer interaction, or how many calls it takes to solve a problem. Each support call is supposed to take about 10 minutes, presumably accounting for everything that could go wrong during the call. If the employee taking the call is reading from a script and has no troubleshooting experience, nor any training on how to proceed with troubleshooting, there are three potential scenarios.

  1. The first is that the Employee will attempt to fix the problem. It is possible for the Employee to waste valuable minutes of the call tracking down various issues in the manual or from a manager. This interrupts the manager's time on their own tasks, and potentially increases the length of time till the call is completed.
  2. The second is that the Employee will not fix the problem, or give the customer inaccurate information about the problem, resulting in the customer having to call back.
  3. The third is that the Employee will give the customer bad information, potentially damaging the product being called about.

The practical result is that a customer can call back 2, 3, 4, 5, 6, or more times. Presuming 10 minutes per call, the employees are now being paid to troubleshoot the same problem multiple times. If a customer has to call back 4 times in one day, there is now a total of 40 minutes spent on what was budgeted to be a 10 minute solution.

From the customer's perspective, the result is frustrating. Their opinion of the company is worsened, and there is an increased chance that the customer will take his business elsewhere.

Problem #2 with Script System: Call Volume

The second fundamental problem is Call Volume, or the number of calls that are taken in a day. Call Volume directly relates to Consumer Interaction. Let us say that 20% of the calls are now callbacks. A Customer has to call in twice for the same problem. The number of calls being taken for problems has now been artificially inflated.

Let's extrapolate the effect of the call-backs out.

We have 10 Employees who are expected to take 6 calls an hour, at 10 minutes per call. That is 60 problems in one hour. If 20% of these problems generate call-backs, the number of calls is increased to 72 per hour. Now, if all the Employees can handle is 60 calls each hour, where are these extra 12 calls going to go? What happens in the next hour, when another 12 call-backs increase the total to 24?

The problem keeps compounding. Now you have a queue stacking up. The only thing you can do is continue to take calls until no more people are calling (clearing the queue). The people in the queue are now having to wait on hold.
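
The queue arithmetic above can be sketched as a toy simulation (my own illustration, using only the figures from this example: 60 calls of hourly capacity, 60 new problems per hour, and a flat 20% callback rate):

```python
# Toy model of how call-backs stack a queue, hour by hour.
# Figures are the ones used above: 10 employees * 6 calls/hour capacity,
# 60 new problems arriving per hour, 20% of handled calls calling back.

CAPACITY = 60        # calls the staff can answer per hour
NEW_PROBLEMS = 60    # fresh problems arriving per hour
CALLBACK_RATE = 0.20 # fraction of handled calls that call back next hour

queue = 0
callbacks = 0
for hour in range(1, 9):
    arrivals = NEW_PROBLEMS + callbacks
    handled = min(CAPACITY, queue + arrivals)
    callbacks = round(CALLBACK_RATE * handled)
    queue = queue + arrivals - handled
    print(f"hour {hour}: {arrivals} arrivals, {handled} handled, {queue} waiting")
```

With capacity already saturated, the backlog grows by 12 callers every hour; by hour 8 there are 84 people sitting on hold, even though the staff never stops answering at full speed.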

What does that do to the average consumer? Haven't we all seen the comics about having to wait on hold? Haven't some of us done that? Trust me, getting blasted by somebody who's waited for over 2 hours with Cox Communications hold music ain't fun.

Financially, as an Employer, running a tech support crew that is not tech support poses a huge financial burden. Even with a "hoped for" 20% callback rate or "higher problem" rate, a script-based staff that focuses on soft skills and not troubleshooting skills will spend more time "on the phone" re-taking problems they do not know the answers to. From the customer's side, they get the impression the technical support is useless, and then start bad-mouthing the company; that whole "word of mouth" thing.

The Number Crunching

From my perspective, having been both a caller and a technician, the Accountants in charge of funding are extremely short-sighted. To an accountant it makes more sense to hire 192 Employees with no Technical experience or training, have them take calls at 6 calls per hour, 10 minutes per call, and only pay them $8 an hour for 40 hours each week. Figuring that they would work in 3 shifts with 8 hours per shift, there would be a maximum of 64 employees taking calls at any one time (as 40 hours with 8 hour shifts only accounts for 5 days, not 7).

Number Crunching :: The Cost Effective Approach

64*6 = 384 :: So about 384 calls each hour. Figuring a constant call volume over 24 hours, the daily volume should be 9216 calls.

So, 192 people * 8 dollars = $1536 an hour :: $1536 * 40 hours = $61440.

The weekly payroll would run $61440 with every employee working 40 hours. $61440 / 5 days = $12288.

Figuring a 5 day work week, the daily expenditure would be $12288. $12288 / 9216 ~ $1.33 :: so each call costs about $1.33.

Now, let's screw the numbers all up. Let's say that 20% of the 9216 daily callers have to call back in. That is about 1843 (1843.2) extra calls coming back in.

If the company that is paying for the calls pays for the extra 1843 calls, they'll pay $1.33 per call. That works out to about $2451 extra from the company, per day. Multiply that by 5, and the additional 5 day work week cost is about $12256.

If the contractor taking the calls takes the hit out of their own pocket for the call-backs, say, by paying time and a half to handle the calls... let's run those numbers.

8 / 2 = 4 :: 8+4 = 12. So, $12 per hour. 1843 calls / 24 hours ~ 77 calls an hour. Figuring 6 calls per person : 77 / 6 ~ 12.83~.

Let's round that up to 13 people. You would need 13 people each hour to handle the extra call flow. 13 people * $12 = $156

$156 * 24 = $3744 : The payroll now needs to account for an additional $3744 per day in order to pay the overtime workers to handle calls.

$3744 * 5 days = $18720 : For a 5 day work week, handling the extra calls would cost the contractor $18720 to pay the employees.

If the Owning company pays up their amount, the contractor would have this : $18720 - $12256 = $6464

That is $6464 per week that has to come from somewhere.


Number Crunching :: The Technician Approach

So, let's take the same numbers and give the employees tech support training. 192 people, but because of their skills they are going to get paid $10 an hour to take calls.

So, 192 people * 10 dollars = $1920 an hour :: $1920 * 40 hours = $76800.

The weekly payroll would run $76800 with every employee working 40 hours. $76800 / 5 days = $15360.

Figuring a 5 day work week, the daily expenditure would be $15360. $15360 / 9216 ~ $1.66 :: so each call costs about $1.66.

Now, let's screw the numbers all up. Let's say that 5% of the 9216 daily callers have to call back in. That is about 461 (460.8) extra calls coming back in.

If the company that is paying for the calls pays for the extra 461 calls, they'll pay $1.66 per call. That works out to about $765 extra from the Company, per day. Multiply that by 5, and the additional 5 day work week cost is about $3826.

If the contractor taking the calls takes the hit out of their own pocket for the call-backs, say, by paying time and a half to handle the calls... let's run those numbers.

10 / 2 = 5 :: 10+5 = 15. So, $15 per hour. 461 calls / 24 hours ~ 20 (19.2) calls an hour. Figuring 6 calls per person : 20 / 6 ~ 3.3~.

Let's round that up to 4 people. You would need 4 people each hour to handle the extra call flow. 4 people * $15 = $60

$60 * 24 = $1440 : The payroll now needs to account for an additional $1440 per day in order to pay the overtime workers to handle calls.

$1440 * 5 days = $7200 : For a 5 day work week, handling the extra calls would cost the contractor $7200 to pay the employees.

If the Owning company pays up their amount, the contractor would have this : $7200 - $3826 = $3374

The Analysis

So, overall, using the "cost effective" employees, the Owning Company would pay $61440 + $12256 = $73696

Using Trained Employees, you get this : $76800 + $3826 = $80626

On paper, the cost effective employees do appear to be more cost effective, costing almost $7000 ($6930) less.

However, in the case of the Contractor, hiring the Cost Effective Employees means paying $18720 to handle overtime charges. In the case of hiring Technicians, the overtime charges are only $7200, less than half.
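
The two Number Crunching sections can be re-run as one short script (my own restatement of the arithmetic above; note that exact math lands a few dollars off the owner-side totals quoted in the text, since those use rounded intermediates like the $1.33 per-call cost):

```python
import math

def scenario(wage, callback_rate, staff=192, calls_per_day=9216):
    """Weekly cost figures for one staffing scenario from the post."""
    weekly_payroll = staff * wage * 40               # e.g. 192 people * $8 * 40h
    per_call = (weekly_payroll / 5) / calls_per_day  # daily payroll / daily calls
    extra_calls = calls_per_day * callback_rate      # call-backs per day
    owner_extra = extra_calls * per_call * 5         # owner pays per call, weekly
    # Contractor absorbs the call-backs with time-and-a-half staff,
    # each handling 6 calls an hour, around the clock:
    people = math.ceil((extra_calls / 24) / 6)
    contractor_extra = people * (wage * 1.5) * 24 * 5
    return weekly_payroll, round(owner_extra), contractor_extra

cheap = scenario(wage=8, callback_rate=0.20)
trained = scenario(wage=10, callback_rate=0.05)
print("cost effective:", cheap)
print("trained techs: ", trained)
```

The contractor-side overtime figures come out to the $18720 and $7200 above exactly; the owner-side extras come out to $12288 and $3840 against the rounded $12256 and $3826 in the text.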

Okay, so why 192 people? Why $8 an hour and $10 an hour?

The 192 is easiest to explain. I originally started running these numbers based on a $12 an hour paycheck, which is what I was earning when I left Sitel. 12*16=192, which was easier math to keep track of. I went with $8 and $10 because I started at $8, and those numbers should be about average for today. The final result shows just how close the costs can be.

The question that needs to be asked is this: Is it really worth it? Is it really worth saving $7000 to hire non-technicians to handle technical calls?

In the case of the trained technicians, you err on the side of the "worst case" scenario. It is presumed that only 5 percent of the callers need to call back. Realistically speaking, 99% of the calls received in HSI or Computer support are 2 minute quick jobs, things like "Plug the power cord in."

In the case of the "cost effective employees" you presume only a minor 20% call back ratio. That number, while pulled from SilverBear's post, errs significantly below the call back ratio I experienced. Take it from somebody who worked in tech support with the "Cost effective method". It was not uncommon when I took a call for there to be 6 or 7 entries already logged into the database for that day. Multiple times, when I had the customer explain to me what was going on, I knew exactly whose login I would find in their account. It was no joke among those of us technicians at Sitel that the other call centers did not have a bloody clue what was being told to the customer. When I left Sitel, the callback ratio from other Call Centers was somewhere around 40-45%. Almost 1 out of every 2 callers would have to call back in for assistance.

Okay, fine. I am just one tech. I didn't talk to everybody. I did not handle all the calls. I did not see everything. All I knew was that if I took 70 calls in one night, more than 30 would be from customers calling back in. I filed several emails with our training staff and management asking them to please get a hold of the other call centers and get their acts straightened out.

I personally wrote Training Documentation and user manuals for Cox Communications Tier 1 HSI, and sent the data to our own Trainer, to San Diego, to Atlanta, to Tier2-HRD, to CTC, to Sykes, to Texas, to everywhere I could, because I was fed up with Script Kiddies screwing around with clients.

The Breaking Point : Further Calculations on Cost Effective Compounding

I reached my breaking point after 3 years.

At what point do the consumers break, though? With the cost effective employees, 1843 people could be expected to call back in with problems. What happens if they don't get their problem resolved on their second call? What happens if the customer has to call back a 3rd, or 4th, time? Let's keep the 20% ratio.

20% of 1843 people call back in for a 3rd call. That is about 369 (368.6) consumers calling back.

20% of 369 call back for a 4th time. Now we have about 74 more calls (73.8).

9216 calls + 1843 calls + 369 calls + 74 calls = 11502 calls.

At $1.33 per call, the owning company is looking at a charge of $15297.66.

The contractor now has to account for the additional overtime to handle these call backs.

369 calls / 24 hours ~15 calls an hour (15.375). 15 calls an hour / 6 calls per person ~ 2.5 people

So, the contractor now needs an additional 3 people each hour to handle the 3rd call backs.

That's a total of 13 + 3 = 16 people per hour to handle the call backs.

74 calls / 24 hours ~ 3 calls an hour (3.08). 3 calls an hour / 6 calls per person ~ 0.5

So only on the 4th call back, again presuming a 20% ratio, do we reach a point where the people already accounted for can cover the extra calls.

So, 16 people at $12 an hour (the $8 wage at time and a half) = $192 per hour. $192 * 24 = $4608 per day.

$4608 * 5 = $23040

$23040 is the total amount of money that needs to be set aside for the "cost effective" call center to pay the overtime for staff to stay and take the calls.
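The compounding above reduces to a short sketch. The 13 people per hour already on overtime comes from the earlier 20% second-call-back math in the post; everything else is the figures just walked through:

```python
import math

base = 9216
rate = 0.20                                   # 20% call back each round
second = round(base * rate)                   # 1843
third = round(second * rate)                  # 369 (368.6)
fourth = round(third * rate)                  # 74 (73.8)
total_calls = base + second + third + fourth  # 11502

daily_charge = round(total_calls * 1.33, 2)   # owning company, $1.33 per call

extra_staff = math.ceil((third / 24) / 6)     # ~15.4 calls/hr -> 3 more people
staff = 13 + extra_staff                      # 13 already covered the 2nd wave
overtime_wage = 8 * 1.5                       # $8 base at time and a half = $12
weekly_overtime = staff * overtime_wage * 24 * 5   # $23040

print(total_calls, daily_charge, staff, weekly_overtime)
```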


The Breaking Point : Further Calculations on Technician Compounding

Now let's run the same call backs with our Technicians.

5% of 461 people call back in for a 3rd call. That is about 23 (23.05) consumers calling back.

5% of 23 call back for a 4th time. Now we have about 1 more call (1.15).

9216 calls + 461 calls + 23 calls + 1 call = 9701 calls.

At $1.66 per call, the owning company is looking at a charge of $16103.66

The contractor now has to account for the additional overtime to handle these call backs.

23 calls / 24 hours ~ 1 call an hour (0.96). 1 call an hour / 6 calls per person ~ 0.2 (0.16).

On the first go-around we needed about 3.3 people to handle the calls, and it was rounded up to 4. With that round-up, we've already accounted for 5% call backs on 5% of the original calls.

The cost to the contractor doesn't change. Paying $10 per hour to trained techs with time-and-a-half overtime, and accounting for a 5% callback repetition, the contractor is covered with just 4 techs on overtime each hour.
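And the same sketch for the trained-tech side, under the same assumptions but with the 5% call-back rate:

```python
base = 9216
rate = 0.05                                   # 5% call back each round
second = round(base * rate)                   # 461
third = round(second * rate)                  # 23
fourth = round(third * rate)                  # 1
total_calls = base + second + third + fourth  # 9701

daily_charge = round(total_calls * 1.66, 2)   # owning company, $1.66 per call

# 23 third-round calls is under one call an hour, so the 4 techs
# already rounded up from ~3.3 absorb every later wave for free.
print(total_calls, daily_charge)
```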


The Final Crunch

The Cost Effective Business model winds up with a charge of $61440 + $23040 = $84480

The Technician Model is still at $80626

Suddenly, the Cost Effective Model... isn't. It is MORE EXPENSIVE.
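One last check on the crunch, with all figures taken from the post:

```python
cost_effective = 61440 + 23040   # base charges + compounded overtime
technician = 76800 + 3826        # unchanged: 4 techs an hour already cover it
print(cost_effective, technician, cost_effective - technician)
```

The "cost effective" model comes out $3854 more expensive per week once the compounded call backs are priced in.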

Now, I apparently am dead wrong about this, as the Accountants running Tech Call centers don't consider it worth the cost to pay out for Technical Staff.

In my example, would it not be better just to pay out that extra $7000 or so and get technicians who get things done on the first try, rather than risk having to hire extra staff or pay overtime? According to an Accountant, I would be wrong.

Oh wait... I wasn't wrong. Cox Communications sold their contract because Sitel went the cost effective route, and it failed.

The problem is, Accountants do not think about things like "One Call Resolution." It is my opinion that such concepts are beyond their grasp. But even when playing the numbers game, the Cost Effective method just isn't viable, period.

The Public Explanation

LnoyBoy brought up the public explanation given for going the cost effective route:

Really interesting post.

I guess I'd been fed a line of bull. I was told that the script readers were in place
to take up the "are your cables loose?", and "is it plugged in?" type questions, so
the real techs wouldn't have to deal with them.

That... really isn't a line of bull. Trust me, if you ever get bored enough to read my old Live Journal ( ), I had to deal with thousands of callers each year that pulled a cable loose, kicked the power cord out, set the modem in standby, or couldn't identify the power cord.

Then there are/were the people who call in during an outage, while a "Your Area Is In an Outage" message plays over the phone system, and ask "Is there an outage?"

The real problem that Sitel ran into is that the script readers didn't know when to stop reading the script and start diagnosing the problem.

E.g.: Somebody calls in with a slow or intermittent connection. The first thing you have the customer do is run a couple of ping tests. Get a local server, get a server some distance away (like Google or Yahoo), then grab a server across the ocean (I went for ). You look for the overall ping response and the variance in ping speeds. You check the modem's levels and make sure that the Signal to Noise ratio is within range. If anything at all looks wrong with the signals, you roll a truck and get a line tech out there.
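A rough sketch of what "overall response and variance" means in practice. The round-trip times here are made up for illustration, not real measurements:

```python
from statistics import mean, pstdev

# Hypothetical round-trip times in ms from three ping runs:
# a local server, a big nearby site, and one across the ocean.
samples = {
    "local":    [8, 9, 8, 10, 9],
    "nearby":   [35, 37, 110, 36, 240],    # note the spikes
    "overseas": [150, 152, 149, 151, 150],
}

for name, rtts in samples.items():
    avg, spread = mean(rtts), pstdev(rtts)
    # A big spread, not a big average, is what points at a line problem:
    # the steady overseas link is fine; the jittery nearby one is not.
    verdict = "check the signal levels" if spread > 20 else "looks clean"
    print(f"{name}: avg={avg:.0f}ms spread={spread:.0f}ms -> {verdict}")
```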

The problem for us was that a lot of the script readers were not even opening up the modem to check on the levels. Several would feed the customer some line of manure about their computer, advise taking the computer to a repair tech, and end the call. Others would simply send the caller right up to Tier 2 support. Tier 2 takes one look at the account, sees no troubleshooting, and sends the customer right back down to Tier 1 (front line calls) for troubleshooting. So, if you have a 10 minute hold time before the customer gets to Tier 1, they spend 10 minutes on the phone waiting, 2 minutes talking to the front line rep, however many minutes waiting on Tier 2, then go right back into the 10 minute wait to talk with a front line tech again. That's... 22 minutes at a minimum, and the customer's problem hasn't been addressed at all.

However, as far as I can tell, accountants are either unable or unwilling to see these trends, or to account for them. Accountants look at the "bottom line" of how much "has to be paid out"... not how much "has to be earned."

It is my opinion that one of the reasons Cox Communications sold many markets to Suddenlink Communications at rock bottom prices is the Tech Support and Customer Service that the customers were getting. I still remember taking Las Vegas calls back in 2003 and having upwards of 45 minute hold times. Northern Virginia? You have a Senator wait 30 minutes, and trust me, your ears will /bleed/.

Customers don't want that, and at some point, they'll draw a line. We worked our tails off to get the call times down and to get things fixed on the first try, because those of us on the front line knew that none of these callers had to call. None of them had to buy our product, or our services. It was entirely voluntary on the customer's part, and if their product was not being delivered, they would take their money elsewhere.

If you are a company, in general terms customers do not have to buy from you. When the customers stop paying for the service, what happens? Was it worth possibly saving under $10,000 on a service contract when the fallout might cause an overall loss of $75,000,000? (Not singling out any now-Suddenlink-owned market here.)

From a Service standpoint, a balance needs to be struck. The techs need to be polite to the callers, but techs still need to be techs. A lot of Accountants running Call Centers just don't get that.