Wednesday, September 12, 2012

Shadow Test #4

Experimenting with labeling and email sharing.

Shadow Test #3

Round 3: If this works, a post from my main account should be pushed to the GNiE G+ feed.

Shadow Account Test

If this works properly... this post should be automagically pushed to the GNiE G+ feed.

Saturday, July 07, 2012

Digital Rights: Removed and Regained

Here is an interesting question for you: What is the difference between the popular anti-cheat program PunkBuster and malicious rootkits like Microsoft ZDPP, Tages, or SecuROM?

On the surface, all of these programs have the same basic function: implementing a software lock on a program.

This software lock prevents the users of that program from carrying out certain actions, and at the extreme end can prevent users from running the program at all. Despite these similarities, PunkBuster is welcomed in gaming communities, while the other utilities are openly reviled.

The impetus for this thought was the repetition of an oft-used claim I saw on the Sega Forums. Somebody commented that the application in question would never be on /Linux. The statement was based on that application's usage of nProtect's GameGuard, which, quote/unquote, "goes against the general use policies of linux."

The reasoning behind the statement took me aback. For starters, there are no such general use policies for /Linux systems. Secondly, I am very familiar with the GameGuard program: it is a competitor to PunkBuster and, as far as I am aware, not a malicious rootkit or a Digital-Rights-reMoval application. In my mind there is a clear difference between useful utilities that prevent players from hacking games, malicious DRM rootkits, and benign DRM services.

The Lock-Out Implementations
  • Anti-Hacking: PunkBuster, GameGuard, Valve.Anti.Cheat
Tools like these prevent a computer user from breaking an application and using that break to affect other players in a networked environment. They generally do not prevent modifications to the program itself, the digital lock runs as a process, and their use is generally optional: the person hosting the network server must enable the anti-hacking tool, and the person launching the client application must agree to use it.
  • DRM Malicious Rootkits: SecuROM, Tages, ZDPP
Rootkits like these prevent the user from accessing the application itself, and the digital lock is generally implemented system-wide rather than per-process. The result is that these rootkits take control of the application away from the user and can cause permanent system-level damage. Most of them have limited activations or installations which cannot be renewed or extended, forcing purchasers to buy the software again if they want to keep using what they already paid for.
  • DRM Benign Single Sign On Services: Desura, Valve.Steam
Services like these require an internet connection in order to authenticate ownership of the application. Applications are stored in a defined container, but users are largely not restricted from modifying stored applications, and the digital lock is often implemented as a process. These services often include additional features such as unattended installation, automatic updating, data-file synchronization, cloud storage, storefronts, and library management.

The drawback to Single-Sign-On systems is that they do not address the offline user. Not to put too fine a point on it, but what was wrong with entering a unique CD-key code?
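For illustration, here is a minimal sketch of the kind of fully offline check a unique CD key makes possible. The salt, key format, and hashing scheme are invented for this example; this is not any vendor's actual algorithm:

```python
import hashlib

# Hypothetical secret baked into the installer; real schemes vary widely.
SECRET_SALT = b"example-salt"

def make_key(serial: int) -> str:
    """Derive a CD key: an 8-digit serial plus a hash-based check code."""
    digest = hashlib.sha256(SECRET_SALT + str(serial).encode()).hexdigest()
    return f"{serial:08d}-{digest[:8].upper()}"

def key_is_valid(key: str) -> bool:
    """Re-derive the key from its serial; no server contact required."""
    serial, _check = key.split("-")
    return make_key(int(serial)) == key

key = make_key(12345)
print(key_is_valid(key))                  # → True
print(key_is_valid("00012345-ZZZZZZZZ"))  # → False (hex digests contain no "Z")
```

The point is not the specific scheme; it is that the validation happens entirely on the user's machine, so the offline purchaser is never locked out of what they bought.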

To me these differences are as clear as day and night. But what is the perspective of somebody who is not as technically inclined as I am? Are these programs really all that different?

What about from a moral or ethical standpoint? Is it ethical to lock down computer software to prevent access or modification? Is that a morally right thing to do? For me the determination comes down to a very specific litmus test:
  • Is the lockout going to beneficially affect somebody else's application experience?
  • Is the lockout going to negatively affect your personal application experience?
  • Is the lockout intended to prevent theft of the application?
These three questions pretty much cover the litmus test for applications that implement a software lock out.

Yes, it is morally and ethically correct to lock out software if that lockout prevents a negative experience for somebody else. This covers the anti-hack tools such as PunkBuster and GameGuard, which ensure that people playing games in a networked environment are playing in a fair one. Such lockouts are already supported within the /Linux software ecosystem: technologies such as PunkBuster have native IA32 and x86-64 libraries. Strangely, the aforementioned GameGuard does not advertise GNU/Linux support, even though I am led to believe that GameGuard has at least a native x86-64 client available upon request in order to compete with PunkBuster and Valve.Anti.Cheat.

No, it is not morally or ethically correct for lockout software to prevent you from using the software. This covers the malicious rootkits that can destroy your operating system or force you to re-purchase a software license. It also covers always-on, 24/7 dial-home services for games that are not internet-only.

It is morally and ethically permissible to implement a software lock to prevent theft. However, for this type of lock to be acceptable, the lockout needs to be non-destructive and flexible. Single-Sign-On services are an acceptable compromise that gives content creators a level of theft protection while not threatening the user's computing environment.

The /Linux Perception

With the above concepts in mind, that there are notable differences in software locks and in what makes those lockouts acceptable or unacceptable, how does this relate to /Linux? How do we explain to somebody who is unfamiliar with the /Linux software ecosystem that software lockouts are permissible? How do we explain that there are no such things as usage policies?

The answers to these questions can be complicated. Over the years an extensive amount of Fear, Uncertainty, and Doubt has been generated on the subjects of the /Linux kernel, the GNU Operating System, third party applications, licenses, and many other aspects of the overall /Linux software ecosystem. A very recent case in point is the Free Software Foundation call-out on Canonical over the usage of private keys and Grub2.

Many computer users seem to be under the impression that proprietary programs cannot be run on /Linux systems, or that technologies that implement a software lock cannot be run on /Linux due to some non-existent policy. Most believe this either due to the repetition of F.U.D. from sources such as Microsoft, or due to general confusion from a lack of education. Before going further it would probably be a good idea to clarify the relationships between the kernel, the operating system, and applications. To do that I'll use some breakdowns for Android/Linux and for an embedded GNU/Linux:


In these breakdowns we can clearly see how the components work with each other. Applications talk to the APIs and libraries in the operating system. Those APIs and libraries, in turn, talk to the hardware devices exposed by the kernel. Incidentally, this layout is why you have *updates* for drivers, libraries, and APIs: errors or inefficiencies in these components can affect the entire operating system because of how low in the system they sit.
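To make the layering concrete, here is a small sketch (assuming a Unix-like system with a standard C library available) in which an application asks an operating-system library for information, and that library in turn issues the actual system call to the kernel:

```python
import ctypes
import ctypes.util
import os

# Load whatever C library the operating system provides (glibc on most
# GNU/Linux systems); fall back to the usual soname if lookup fails.
libc_name = ctypes.util.find_library("c") or "libc.so.6"
libc = ctypes.CDLL(libc_name)

# The application calls the OS library's getpid(); the library issues
# the getpid system call to the kernel and returns the result.
pid_via_library = libc.getpid()

# Python's own os.getpid() goes through the same layers, so they agree.
print(pid_via_library == os.getpid())  # → True
```

The application never touches the kernel directly; it only ever sees the operating system's library interface, which is exactly why the library layer, not the kernel, determines application compatibility.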

This breakdown also explains why applications compiled for Android+Chromium are not necessarily compatible with applications compiled for GNU, and vice versa. While the underlying kernel may be the same, applications generally talk to the APIs and libraries in the operating system rather than the kernel itself. This is the Important Bit to remember.

Applications that run on GNU/Linux operating systems generally access the GNU libraries, which are released under the GNU Lesser General Public License. Let me quote something from 2004 written by the FSF:

FSF's position has remained constant throughout: the LGPL works as intended with all known programming languages, including Java. Applications which link to LGPL libraries need not be released under the LGPL. Applications need only follow the requirements in section 6 of the LGPL: allow new versions of the library to be linked with the application; and allow reverse engineering to debug this.

Note the two salient (i.e., bolded) points made by the Free Software Foundation. Any and all applications can access the GNU libraries, regardless of license, financial cost, or any other factor. The only restriction is that the author of that application cannot restrict reverse engineering of their product for the purpose of debugging library updates. Again, this was written in 2004, and some elements of the LGPL have been updated or clarified in LGPL version 3.

There is another restriction to the LGPL, and it is one Google brings up here:

LGPL (in simplified terms) requires either: shipping of source to the application; a written offer for source; or linking the LGPL-ed library dynamically and allowing users to manually upgrade or replace the library.

Many users and developers get hung up on that OR, and for one reason or another believe that using the LGPL'd GNU libraries requires releasing the source code that calls upon those libraries. There are some valid concerns here for some vendors, since dynamic linking can be an issue on embedded platforms such as cellphones and tablets. In such constrained software environments the operating system is distributed as a static image. Historically, most constrained-computing devices (which, for the point's sake, means almost every electronic device with an embedded operating system) were never updated after release. This is one of the reasons phone vendors such as AT&T struggle to get Android operating system updates out on anything that does not resemble a geological timescale. AT&T has not yet adjusted to users not only wanting, but demanding and expecting, operating system updates on an embedded device as part of the service plan.

In terms of desktop usage, this is not really a problem. Although many users and developers might be unfamiliar with the /Linux software ecosystem, they should be familiar with the Microsoft Windows distribution method. Microsoft generally presses out a static disc image for their Windows Operating System, and it is this image that is distributed to end users and vendors. The end-users and vendors are responsible for ensuring that the static-image that was distributed is then updated with the latest sets of software patches. This is a very normal operating procedure for users of desktop computers.

Most GNU/Linux systems work in much the same way. The user installs the operating system, then pulls down updates for it. Developers writing for GNU/Linux systems thus have to decide whether to statically link their library files for distribution, or to dynamically link and simply use the libraries provided by the operating system. Dynamically linked applications are generally preferred, since users can have a wide range of GNU libraries in use and dynamic linking is the only sane way to distribute applications.

To reiterate: the dynamic-linking option in place of shipping source code does not, in any way shape or form, concept or idea, particle or boson, prevent a proprietary application with an attached financial cost from using the GNU libraries.
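As a concrete sketch of that dynamic-linking option: the toy program below never embeds the GNU C library's code. It binds at run time to whichever libc the user's system provides, which the user remains free to upgrade or replace (the library lookup assumes a Unix-like system):

```python
import ctypes
import ctypes.util

# Resolve the system's C library at run time instead of copying its
# code into the application; this is the LGPL's dynamic-linking path.
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

# Declare the signature of a libc function and call it. The calling
# application can stay proprietary; only the library stays replaceable.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"GNU/Linux")
print(length)  # → 9
```

If the user installs a newer libc, the same unmodified application picks it up on the next launch, which is exactly the "allow users to manually upgrade or replace the library" option Google describes.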

What about the Kernel?

So then, if there is nothing to prevent proprietary programs from running natively on /Linux systems, what about the Linux Foundation's call-out of Nvidia for its proprietary drivers in 2008?

Again, these are two separate things. The /Linux kernel is just that, a KERNEL; it is not an operating system. For the /Linux kernel, proprietary drivers are a nightmare, which is why there are only two real proprietary drivers of note: Nvidia's GLX driver and AMD's fglrx. Linux kernel development occurs almost too fast to really support an out-of-tree driver API. This rapid pace of development is one of the reasons AMD has said they'll be opening up Catalyst for the HSA Foundation (slide 30).

It is important to separate the kernel from the operating system. Yes, the /Linux kernel developers have a very vocal policy against software lockouts and proprietary licenses, but that policy only applies to the /Linux kernel, not to the operating system. Case in point: the Android+Chromium operating system(s) also use the /Linux kernel, but because they are not widely associated with GNU/Linux and the "viral" GNU Public Licenses, they do not suffer from the perception that there is a policy against software lockouts or proprietary-licensed software.

Distributing Digital Rights reMoval software

In theory then, could malicious rootkits under proprietary license like SecuROM or Tages be brought to the GNU/Linux platform? Assuming that the native-client applications were dynamically linked to the GNU Libraries, then yes.

Could those applications be distributed? If the license allows for the unencumbered redistribution of the application, then yes.

Would those programs be distributed? This is the better question to ask. One of the signature problems of commercial /Linux support is actually getting applications into the hands of downstream users. Many Windows users may be unfamiliar with general GNU/Linux distribution methods, but are probably familiar by now with digital distribution through applications like Valve.Steam, Google Play, the iTunes App Store, or the Amazon Appstore. These digital distribution stores are largely modeled after networked software storage systems developed for GNU/Linux, known as package repositories. Valve.Steam, for example, is often referred to as "Apt for Windows" given its multiple similarities to the Debian Apt system.

Many of the applications released into the /Linux software ecosystem are released under open-source licenses with unencumbered distributions. This allows the programs to leverage the networked system package repositories for storage and distribution. Programs with proprietary licenses can still be distributed through package repositories. Case in point, Debian designates proprietary licensed applications as non-free and makes them available, although as a separate option from the main distribution.

There is a difference between an application that can be added to a repository, and one that will be added. Most package repositories tend to be guarded with multiple levels of security. For example, becoming a Debian Maintainer requires jumping through lots of hoops, including physically meeting with another maintainer. Adding a deliberately malicious package would destroy the trust the downstream users have in the maintainers of the repository. Offhand, I think this might be where the idea of a universal policy against software lockouts came from.

There is a drastic difference between a Repository Maintainer protecting downstream users from a malicious application, and a policy against software lockouts. One does not beget the other.

Where do we go from here?

The /Linux software ecosystem continues to grow across both GNU/Linux and Android+Chromium/Linux. Commercial vendors who have long ignored the /Linux software ecosystem are slowly being forced into adopting platform-neutral development techniques. In all fairness, the platform-neutral approach has also been helped by deliberate breaks in Microsoft's Windows operating systems. For many developers the only way to target Windows XP, Windows 7, and Windows 8 for application deployment is to adopt a platform-neutral development strategy, such as adopting graphics technologies like OpenGL over DirectX.

From my perspective the turn-about has been both hilarious and painful to watch. Companies like Valve and Unigine tend to approach /Linux, and for that matter platform-neutral development and distribution, as a market reality rather than a one-off experiment. Companies like Electronic Arts tend to approach /Linux development and distribution as an experiment, something that can be abandoned if things do not go completely right. Companies like Activision will happily use /Linux for servers, but have no idea what to do with the desktop /Linux market other than ban Diablo III players who were not using Windows.

With non-native repository solutions such as Valve.Steam and Desura, vendors now have an external way to distribute their native-client protected applications to downstream users within the /Linux software ecosystem. Does this mean that we will see the rise of malicious software distribution through Valve.Steam or Desura?

My guess is an "unlikely no." There are multiple reasons for this, starting with the simple fact that most malicious software lockouts never really worked to begin with. Anti-consumers such as pirates were not halted by malicious rootkits such as SecuROM, Tages, or ZDPP. The malicious rootkits only impacted legitimate users.

Then there is the network-connection question. Some of the vendors I've talked with over the years admitted that they shipped a malicious rootkit instead of a Single-Sign-On service for the sole reason that they wanted to prevent application theft from an offline user. The computing market has changed greatly in the past several years as Internet Access has become almost ubiquitous. While it might be possible that there are still Windows users who are buying modern-day application packages with no intention of ever connecting to the Internet, I think it would be a bit of a stretch to find a /Linux user with disposable income looking to buy a modern-day application package with no internet access. I think application vendors could probably be assured that solely distributing their applications through a Single-Sign-On service on GNU/Linux such as Valve.Steam or Desura would not limit or hamper potential sale opportunities.

The resistance to pushing commercial released consumer applications into the /Linux software ecosystem is not going to go away overnight. Vendors and consumers need to be educated on what the Open-Source licenses really say, and years of Reaper Indoctrination say Shepard is alive, I mean, years of Microsoft's F.U.D. flinging are going to be difficult to counter. We will continue to see vendors decline to release their applications into the GNU/Linux ecosystem due to concerns over licensing, library linking, or imagined policies, regardless of what the facts actually are.

Wednesday, May 16, 2012

Android Centralization

I normally don't repost comments I've made in other people's G+ streams. Well, this time I am, by collecting some of the comments together and expanding upon them.

One of the murmurs going around tech circles right now is an apparent push from Google to centralize Android/Linux distribution. The objective on Google's part is to get Android/Linux updates into the hands of users. Case in point: the latest version of Android/Linux is the four-to-five-month-old Ice Cream Sandwich 4.0 release. However, Android phones are still shipping with the literally years-old Android Gingerbread 2.x release, and many popular Android/Linux phones, like the Galaxy S II, still haven't been updated despite carrier promises (AT&T) to get their act together.

The largest issue caused by the inability of carriers to get their acts together and get updates out to device owners is not application incompatibility, despite what some reports, and programmers for that matter, would have you believe. The actual Android Application Programming Interfaces are very clearly defined and relatively stable between releases. The reference case here could be any long-standing RPM- or Debian-based GNU/Linux distribution. Despite a wide variety of GNU/Linux operating system environments and /Linux kernels, you can generally maintain program compatibility if you target the published APIs. Case in point: I can still run games like UT'99 and Doom 3 that haven't had any effective updates to their binary executables in years.

From what I can tell from the Android/Linux documentation, Google wasn't particularly worried about operating system fragmentation, since they could implement API fallbacks for deprecated API features. In practice the program-compatibility question does become complicated: the level of control given to vendors means that API fallbacks may not be exposed on any given Android/Linux device a user actually has in hand. There can also be other vendor-caused issues, such as "baked-in" shovelware like Facebook.
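The fallback idea can be sketched generically. This is not Android code (on a real device the probe would be the platform's reported API level), and the function and constant names here are invented for illustration:

```python
# Invented API-level threshold for illustration; Android numbers its
# levels similarly (Gingerbread 2.3 is 9-10, Ice Cream Sandwich is 14-15).
ICE_CREAM_SANDWICH = 14

def share_photo(photo: str, api_level: int) -> str:
    """Use the newer sharing path when the platform exposes it,
    otherwise fall back to the older code path."""
    if api_level >= ICE_CREAM_SANDWICH:
        return f"shared {photo} via the new API"
    return f"shared {photo} via the legacy path"

print(share_photo("vacation.jpg", api_level=10))  # legacy path
print(share_photo("vacation.jpg", api_level=15))  # new API
```

The catch described above is that a vendor build may not actually expose the expected fallback, so the version check alone cannot guarantee compatibility on every device.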

In my opinion the largest of these vendor-caused issues is the exposure of users to security threats. As Android/Linux devices become more widespread they become a more desirable target for malicious software and targeted attacks. It is my opinion that too many security choices are left in the hands of the phone carriers, many of whom have a long history of proving they have no business participating in software distribution or management.

Centralizing Android distribution allows Google to force the software-compatibility and security issues. This proposed centralization is very similar to the methods Apple uses with the iPhone, which raises the question of why it took Google so long to mimic Apple's successful software-management strategy. My opinion here is that recent history is being re-written by present events.

Google publicly launched the Open Handset Alliance in November of 2007. When Google started the Open Handset Alliance, it was trying to break into the smartphone market. The Android/Linux platform was unproven and, for that matter, unwanted. In 2007 there was active market competition not just from Apple's iPhone, but also from Palm and RIM. Case in point: in 2007 Palm was both profitable and already talking about its next-generation software platform. Granted, 2007 was also when RIM thought the iPhone was a wormhole product. Just getting a foothold against the established players in the smartphone market meant Google had to take a fundamentally different approach than Apple, which meant decentralizing updates and allowing phone vendors to do their own things.

Fast forward to today and the market itself has changed. Former smartphone powerhouses Palm and RIM have been mismanaged into irrelevance. One of the strongest players in the dumbphone market, Nokia, suffered not only from internal mismanagement that encouraged in-house competition to ludicrous levels (Symbian, /Linux, and Qt); it also suffered from external mismanagement (Microsoft, Elop, and Windows Phone). Also, yes, I know I just linked to an article written by Andrew Orlowski. As far as I can tell that article is actually accurate. Yes. I checked. Multiple times.

Anyways, the very competitors that forced Google to make concessions to get the OHA rolling in the first place are, as far as the market is concerned, gone.

There are other factors to consider, such as the emergence of malicious software that attacks mobile devices, and then the whole updating question. Quite frankly, Apple pioneered the entire concept of a smart device getting an operating system and functionality upgrade, something Palm and RIM users had always equated to "spend more money on a new device."

Another unmentioned factor here is Kaz Hirai's push on Android through Sony. Sony needs Android centralized in order for its content-driven PlayStation Suite plans to actually work. I'm not too terribly interested in going into why Sony needs a centralized Android, since that is its own story and will likely be posted on GNiE.

Additionally, there is the whole legal quagmire of design patents, software patents, copyrighted APIs, and so on and so forth. Google's ongoing triumph against Oracle in the courtroom is probably the straw that broke some of the involved camels' backs.

From an outsider's standpoint, I think Google only now has the market muscle to actually push centralization of Android distribution onto unwilling carriers. From the user's perspective, the hopeful outcome is that devices running Android/Linux will be updated within a reasonable amount of time, say a couple of weeks if not days, from the launch of new software versions. I'm not sure what the hardware vendors' perspective is, but the loss of software competitors means they'll just have to focus on making better hardware. I can imagine what the carriers' perspective is right now: Google is taking away their baked-in cash-cow shovelware deals and giving users back their devices. I think AT&T is not going to be happy about this.

And before you ask, I'm singling out AT&T since the Samsung Galaxy S II was flagged for ICS 4.0 and, as far as my friends tell me, it's still absent without explanation.

Saturday, May 12, 2012

Homosexuals: It was never about Rights.

Okay, I've had enough. I think the breaking point for me was somebody who declared that Homosexuals being unable to marry was equivalent to the "Separate But Equal" doctrine. That's a load of horse hockey, and it's probably about time somebody shut the entire concept down. Might as well be me.

The thing is, there has been a dramatic push by supporters of the Homosexual Agenda and their counterparts among the Liberal Democrats to compare the lack of a legal right for Homosexuals to marry to various legitimate civil rights issues of the past. Comparisons can involve the female right to vote and the civil rights of non-white US citizens. The Homosexuals and their supporters claim that they are being discriminated against. Okay, so the first point of contention here is to define discrimination. The dictionary defines Discrimination as: "treatment or consideration of, or making a distinction in favor of or against, a person or thing based on the group, class, or category to which that person or thing belongs rather than on individual merit." Let me phrase this in terms of plain English:
  • If I were the owner of a computer repair shop and I needed a new employee to repair computers, it would NOT be discrimination for me to ignore applicants who have no experience repairing computers. It would be discrimination if I were to discard an applicant with experience in repairing computers because they were Catholic. Their religious status makes no difference to the job.
  • If I were the owner of a bakery and I needed a new baker, it would NOT be discrimination for me to ignore applicants who have no experience in baking. It would be discrimination if I were to discard an applicant with a Diploma in European Baking and Pastry because they were white. Their color makes no difference to the job.
  • To take this to the extreme, if I were the owner of a car repair shop and I needed somebody with two hands to help with repairs, it would NOT be discrimination to turn down an applicant for that job who was missing an arm. It would be discrimination for me to discard that applicant because he had a girlfriend. His sexual status makes no difference to the job.
With the idea of discrimination defined and the concept framed in real-world terms, what does this mean in regards to the Homosexual Agenda and its supporters? How is the lack of legal recognition of marriages between Homosexuals discrimination? In order to answer this question we need to define Marriage. The dictionary defines Marriage as: "the social institution under which a man and woman establish their decision to live as husband and wife by legal commitments, religious ceremonies, etc."

Next Question: Who defines the social institution?
  • Is it the established central government? No. Governments typically establish Civil Unions.
  • Is it a Church? Yes, in part; most marriages are referred to as Holy Matrimony and are performed under the authority of a priest.
  • Is it a religious convention? Yes, in part; the concept of marriage as Holy Matrimony was laid out within the books known as the Jewish Torah, books that are accepted by the Jewish, Christian, and Islamic religions.
This now poses the question: if a Government can recognize its own form of legal union without regard to a religious mandate, why does a Government need to recognize a Marriage? Let me phrase this question another way: what is the point of Marriage?
  • Is it love for your partner?
  • Is it to have kids?
  • Is it to save money?
  • Is it convenience?
Good questions, but what does Marriage do that a Civil Union does not? Yes, this will be pertinent in a bit. Let me add another question right now: what does the Government gain from recognizing a Civil Union or a Marriage?

The short answer is this: from the Government's point of view, the sole reason a Government needs to recognize the legal status of people living together is that it provides the Government with a concrete benefit. From the perspective of a Government, the concept of Marriage has only one concrete benefit: the production of more citizens. All of the fiscal benefits that married couples receive are designed to do one thing, and one thing only: aid that couple in producing children.

Remember something I said years ago, that the Homosexual Agenda is just about money? Well, it's not, and I'll get to that in a second. Here, though, is one half of the crux of the Homosexual Marriage push: it is to award Homosexuals the same financial benefits that Heterosexual couples are awarded. However, Homosexual couples are physically incapable of fulfilling the physical requirements for those benefits.

This goes back to what I opened with in terms of Discrimination. It is not discrimination to withhold or disallow a person from partaking in a specific job, benefit, event, or whatever, if they don't meet the requirements for that specific job, benefit, event, or whatever.

The reality is this: Homosexuals do not qualify for the Benefits of Marriage to a Government. Ergo it is not Discrimination to disallow Homosexuals Citizens the benefits that are granted to Heterosexual Citizens.

I suspect that these statements will produce lots of teeth gnashing and probably earn lots of vicious whining from people who hadn't actually thought this through. I'm not finished lobbing bricks through the glass houses, though. Remember a question from just a few lines ago, what does Marriage do that a Civil Union does not? Let's bring that back up. What DOES a Marriage do that a Civil Union doesn't?

Here's the short answer: a Marriage is generally established by a church or a religious body, while a Civil Union is generally established by a government. This difference is the key point of why supporters of the Homosexual Agenda want Marriage recognition.

What supporters of the Homosexual Agenda want is for the Government to tell the Church what the Church has to recognize.

Subtle, isn't it? The same group of people that howl and complain about Separation of Church and State; the same people who have made it all but illegal for priests and pastors to even mention politics from the pulpit; the same people who howl about religious persecution; the same people who stamp their feet and point dramatically anytime it even looks like the "Church" might have a modicum of influence on their lives; are trying to influence the "Church" and interfere with matters of the "Church."

In case you missed the point, this is the textbook definition of Hypocrisy and Double Standard. Supporters of the Homosexual Agenda and Liberal Democrats feel they are free to perform the exact same actions they declare nobody else can perform.

To reiterate, I expect that these statements will also generate a large amount of teeth gnashing and more whining from people who hate to be called out. Unfortunately for them, I'm not done. When looking at the cold hard logic behind the goals that supporters of the Homosexual Agenda are trying to achieve, the question has to be raised: How did this ever become a big deal to begin with? Why have Homosexuals become such a large part of the perceived American Life?

The roots of these questions are found in the so-called Kinsey Reports, two books published on sexual behavior. Many of the commonly accepted ratios for homosexual market penetration and demographics were taken from data provided by the Kinsey Reports. The problem here is that the Kinsey Reports were false, and were medically disproved. All of the figures and ratios developed by Alfred Kinsey were, in fact, fraudulent. For the record, Kinsey himself was a pedophile and is confirmed to have committed acts of sexual abuse. In most academic circles this would result in the immediate rejection of any data furnished by a person who had committed such acts.

One of the larger legacy problems here is the abject failure of the American Medical Association to act on the status of the Kinsey Reports as fraudulent data, or to act on the revelation of the crimes committed by Kinsey. These abject failings have been complicated by other failures of the AMA. Since most people are not aware of these failings, I have more questions to ponder here.

For example, did you know that most people who claim to have same-sex physical desires also have mental or psychological disorders? Did you know that people who have identified as homosexuals who have received treatment for confirmed and diagnosed psychological problems have reported the loss of same-sex attraction? Did you know that people who have identified as homosexuals who have undergone counseling have reported the loss of same-sex attraction? Did you know that the AMA has blacklisted doctors who have tried to research the link between psychological disorders and homosexual attraction? Did you know that the AMA has worked to block medical reports or research that indicate a link between psychological disorders and homosexual attractions? Did you know that the AMA has worked to block medical reports or research that links specific chemicals and or bio-organic compounds to homosexual attractions?

While that sends quite a few of you to Google with exclamations that such claims can't be right and that I have to be wrong, I'm just going to point you to Love Won Out, which will link you to many of the former homosexuals who have been treated or counseled, and some who have been through treatment for other mental disorders and found themselves without homosexual attractions.

Here's another factor to consider. A few years back there was a push to find a "Homosexual Gene" which would cause somebody to become Homosexual. The entire concept didn't sit right with anybody who was awake during high-school biology where we learned that in order for genes to be passed on, there had to be kids. Going back to the physically incapable of producing children bit from earlier, there is no physical way for Homosexuals to pass on a gene that would cause same-sex attraction.

It is, however, very possible to pass on or generate things like Down's Syndrome, Cerebral Palsy, Autism, and many other known birth or near-birth medical conditions. See where this is going? While it is physically impossible to pass on a "homosexual gene," it would be possible to pass on or generate a chemical imbalance or other mental disorder that would cause same-sex attraction. Keep in mind that hormones and pheromones as methods of modifying sexual behavior are scientific facts, not to mention the non-scientific existence of aphrodisiacs which proclaim to modify sexual behavior. This is why the blockage, by the AMA and for that matter other international medical organizations, of research into the psychological, biological, and chemical effects on same-sex attraction is such a major point of contention. Such blockages are not just irresponsible; there is evidence to support that they have stifled and stymied other areas of potential medical advances. All this blockage for the sake of perpetrating a fraud. Again, anybody who was awake through High-School Biology should have caught this. This is not, as one might say, rocket science.

The medical problem has been complicated by the influx of liberal democrats into positions of authority within news sources like the Associated Press, Reuters, CBS, NBC, ABC, Microsoft-NBC, and CNN, and within entertainment production companies. Homosexuals have been given a free pass for promotion by the people who are actually in charge of creating most of the content that is aired on television or in movies. The result is the artificial perception that Homosexuals really do make up a large percentage of an "average population."

Economically speaking, that has never been true. From a purely economic standpoint, companies that support Homosexuals tend to lose money. The dramatic case in point here is the Disney Corporation, which suffered an extended boycott and only managed to stay profitable by slicing expenditures: canceling planned cruise lines, shelving planned resort expansions and renovations, shuttering the 2D animation studio and relying on a third party for Disney family movies, and so on and so forth. The final result of the boycott was the ejection of Michael Eisner and the return of the Disney Corporation to a family-friendly oriented company.

The same holds true with the Voting population. The dramatic case in point here is the vote in California on Homosexual Marriage. California is considered one of the hot-spots for supporters of the Homosexual Agenda, and they still got smacked down. To put it bluntly, every single state that has brought up the definition of Marriage as One Man and One Woman has passed that measure. Every single state that has brought up the possibility of legally recognizing Homosexual Marriage has defeated the measure.

Put bluntly, supporters of the Homosexual Agenda are neither an Economic nor a Political Factor.

What they are is a bunch of people who have been given a megaphone, and told to have fun with it.

To repeat myself, I realize this posting is not going to be very popular. It is going to attract a lot of people who don't want to discuss things in terms of cold hard facts. It is going to attract attention from people who probably wish I had just stayed dormant instead of laying out another colloquial smack down.

Will this posting have any effect on the political landscape as we move closer to the US elections?

Well, that's really up to the people reading this.

Wednesday, April 25, 2012

Valve asked me some questions. Here are my responses to them.

Why do you use Linux (if you do)? 
For me I have several reasons. I've been through most of the stages that many of your other /Linux users will go through, such as:
  • Curiosity as to what /Linux is
  • Curiosity as to what other Operating Systems there are available
  • Knee-jerk response to Microsoft
My current usage of /Linux is dictated by, well, more pragmatic reasons. Bruce Byfield actually has one of the better rundowns on this subject:

KDE basically, well, dominates in terms of desktop ease of use. KDE has far more functionality and is far more flexible, and thus far more useful, than any other desktop environment on any operating system. I use Linux specifically because it lets me actually -use- my computer on my terms.

I suppose a better question would be: Why do I still use Microsoft Windows at all? Quick shot there: I'm a gamer. Game publishers don't know how to approach /Linux, and therefore if I want to game I'm stuck with either half-useful emulation solutions or wrappers, which don't always work.

What would you like to see Valve do here? What about non-game related things? 

These two kind of go together. I've written about this subject before back in 2009:

Okay, yes, that post is a bit on the dreamy side with respect to micro-payment potential, but the rest of it, I think, is still very pertinent. Commercial publishers, and for that matter commercial developers, don't know how to approach Linux for multiple reasons. How do game publishers handle the packaging question? How do game publishers handle the API questions? How do publishers handle secure purchases?

In fairness, some of the issues have been addressed in the intervening years. Android/Linux has helped force the issue on the APIs used by developers. The Consumer Desktop Linux market is largely split between Distributions based on Debian (pure) or Debian (Ubuntu), to the point that if you target Debian (pure) for development you can be pretty sure your application will be compatible with whatever /Linux the downstream user has installed. Application stores such as Steam, iTunes, the Amazon Android App Store, and Google Play have gotten average customers used to the concepts behind central package management and package repositories.

I think there is a lot of room for Valve to move within Desktop /Linux in respect to games. However, that does come with some caveats. GNU/Linux is not going to be an automatic million-dollar maker out of the gate. The commercial games industry has shot itself in the ass so many times on the /Linux subject that most of the commercial games' target market no longer cares. Granted, the inability of the commercial games market to target Linux has been a fueling factor in the explosion of Independent developers: case in point being things like Humble Bundle, Indie Royale, and Kickstarter.

I suspect that a lot of the Desktop/Linux market is going to be looking for parity first: i.e. games they already have on Steam becoming available for download and play on /Linux. But Valve's been in this position before, when Steam was just getting started on Windows and when Steam first hit OSX. I'm pretty sure that Valve has a handle on what it means to grow a market, and customers will always be looking to purchase entertainment.


As far as non-game applications go, I'm not sure there's a lot of immediate room to work. Valve has hinted in the past at a desire to push commercial software packages through Steam, not just games. This is probably a smart move on Valve's part.

However, there is no shortage of non-game software within GNU/Linux, partly a consequence of the commercial market's inability to get consumer software products onto GNU/Linux. Some of the major applications consumers use today, such as Firefox, Chrome Browser, Libre Office, and VLC, have roots in /Linux development. Most GNU/Linux distributions come with a wide range of software preloaded to address most common computer usage requirements, and pretty much all GNU/Linux distributions leverage package repositories for additional software packages. That is, after all, a major point of using GNU/Linux to begin with.

Where I think there is a lot of future room to work is within the definition of non-game but still pertaining to entertainment. Expanding Steam to include movies, music, or even e-books, would help broaden the appeal of the Steam platform.

Another long-term factor would be Steam on Android in general. One of the benefits of the Android software ecosystem has been the competing application stores and the ability to load applications that don't come from stores. Sony is gearing up to enter the competition with Playstation Suite, which will connect with other Sony services for music and movies. Sony seems to be positioning themselves to sell content without regard to hardware: buy a movie on Playstation Suite and watch it on your PS3, PS4, Vita, or any device capable of running Playstation Suite.

What I don't know is whether or not Valve is actually working with Sony on this positioning. I can very easily see a deal going down between Valve and Sony for SteamPlay Content to automatically include access to that content on Playstation Suite, and vice versa.

Monday, April 23, 2012

Okay. This is interesting.

I just signed into blogger after the Google+ revamp went live to find that Blogger has also received a back-end revamp. 

Unfortunately it looks like while the UI got an overhaul, the HTML generator didn't. 

This post is five sentences of plain text with 298 characters, 367 if you count spaces.

It generated 13 lines of html with 834 characters, 935 with spaces.
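For anyone curious, markup overhead like this is easy to measure yourself. Here is a minimal Python sketch using the standard library's html.parser; the `markup_overhead` helper and the sample fragment are my own illustration, not Blogger's actual output:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the text content, discarding tags and attributes."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def markup_overhead(html):
    """Return (text_chars, html_chars, ratio) for an HTML fragment."""
    parser = TextExtractor()
    parser.feed(html)
    text = "".join(parser.parts)
    return len(text), len(html), len(html) / max(len(text), 1)

# A fragment in the style a verbose generator emits: nested tags and
# inline styles wrapping what is logically one plain-text sentence.
sample = '<div class="post"><span style="font-family: Arial;">Hello, world.</span></div>'
text_len, html_len, ratio = markup_overhead(sample)
print(text_len, html_len, round(ratio, 1))
```

Run against the post above (298 characters of text, 834 of HTML), the same arithmetic gives an overhead ratio of roughly 2.8x.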

Tuesday, March 27, 2012


I kind of got asked recently why I seriously dislike AT&T as a company. I've made more than a few cracks about their inability to upgrade their network infrastructure and their paying millions for an exclusive phone license. I also spoke out against their attempted purchase of T-Mobile, and pretty much the entire tech community laughed its collective rear-end off at AT&T's petulant press release on T-Mobile's layoffs.

Then there is the whole problem with AT&T's throttling of subscribers' bandwidth. There is an extreme disconnect between what subscribers think they are paying for, and what AT&T thinks the customer is paying for. Couple this bandwidth throttling debacle with AT&T's continuous history of lousy customer service, and I think the basis for my personal dislike for AT&T is outlined. As a company AT&T has been the poster child for executive incompetence and corporate mismanagement.

The problem with AT&T is further complicated by the fact that, on the whole, the entire company is basically shamed by a few people the shareholders should kick to the curb. I have friends who have been with AT&T for years in the Southeast USA and have never had a problem with dropped calls or bad phone service. I have friends who have gone into the AT&T store and were given online promotional deals by in-store managers. At least local to where I live, AT&T will happily talk discounts for multiyear contracts and multiple phone purchases, something other service providers such as Sprint, Boost, Virgin, and T-Mobile would not discuss.

Then there is the ultimate matter of pricing. AT&T's pricing far undercuts Verizon, and in the Obama Economy, every penny matters. I might not like AT&T's management, but if it was my money on the line for service? If it was other people's money on the line? I am not entirely sure I would not go with AT&T as a cell provider right now.

Friday, February 17, 2012

America is not that Stupid.

Saw an interesting statement by Paul Krugman: "pundits who describe America as a fundamentally conservative country are wrong"

I think Mr. Krugman is right, but not in the way he thinks. I would phrase the statement like this: "America is not fundamentally stupid."

The fact is this: The socialistic economic policies espoused by liberal democrats like Obama and Nancy Pelosi do not work. Just off the top of my head I can reference the U.S.S.R., China, Eastern Europe, and today's Western Europe as examples of this fact. The belief that "Big Brother Government" is a solution to problems has cost billions of human lives over any number of centuries.

I think on several levels the majority of the American Public realizes that Big Government is not an answer and that in order for them to personally succeed they need to do things themselves. Case in point here is me. I went out and got an education as a Pastry Chef. I will be soon starting a job putting my new education to use. I have not relied on the government to finance me. I haven't relied on food stamps. I have pulled myself back out of the hole I fell in and am ready to keep climbing upwards.

* * *

The socialistic agenda and the rise of Big Government in the United States have largely succeeded due to a two-fold push. The first fold has been the traditional control of the mainstream news feeds. Organizations like ABC, NBC, Microsoft-NBC, CBS, CNN, Reuters, and the Associated Press are controlled by executives pushing a liberal democrat / socialist agenda. This control over the news has only been disrupted by the rise of the internet. Services such as MySpace, Facebook, and Blogger have given ordinary citizens the ability to connect with each other on a level that has dismantled the traditional news media. Today's Americans are more comfortable turning on the web browser and doing a quick Google Search to check the background on a story than just accepting the report on TV at face value.

Okay, to be fair I will address the elephant in the room here. Fox News is not the right-wing-leaning pundit its competitors make it out to be. From a purely political perspective Fox News is a neutral party whose owning corporation is a left-wing body, but finds the neutral positioning profitable. It is only in comparison to the other news networks, which are entrenched in liberal democrat dogma, that Fox News appears to be right wing. Again, rise of the internet. Google Search is your friend. The positioning of Fox News as an untrustworthy news source is dismantled on a regular basis. I'm not going to bother here; I have other things to type about.

The other fold has been the judicial system. Liberal Democrats have been successful in pushing their agenda through the courts. Why? Well, because every time a Liberal Democrat policy comes up for an actual vote it gets shot down. The dramatic case in point would be granting homosexuals the "right" to marry.

When it comes down to it, nobody involved with pushing the Homosexual agenda has actually been able to explain why Homosexuals marrying is a "good thing" and should be recognized under law. After all, the legal advantages of being married were initially drafted with the idea of encouraging people to have children. You know, something Homosexuals can't physically accomplish. However, the actual purposes of the legal advantages to being married have largely been ignored by those pressing the Homosexual Agenda, in favor of banging on irrelevant topics like equal rights and equality. When legislation pushing the homosexual agenda has actually come up for a vote, the legislation has been defeated even in states where the percentage of people with liberal democrat and socialistic ideologies is very high, such as California. I went into this subject back in 2007:

Since the votes for pro-homosexual legislation have failed, the supporters have instead turned to the court system to get their way. Again, Google Search is your friend. Look it up. Although I would start here:

* * *

Getting back around to the point I was initially making, the push for socialism and big government has run into roadblocks over the past 4 years alone. The average citizen has seen for themselves the policies and effects of a liberal-democrat aligned political system under Obama. The Tea Party deserves credit as the only reason the degradation was halted at the mid-presidential-term elections.

One of the key things to keep in mind here is that the Tea Party is not really a conservative or republican aligned political party, although the mass news media would love to position the Tea Party as such. The Tea Party is comprised of people with liberal philosophies, conservative philosophies, democrat backgrounds, republican backgrounds, and so on and so forth. The Tea Party is also about education.

An average Tea Party meeting is not just somebody handing out a little card and telling everybody else present how to vote. An average Tea Party meeting is citizens getting together and actually trying to learn about the political and economic issues at hand. The result is a voting party that approaches the polls with a better handle on what is being voted on. The problem the liberal democrats have is that educated voters are not likely to vote for policies, legislation, or candidates that have socialism-aligned goals or ideals. This goes back to what I said at the start: The Average American is not actually stupid.

The backlash from the Tea Party shocked the, for lack of a better term, Old Guard Republicans. For probably the first time since Ronald Reagan the Republican Party was controlled by a new generation of voters who simply didn't toe the party line.

Now, is the Tea Party an overall conservative party? In some aspects yes, one could say the Tea Party has conservative alignments. These alignments are largely based on education of people who want to learn, not simple beliefs or repeated dogma. As evidenced in the 2010 elections, voter education is the worst enemy of liberal democrats and their policies.

* * *

If we accept that the average American citizen is not fundamentally stupid, then yes, economic recovery is possible. Getting officials into office who oppose Socialistic Policies and understand that a larger Government is not going to solve problems is just part of the solution for America moving forward.

Saturday, February 11, 2012

Windows 8: Let's get ready to Rumble

Okay, this post is primarily driven by a Google+ stream by SVN. The background of the post is this: Microsoft is planning the launch of the next Windows 8 test release later this month. We do know some of the details that will be changed in the next release compared to the current Developer Preview such as the removal of the Start Button from the Windows 8 interface.

Now, I've been pretty vocal on just how bad the existing Windows 8 Developer Preview is. I've got an installation set up against an Athlon64 X2 @ 2GHz with a RadeonHD 4650 graphics card. Since the preview was released I've shown it to everybody who has come to visit me either for computer help, to pick up baked goods, or just to hang out. Such people have included Mary-Kay consultants, school janitors, retired teachers, car mechanics, restaurant managers, jewelry store managers, and their friends and relatives. The collected response from everybody who has sat down and actually used the Windows 8 developer preview has ranged from "you cannot be serious" to "if this had been on my computer it would have gone into the trash can."

Thing is, I used to have an installation of Ubuntu running their version of Gnome 2.x, and I asked people to use it. Most, but not all, of the casual consumers I showed Gnome 2.x to reacted negatively to it. That's one of the reasons I slag on Gnome all the time. The Gnome Human-Interface-Design group's approach to a "Grandma Friendly" desktop is complete and utter horse hockey. Windows 8 is the first time that any Gnome 2.x based Linux has actually been described as an interface that casual consumers would prefer if they were given a choice. That's how bad the Developer Preview is, and from the changes Microsoft is making, the upcoming Consumer Beta is going to be WORSE.

As of right now there is a trend towards deliberate design flaws on the part of many Desktop Oriented Linux Distributions. Desktops such as Unity and Gnome 3.x attempt to address non-existent problems such as clutter. Such approaches have resulted in user revolts, with Gnome-centric distributions, such as Linux Mint, attempting to add the functionality of Gnome 2.x back into Gnome 3.x.

The good news for Linux distributions is that their consumer-base is largely made up of consumers that are relatively technology-literate, or communicate with people who are technologically-literate. Ergo design blunders like Unity and Gnome 3.x are being countered and the overall negative effects are mitigated.

Windows 8 has no such user-base connections. Most Windows users tend to be technologically-illiterate. This means that the consequences of Microsoft following the footsteps of Unity and Gnome 3.x are going to be far more severe.

One of the core problems Microsoft faces is the attempt to unify the Phone, Tablet, and Desktop operating systems under one single interface. Microsoft has tried such approaches multiple times in the past, and those attempts have never worked. Microsoft has been pushing the "tablet" form factor and other mobile solutions for well over a decade, but the resulting products are often described as a "solution in search of a problem." Microsoft's previous attempts tried to shove the existing Windows user-interface architecture into smaller system form factors. Windows 8 is an inversion of that approach: it tries to scale a small-form-factor interface into something usable on a larger system form factor.

The market realities that Microsoft has itself proven are as follows:

  • An interface that works well for a large-screen monitor will result in ultra-tiny font and near-unusable controls on a small-screen such as those used by a phone.
  • An interface designed for a small-screen such as those used by a phone will look like something designed for children on larger format screens.
  • An interface that is designed for the precise control of a mouse and the multiple inputs of a keyboard will not directly translate to a touch-screen interface; any such translations will require software overhead to provide for keyboard functionality through the user-interface as well as accommodate less precise pointing methods.
  • An interface that is designed to accommodate touch screens with multiple finger-width possibilities will not directly translate to a keyboard and mouse configuration. On-screen buttons that are sized for a finger to hit will consume an inordinate amount of space for a mouse, and functions that are bound to swipes of the screen and on-screen objects will not be required with other input sources.

The Windows 8 Developer preview scraps everything Microsoft has ever learned about user-interface design from their own product releases. Many such mistakes are the very causes of Microsoft's non-factor status in the mobile market.

The mobile revolution as we know it through Android/Linux and IOS/Mach_BSD has largely occurred precisely because Microsoft was not involved. Android/Linux and IOS/Mach_BSD have been successful for many reasons, such as their use of inexpensive and battery-efficient ARM hardware. Another reason is that the Android/Linux and IOS/Mach_BSD platforms approached the mobile market with user-interfaces and operating systems that were designed to work within small-form-factor design limitations.

Android/Linux and IOS/Mach_BSD are not designed to work on large format systems with multiple input methods, nor is there any real attempt to have either operating system target such systems. Google and Apple maintain completely separate distributions and operating systems to handle traditional desktop tasks: Chromium_OS/Linux and OSX/Mach_BSD.

Considering that Google and Apple have succeeded where Microsoft has consistently failed, one would think that Microsoft would take some notes. Indeed Microsoft has taken notes on the successes of products from Google and Apple. Windows 8 is indeed supporting the ARM architecture.

So let's get this out of the way first of all. BORING.

For those who don't understand why I say this is boring, I'm just going to give you one link:

The Official release version of the Debian Operating System supports 9 Different Processor Architectures with the Linux kernel. In addition the official release supports 2 of those processor architectures using the kfreebsd kernel.

Unofficial and/or discontinued releases of the Debian Operating System include an additional 9 Different Processor Architectures. Unofficial and/or discontinued versions also include support for two more kernels, Hurd and netbsd.

In comparison, Microsoft supporting a single new architecture is downright laughable. Microsoft's attempts at Metro program compatibility are also laughable. When Microsoft covers as many platforms as Debian does, with the program support that Debian manages, then we can have a talk about how incredible Microsoft's engineering team is. Till then, Microsoft is still the amateur chump talking big with absolutely nothing to back up the boasting. Sorry if this is a bit too blunt for the people who thought Windows 8 was somehow doing something new... Debian's been doing this multiple-architecture / multiple-program release thing for over a decade.

Now then. Obviously I am not impressed by Microsoft's support of ARM. I think Microsoft's new position is a knee-jerk reaction to try to defend the commercial industry's reliance on Microsoft branded products from assault. If nothing else, the sales of Android/Linux and IOS/Mach_BSD products have gotten consumers to realize that they don't need Microsoft products on their computing devices.

I also think that Microsoft's design direction taken with Windows 8 is likewise derived from a knee-jerk reaction. Microsoft does not understand where the computing market is going on what amounts to fundamental levels. Microsoft sees Android/Linux and IOS/Mach_BSD as direct competitors and has designed the Windows 8 user-interface to compete against those platforms.

The problem is those platforms are not Microsoft's Competitor. This is:

If I were a Microsoft employee, KDE is what would keep me up at night sweating bullets. The KDE 4.x release already incorporates several interesting technologies that make it a more attractive and productive choice for both business and consumer customers. Long-time tech writer Bruce Byfield has even gone so far as to state:

However, now that software like KDE development is outpacing proprietary choices like Windows, these basic advantages are more compelling than they have ever been. Increasingly, we are now in an era in which free-licensed software like KDE is not only an ethical choice, but a pragmatic one as well.

Among the interesting technologies KDE offers is its Activities. Don't worry if KDE Activities are confusing at first; Bruce Byfield has a very good post on the technology at work. Simply stated, Activities create different desktop interfaces that can handle completely different layouts of the user-interface elements.

Recent updates have extended the base functionality of the Activities technology. For example, KDE 4.8 attached power-management settings to the KRandR and Activities functions. I highly suggest reading drfav's wordpress post on some of the implications of this particular update, as well as the accompanying video. With KDE 4.8 it is now possible to set up different Activities with different power profiles, and those power settings change just by swapping Activities.
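To make that behavior concrete, here is a toy Python model of the idea: each Activity carries a power profile, and switching Activities applies that profile as a side effect. This is purely illustrative and is not KDE's actual API; the class and method names are my own invention:

```python
class ActivityManager:
    """Toy model of the KDE 4.8 behavior described above: each Activity
    carries a power profile, and switching Activities applies it.
    (Illustrative only; this is not KDE's real interface.)"""

    def __init__(self):
        self.profiles = {}       # activity name -> power profile name
        self.current = None
        self.active_profile = None

    def add_activity(self, name, power_profile):
        self.profiles[name] = power_profile

    def switch_to(self, name):
        if name not in self.profiles:
            raise KeyError(f"unknown activity: {name}")
        self.current = name
        # The power settings follow the Activity automatically.
        self.active_profile = self.profiles[name]
        return self.active_profile

mgr = ActivityManager()
mgr.add_activity("On the couch", "powersave")
mgr.add_activity("At the desk", "performance")
print(mgr.switch_to("At the desk"))  # the "performance" profile is applied
```

The design point is the coupling: the user never touches the power settings directly, they just change contexts.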

So, let's throw a little bit of fuel onto this fire. KDE currently has 3 different interface configurations. The default configuration is called Plasma Desktop and offers a traditionally oriented desktop design configured for high resolution monitors. There is also a configuration referred to as Plasma Netbook, which is optimized for low resolution screens and low-resource hardware. The last is KDE Contour, an interface designed around touch-screens. The KDE Plasma-Active project also adds a UI layout and design guideline for touch-interface designed applications, referred to as Active-Apps. For the most part KDE is capable of switching between each of these different interface configurations while programs remain open and running. KDE achieves this capability by decoupling system functionality from the user-interface.

Okay, as of right now I do not think it is possible to use Activities to change through the Workspace interfaces. Nor do I think it is possible yet for applications to automatically be reskinned on an Activity Switch. For example, if you switch from Plasma-Desktop to Plasma-Contour while Amarok is running, Amarok will still be presented with the keyboard and mouse interface rather than an Active App interface.

Imagine for a second when KDE does gain these abilities, and what this could mean for hardware vendors. Sony, for example, could offer a Playstation Tablet backed by KDE. Sony implements the XMB as an Activity-bound Workspace interface atop KDE, and the user has a Sony Tablet experience while in tablet mode. When the user sits down and attaches their tablet to an external monitor, KDE detects the monitor, switches to an Activity with a Plasma-Desktop interface, and cranks the clockspeeds up for a desktop computer experience, all without interrupting any running applications.

This technology gap between KDE and any other interface is the stuff of nightmares for companies like Microsoft. KDE is the defining model of how the user-interface problems between hardware devices should have been approached to begin with. In the light of what KDE is doing? Metro isn't just pathetic, it's a complete and utter joke.

I can also make the problem even worse for Microsoft with just 5 words: KDE. Is. Operating. System. Neutral.

I can also turn this into a full on Cthulhu class nightmare with just two more words: Android. Compatibility.

Let me explain. The KDE Software Compilation is primarily designed as a desktop environment that runs atop GNU/Linux. It is also an open-source project and can theoretically be adapted to any operating system. On paper this means it should be possible to run the KDE interface atop operating systems built against Android/Linux or WebOS/Linux. This alone could be significant, as Android/Linux vendors such as Amazon are already creating and maintaining their own interfaces atop the Android Operating System. The downside here is that getting KDE up and running as a User Interface Environment on Android/Linux or WebOS/Linux would require large amounts of new code, and the KDE developers have been pretty clear that they have no desire to do that work.

The flip side is that Android itself is also an open-source project, and the structure of Android/Linux applications is well documented. Arstechnica actually has a pretty good article on this subject already. It is fully possible for operating systems that are NOT running Android/Linux to provide binary compatibility for Android/Linux applications. Where getting the KDE interfaces to operate atop a non-GNU/Linux operating system and kernel would be very difficult, getting Android/Linux applications running atop a KDE/GNU/Linux distribution is comparatively easy.

If you thought this was a bonfire already, let me add some more fuel, say, something like: HyperTransport.

Imagine for a second buying an ARM tablet loaded with KDE/GNU/Linux. You wander around town doing the normal tablet things with Android applications. You get home and you plug your tablet into a docking station. This docking station just happens to have a couple of HyperTransport links that now connect your tablet to, say, an AMD Piledriver processor and a RadeonHD-class graphics card. As your tablet shifts from the tablet display to your external monitor, the tablet syncs the data you've changed while in tablet mode to, say, a larger drive. Other programs, such as Valve's GNU+Android/Linux Steam client and Sony's Playstation Suite, sync with data stored on the larger internal hard drive, such as a list of installed games.

As the system finishes reconfiguring, the tablet goes from running atop an extremely power-efficient ARM processor and a limited graphics processor to running with an x86 processor and a gaming-class graphics card. From the user's perspective they haven't done anything but plug their tablet into a dock and wait a few seconds; now they still have everything they were doing on screen, but on a much more powerful computer with far more capabilities.

The kicker? Aside from ARM not running on a HyperTransport bus, KDE not having this interface-switching functionality yet, and the need for somebody to write a synchronization package to actually perform the data-sync, the rest is already possible today. HyperTransport already supports central processor hotplugging. In addition, GNU/Linux running atop multiple processor architectures is also a reality. Technologies such as OpenCL also remove some of the limitations of switching processor architectures.

Imagine for a second the market implications these types of technology advances have for companies like HP, Dell, or Sony. This is stuff that is not 5 or 10 years down the line. This is stuff that could be on the market in less than a year's time.

To reiterate what I said earlier: Android/Linux and IOS/Mach_BSD are indeed headaches for Microsoft. They've cracked the ribs of hardware vendors who have spent decades relying on Microsoft. Consumers are now more open to the concept of buying computing products that don't carry that magic Microsoft badge.

For what KDE represents, and what it is doing NOW, it's the competitor that Microsoft should be paying attention to.

Now, will hardware vendors realize the possibilities that a KDE/GNU/Linux system offers them? Will hardware vendors leverage those possibilities in products and make stuff that consumers actually want to buy?

My suspicion is yes. As of right now the real consumer backlash on Windows 8 is going to make the Vista backlash look like a drop in the bucket. Hardware vendors are likely going to be left scrambling to come up with answers... and KDE will be there waiting to help.

* * *

Update: I received this response from Aseigo

great blog entry; and thanks for the support. it's great to know others "get" what we're trying to achieve here .. and with the growing number of people and companies that are rallying around the technology and the ideas, i think we have a very good chance to be extremely successful in the coming years.. 


Wednesday, January 18, 2012

SOPA: A Partisan Fight.

No, I haven’t forgotten this blog. Like many of my other ventures over the years, such as Mepisguides, my lack of updates was driven by my desire to keep from letting my personal life interfere with my professional life. Now that I’m slowly getting my feet back under me I feel a bit more confident in posting again. 

The subjects this time are the infamous acts of legislation titled the Stop Online Piracy Act and the Protect Intellectual Property Act. Yesterday, January 18th, was the day that several prominent online sites protested the legislation by shutting down services. Over the course of the day 13 congressmen voiced their opposition to the legislation, with several notable sponsors of the legislation pulling their support. Not surprisingly, many of the congressmen making an about-face were Republicans, which did lead to this surprising comment from Arstechnica's Timothy Lee:

The partisan slant of the defections is surprising because copyright has not traditionally been considered a partisan issue.

Well, Timothy is wrong. The issues at hand with the proposed SOPA and PIPA legislation did not involve copyright. The issues at hand were those of personal liberty and personal freedom. I pointed this out on SVN’s Google+ page:

It should, but it probably won't. SOPA was written by people with no concept of personal liberty, property rights, or consumer rights. It was written with no concept of Due Process of Law, and no concept of representation. SOPA was forced into Congressional consideration by outright bribery and extortion by people who believed that they were inherently better than everyone else and thus entitled to do anything they wanted to in order to get their way.

Some of the core issues are really the same as those behind Net Neutrality. The organizations behind the RIAA and MPAA do not believe in the concepts of personal property or consumer rights. Those organizations do not believe that consumers can actually “own” a product. Case in point is the long history of lawsuits against television recording devices such as VCR decks, Tivo, and other video capture products. For decades the organizations that back the RIAA and MPAA have dumped billions of dollars into methods that “protect” the content they sell or broadcast from being copied.

The actions of the members of the RIAA and MPAA reflect strong liberal-democrat and socialistic tendencies. Just as the rights of the "state" supersede the rights of the citizen, the "rights" of the RIAA and MPAA supersede the rights of consumers. Legislation such as SOPA and PIPA furthers this agenda, restricting the rights of the citizen while giving organizations such as the RIAA and MPAA more power to further their agendas.

The reality of the situation is that the member organizations of the RIAA and MPAA are still stuck on ancient business models. Broadcast Television, for example, still works on the concepts of advertisers paying for breaks in the storyline of the program. However, more and more consumers are getting their television programs through time-shifting devices such as TiVo; or through streaming services such as Hulu and Netflix. Rather than try and explore new business models the vast majority of television production companies still try to make their shows work around a business model dating from the time of Jack Benny and George Burns.

The Commercial Content industry loves to blame piracy for their financial woes, but piracy really is not the problem at hand. The reality is that people who pirate content are going to pirate content. Not to put too fine a point on it, but the saying “Locks keep out only the honest” is a Jewish proverb dating back to Biblical times, as in thousands of years ago.

The problem the Commercial Content industry faces is that piracy is an increasingly attractive alternative to buying content. Let me put this in real terms for myself. If I rip a DVD using Handbrake, I no longer have to put up with that mandatory selection of trailers or promotional content that normal DVD players cannot skip. Additionally, if I have a large series of DVDs, such as Hogan's Heroes, I can store entire seasons on a single media server and just browse through my shows as I please. However, the Commercial Content producers do not believe that I have the right to consume my content as I see fit. They believe they have the right to dictate how I consume my content and what I consume my content with. The RIAA and MPAA do not have the right to make those determinations. Period. Stop.

* * *

Looking ahead a lot of analysts are now wondering what the next move on the part of the RIAA and the MPAA is going to be. It is my opinion that they need to be short-circuited and shut down.

The reality is this: theft is theft. There are already numerous laws on the books, both at the federal and state level within the United States, that clearly spell out how theft should be handled. There is no need for special legislation to deal with "Online Piracy." The FBI proved this point by taking down the sites in question using existing laws, with absolutely no need for additional legislation.

This goes back to what I said earlier. Existing laws already cover protection and enforcement of Copyright. The issue at hand was never Copyright. SOPA and PIPA contained no provisions or actions that would enforce copyright that existing laws did not already accomplish. SOPA and PIPA were always, from the day they were bribed into existence, about limiting consumer rights and giving more power to entities who need to be shut down and dismantled.

What we do have a -need- for is legislation that states that consumers have rights. We do need legislation that says it's legal for consumers to copy television content that they paid for to any device they want to, in any format that they want to, on any operating system that they want to. We do need legislation that says it's legal for consumers to copy music or video content that they have paid for to any device they want to, in any format that they want to, on any operating system they want to. We do need legislation that says it is legal for consumers to run or copy video game content that they have paid for on any device that they want to, in any format that they want to, on any operating system that they want. We do need legislation that says it's legal for consumers to resell, gift, or trade content they have purchased to anybody they want to.

What we -need- is legislation that makes Digital Rights Management that results in Digital Rights Removal illegal.  

The reality is this: getting rid of Digital Rights Management schemes that limit a user's freedom to own their content at a Federal and/or State level removes one of the largest barriers between Pirated Content and Purchased Content. If I no longer had to deal with DVDs and Blu-Ray discs full of unskippable trailers, I'd be more likely to pick up a legal copy than go and get an illegal copy that does not contain all of that extra crap. By the same token, I'd be more likely to buy a Blu-Ray movie with a video resolution of 1920x1080 if I were able to watch that video without having to purchase equipment that can pass HDCP signals.

Another side effect of making such DRM schemes illegal would be a crash in costs. To speak for myself: even with a paycheck I was not buying a whole lot of video games, because I just was not willing to spend $60 a game. Looking at game sales, I'm not the only person who's skipping "hot new release" titles because they are a bit on the ludicrous side of expensive.

The economic rule of thumb here is pretty old: the lower the price, the more people who can afford your product, and thus the more people who are in a position to buy it. Really. Not that hard. It's not. You don't need to be a rocket scientist to wrap your noggin around this economic principle.

Cratering prices on content, and making that content easier to access and easier to use, would go a long way towards making people more likely to buy that content.

Now, is the United States in a position to hit the RIAA and MPAA below the belt and dismantle their anti-consumer and anti-personal-rights war? I'd like to think yes. The SOPA and PIPA protests have highlighted just how partisan the core arguments actually are. With an upcoming election year it may be possible to get candidates into office who can push through pro-consumer legislation and return to consumers their rights to use their purchased property as those consumers see fit.

* * *

Now, as an addendum, I do want to single out services that have implemented non-restrictive Digital Rights Management, notably the Steam network developed by Valve Software and the Desura network.  

Both of these services leverage Single-Sign-On technology, which requires a user to sign in and certify their account against a master server. From that point users can access content distributed through the Valve Steam and Desura networks on any platform that supports the content Valve and Desura offer through those services. In addition, both services have offline implementations that store the user's credentials and allow consumers to access their content even when no internet access is available.
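The offline mode both services provide can be sketched in generic terms: cache a verifier from the last successful online sign-on, then check against that cache whenever the master server is unreachable. The sketch below is a hedged illustration of that general pattern, not Valve's or Desura's actual protocol; every name in it is invented, and real services use signed, expiring tickets rather than a bare salted hash.

```python
import hashlib
import hmac
import os

class MasterServer:
    """Stand-in for the certification server; purely illustrative."""
    def __init__(self, accounts):
        self.accounts = accounts  # username -> password

    def verify(self, username, password):
        return self.accounts.get(username) == password

class Client:
    """Toy single-sign-on client with an offline fallback."""
    def __init__(self):
        self.cached = None  # (username, salt, digest) from last online login

    def online_login(self, username, password, server):
        if not server.verify(username, password):
            return False
        # Cache a salted digest so the user can be re-checked offline.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self.cached = (username, salt, digest)
        return True

    def offline_login(self, username, password):
        # No cached credentials means the user never certified online.
        if self.cached is None or self.cached[0] != username:
            return False
        _, salt, digest = self.cached
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)
```

The design point is the one the paragraph above makes: the master server is only needed once, to certify the account; after that the stored credentials let the consumer reach their content with no connection at all.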

Now, neither service is perfect, and I'll go more into what could be done with such services as I actually write about them.