Sunday, September 30, 2007

right thing to do would be to obtain an Nvidia card...

As a journalist who runs a Linux support site, I sometimes find myself making purchases or asking for hardware I personally wouldn't touch with a ten-foot pole. As this blog notes, I have severe issues with Intel, yet in order to provide support for the Intel platform I need to have Intel hardware on hand.

I also have severe problems with Nvidia. I have little to no respect for their development team. However, I do have examples of all of their GPUs aside from the FX lineup and the 8x00 lineup. Well, with the launch of Mepis 7 around the bend, it is going to become more critical to have an Nvidia 8x00 series card, since many users are still somehow sold on the concept that Nvidia has better drivers than AMD/ATi.

Nvidia has also recently managed to make me eat my words. The most recent driver update for the Legacy 96xx driver set actually fixed years-old problems with GeForce4 GPUs. Source: Phoronix. Nvidia has also made a concerted effort to improve its driver release rate, and I see more legitimate bugs and issues being addressed in each readme. A recent post on OrangeCrate indicated that a recent Nvidia driver was successfully rendering at 1440x900.

Which leaves me in an odd position. I don't WANT to buy an Nvidia card... much the same as I do not WANT to buy an Intel Core processor... but it looks like the right thing for me to do is pick up the hardware.

Either that or improve my "asking for cash or donation skills" to the point of somebody shipping me a card. Doubtful. I ask for money about as well as Microsoft writes server software.

(oh come on, I had to take a Microsoft pot shot in there SOMEWHERE).

Open-Licensed / Closed-Licensed

In the last post I used the terms Open-Licensed and Closed Licensed. I've also in the past used the terms gratis and non-gratis software. Why is that?

Well, the primary reason is the confusion surrounding Free Software. Free can have several meanings. It can mean Free as in Freedom. You can do what you want. Or it can mean free as in price, such as somebody tossing you a beer at a ballpark that you didn't pay for.

So, I decided to avoid the term Free Software and clarify the intent.

Open-Licensed software is software that is under an Open-Source license. The source code is available to examine.

Closed-Licensed software is software that is under a proprietary license and the source code is not available.

Gratis-software is software that comes without a cost. You don't use your wallet to pay for gratis-software.

Non-gratis software is software that comes with a price tag. You have to pay to use the software.


Just because software is non-gratis does not mean that it is not Open-Licensed. For example, the GPLv2 allows a distributor to charge for the software, so long as the source code is made available to those who receive it.


Now, I realize that trying to get other writers to stop using the term Free Software is probably as pointless as asking Microsoft to release WinNT5 under the GPLv2. It just ain't going to happen.

Qt and GTK: impact.

Some time ago a newcomer to Linux commented about the apparent vile hatred and rivalry between Gnome and KDE, and wondered why the two groups couldn't just get along... wasn't Microsoft a better target to aim at?

As is well known, I am not a Gnome person. I'll run IceWM, I'll run a 'Box, I love XFCE, and it's my opinion that KDE is hands down the most user-friendly desktop available. As seen in this blog, I have a tendency to take all the potshots at Gnome that I can. Keeping that in mind, I am probably NOT somebody you want to talk to if you want an objective and unbiased view of the rivalry between KDE and Gnome. However, the question needs to be asked... can I actually defend my position against Gnome? Am I blinded by past events to the point that I am being unreasonable?

Well, the answer isn't as clear cut as the Intel answer was... which is why the Intel post ran first.

To start with, let's go over some history. Gnome was built with GTK, the GIMP Toolkit, and was originally created in response to Trolltech's Qt toolkit.

Thing was, Qt was not originally open-licensed software. It was closed software, which made the GNU developers nervous. Trolltech used the KDE desktop to promote Qt. If you wanted to know what Qt could do, you used KDE.

According to Wikipedia, GNU developers started two projects back in 1997: one to create a new desktop using a different toolkit than Qt, and one, called Harmony, to create an open-licensed set of tools identical to Qt.

Again according to Wikipedia, Trolltech re-licensed Qt under the QPL in 1998 and established the KDE Free Qt Foundation to ensure that if Trolltech went bankrupt, was bought out, or whatever, Qt would continue. Wikipedia goes on to state that Trolltech re-licensed Qt again in 2000, this time under the GPL.

That's where the Ubersoft comic came in: http://www.ubersoft.net/comic/hd/2000/09/cant-win-losing

Trolltech had done everything that Richard M. Stallman and other GNU developers had demanded. Trolltech had played by Open-Source rules, and yet GNU developers still railed against Trolltech and Qt.

So... why does this hostility continue today?

Well, think about it.

Qt was originally closed-licensed software and came with a cost to use. GTK has been open-licensed software from the start. There was a decided philosophical and business split between Linux and Unix desktop interface developers.

Business owners who were more practical in nature were more likely to spend the money for a Qt license.

Hardliners for open-licensed software went with GTK+.

As I see it, Gnome developers have a real problem with communication. The group has a reputation for being a closed clique. If a non-clique developer submits a patch to clean up code or enable a new feature, Gnome developers are known to reject it. In recent years the clique nature has gotten so bad that Linus T. himself went after Gnome on public mailing lists. Okay, so Linus doesn't know chemistry, biology, or physics, but he DOES know coding.

KDE developers? Well... I haven't found one that hasn't at least responded to an email. I don't SEE reports about new developers having problems submitting patches to KDE projects. I don't SEE reactionary rejections just because a coder wasn't in some "in-clique."

Now, don't get me wrong. I'm not saying the GNU hardliners were wrong to pursue an alternative desktop... if Gnome had NOT been created, Qt probably would never have been open-licensed.

***

Now, why do I, personally, rail against Gnome so much? Isn't it the corporate choice for Novell, Red Hat, and Ubuntu?

Well, I'll be the first in line to admit that Gnome was the first Linux desktop I tried, on some ancient version of Red Hat. I couldn't get anything done. When Ubuntu came out after Mepis, I again tried the Gnome desktop... and again found myself unable to get any work accomplished. The user interface seemed to fly in the face of any sense I could make of it.

Now, okay, fine, somebody can easily claim that I just don't know a good desktop interface. Yeah, okay, let's examine that idea. I offer up systems through MepisGuides.com that come pre-loaded and pre-configured with XFCE, which is also GTK-based. I've written guides on how to install IceWM. I have installs of AntiX with FluxBox (I think). I have various consoles, such as the Sega Dreamcast, Playstation 2, Gamecube, Wii, and Playstation 3, all of which have their own "system" interfaces. I wrote guides for Cedega that explained its interface. I have a running install of Vista RC1 and a running install of Vista Home Premium. I used to have an install of SymphonyOS, and for a while I had the OLPC LiveCD beta running.

I have no problems switching from one to the other and running through various interfaces.

Yet I have problems with Gnome. Something is seriously wrong when Gnome typically makes less sense to me than Vista.


*********

So... this answer isn't as clear cut. It is mostly opinion, and very subjective. Hey, some people use Gnome and like it. Fine. I don't.

Does this mean that I'll stop saying that Gnome developers have as much business making a desktop interface as I do? Nope. It would take a lot of "groveling" from Gnome developers before I'd think about revising my opinion. Yeah, so I'm a little harsh. Call it payback for the years of potshots at KDE.

Friday, September 28, 2007

Amazon to do what Apple won't?

Hello from Amazon.com.

If you use Linux, you can currently buy individual songs. A Linux version of the Amazon MP3 Downloader is under development and will allow entire album purchases when released.

Our MP3 files contain no digital rights management (DRM) restrictions, are provided in an industry standard MP3 format, and should be compatible with most systems capable of reading MP3 audio files.

What would it take to convince me Intel is Open-Source Friendly...

Currently I'm writing an entry about Qt versus GTK and how the two different toolkits have an impact on the KDE vs. Gnome hostility. Part of the entry was driven by an Ubersoft comic, located here: http://www.ubersoft.net/comic/hd/2000/09/cant-win-losing

The basis of the comic is Richard Stallman talking down to Trolltech, makers of the Qt toolkit. After reading it, it occurred to me that I might be acting like Richard Stallman in regards to Intel. Intel has done a lot for Open Source lately, so can I continue my statements that Intel is Open Source hostile? Or do I have blinders on that prevent me from being objective about Intel?

Well, let's see.

For starters, Intel has open sourced their wireless drivers. Given that Intel wireless chips are generally of high quality, with good signal range and transmission rates, this is a good step. Intel has also open sourced their graphics drivers... sort of a middling step, considering that most game developers get obscene when asked about support on Intel GPUs. Phoronix had a set of slides from IDF that painted Open Source driver development in a positive light.

So... what has Intel done that is anti-Open Source?

Well... we have TPM, or the Trusted Platform Module, which can effectively lock users out of their own computers.

We have HDCP, which was developed by Intel, and which effectively locks users out of their video and audio content.

We have the Intel Azalia audio specification, which has seen no code releases or support from Intel. The current developer working on the Intel HDA audio driver is reported to have had significant hair loss due to the problems with the specification.

We have EM64T, which was a deliberate split in the x86-64 specification.

We have EFI, or the Extensible Firmware Interface, which replaces the traditional BIOS and can also be used to lock users out of their own hardware.

Then last, but probably not least, there is LinuxBIOS / OpenBIOS. AMD has made significant code contributions to the LinuxBIOS project. The supported motherboard list is filled with AMD motherboards, and as far as I could read, no Intel boards are operational.


******

So, what would it take to convince me that Intel is serious about Open Source and that Intel is not Open Source hostile?

Well, get rid of TPM. Stop including it on motherboards. Make it clear to Microsoft that Intel will not assist in removing end users' control over their own hardware.

Open Source the HDCP specification. Make it public so that users can access their content. Tell the movie studios and RIAA executives that Intel will not help in their attempts to remove end-user control of content.

Provide working two-channel user-loadable firmware under a GPL-style license, firmware examples for higher channel setups, and driver coding examples for Azalia-spec audio devices.

Stop the x86-64 knockoff. Support the x86-64 standard in full and quit creating chips that have issues running x86-64 compiled code.

Open the source code and specification to EFI.

Support LinuxBIOS / OpenBIOS. Contribute code and BIOS examples for existing motherboards, and ensure that EFI is well supported.

These are the steps Intel would have to take before I would consider them Open Source friendly. Until these steps are taken? My statement stands. Intel is Open Source hostile, and end-users need to realize this when buying hardware.

Now, I'm not saying go out and buy AMD only, but there are not a whole lot of choices in the x86 processor market, although IBM's PowerPC-based Cell would be desirable.

Why Walt Mossberg discredited himself

Recently, in a chat with an acquaintance, talk turned to Dell's Ubuntu efforts. The subject of Walt Mossberg, writer for the Wall Street Journal, came up, and the "kiss of death" article was mentioned. I shocked my associate by simply stating the article was invalid and that Walt Mossberg had proven in that article that he didn't have a clue about technology, and certainly didn't deserve to be writing for any newspaper or in any journalistic capacity relating to technology.

Shock, gasp, and recoil in horror. Why do I have such a low opinion of Mr. Mossberg?

Because he didn't use Mepis Linux or mention it at all.

Let's put this in baseball terms so the impact is felt a little more clearly. If you asked a self-proclaimed baseball expert to talk about the top baseball franchises, you would be shocked if the expert did not mention the Atlanta Braves in a run-down of the all-time greatest franchises. Sure, the Atlanta Braves won the World Series only once, but outside of the 1994 baseball strike, the Braves won their division title every single year from 1991 to 2005. The Braves have one of the best farm systems in the entire league, and it's not uncommon to hear about some rising star or returning player who was given a chance by the Braves. In a recent game against Houston it was mentioned that one of the Houston pitchers, who had started in the Detroit Tigers organization, had his career ended by a surgery. It was because the Atlanta Braves gave the guy a chance in the AA-Class league that the pitcher was back in the majors.

The Braves also gave John Smoltz a chance after his surgery, running the legend as a closer, then bringing Smoltz back as a starter. In baseball, that is frankly unheard of.

There is no way around it: the Atlanta Braves are one of the greatest franchises in the history of baseball, and any expert would quickly discredit themselves by saying otherwise.

So... Mepis Linux... It's the same as the Atlanta Braves... Mepis Linux hasn't hit the ball out of the park that often... It hasn't retained a #1 position on Distrowatch for any significant length of time. It isn't reported on by Linux.com, or any other IT journal on a regular basis.

Yet... what a lot of people don't realize is that Mepis Linux is one of the oldest LiveCD-based distributions around, and that many of the features only now creeping into other LiveCD distributions have been in Mepis Linux since 2003.

For example, automatic driver configuration. One of the strong points of Mepis Linux back in 2003 was that it enabled quick and painless installation of Nvidia and ATi drivers through a GUI front end. Some versions of the Mepis Linux 3.x series shipped with both Nvidia and ATi drivers already on the disc, and they could be installed as an option during the main installation of the operating system.

Another example is resolution changing. Back in 2003, Mepis Linux allowed setting a resolution FROM the LiveCD itself on boot. Mepis Linux also enabled changing the resolution from the LiveCD with an integrated tool.

Another example is the Mepis OS Center's User Tweaks, which enabled easy cleanup of log files and cache files.

The list goes on and on. All of the major features that other distributions are only now offering... Mepis Linux had literally years ago.

Even the "new" news about automatic driver updates coming. Just out of wondering, what is APT? Swiss Cheese? Debian has had "automatic" and "userfriendly" driver upgrades for far longer than any other distribution. That isn't news, that is stating the obvious.

The fact is, Mepis Linux was focused on making a user-friendly desktop before the market was even there. When Mepis came on the scene you had... Knoppix... and that was about it.

Now, a lot of people get confused about Mepis's changing of the repositories from Debian to Ubuntu, then back to Debian. What a lot of users don't understand is that using Debian or Ubuntu repositories was incidental to what Mepis Linux was doing, and a lot of users mistakenly refer to or treat Mepis as a development offshoot like Kubuntu.

Mepis Linux not only uses a different kernel, it uses a reconfigured monolithic kernel with additional device drivers and kernel modules in order to support a wide range of hardware. Mepis Linux also uses the toolchain and sources from one distribution to provide the bulk of a usable operating system, while focusing on tuning specific portions of that operating system.

The hoped-for result of using a particular toolchain is that anything else built with that toolchain will be compatible. When Mepis was built using sources provided by Ubuntu's Dapper Drake project and the Dapper Drake toolchain, the hope was that all programs compiled using that toolchain would be compatible with both Dapper Drake and the Mepis 6.x series. That... isn't what happened.

With the Mepis 7 series, the Debian Etch toolchain is now in use, which should enable all packages compiled against the Etch toolchain to be binary compatible with both Debian Etch and Mepis Linux. So far... that is what is happening.
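
To put the binary-compatibility idea in concrete terms: one crude check (and I'll stress this is my own illustrative sketch in Python, not anything the Mepis team actually ships) is to compare the GLIBC symbol versions a binary asks for against what the target system's libc provides. The binary path below is hypothetical.

    # Crude binary-compatibility check: a program built against one toolchain
    # will run on another system only if that system's libc provides every
    # GLIBC_x.y symbol version the program references. Assumes a Linux host
    # with binutils (objdump) installed.
    import subprocess

    def glibc_tags(path):
        """Collect the GLIBC_x.y version tags in a file's dynamic symbol table."""
        out = subprocess.run(["objdump", "-T", path],
                             capture_output=True, text=True, check=True).stdout
        return {tok.strip("()") for line in out.splitlines()
                for tok in line.split() if tok.strip("()").startswith("GLIBC_")}

    needed = glibc_tags("/usr/bin/example-app")  # hypothetical Etch-built binary
    provided = glibc_tags("/lib/libc.so.6")      # the libc on the target system
    missing = needed - provided
    print("compatible" if not missing else "missing versions: %s" % sorted(missing))

If the Etch-built binary needs nothing the Mepis libc doesn't provide, it should at least link and start. That's the whole point of sharing a toolchain.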

By maintaining binary compatibility, with custom optimizations on top, Mepis Linux offers a best-of-breed distribution. That's why Mepis Linux was offering all of these "new" features years ago. Back in 2003, Warren wasn't worried about providing Apache, Python, or other tools. As long as Mepis was binary compatible with the sourced distro, the sourced distro could take care of items like those. What Warren and the current Mepis development team worry about is this:

How to make a user-friendly Linux desktop.

And it shows. The most common comment seen on the Ubuntu forums was "It works in Mepis!" The constant flood of reports coming in along the lines of "I tried such-and-such hardware and it worked" was certainly encouraging. The support of Intel's IDE-lacking chipsets was another milestone that placed Mepis Linux ahead of the crowd.

It also shows in the community. Mepis Linux was the first Linux distribution to create an entirely community-based forum separate from the main distribution, MepisLovers. Mepis Linux is also the only distribution with a site dedicated to visual guides, Mepisguides.com. Mepis Linux has also seen the creation of international sites for Italy and France, and a site called Mepisimo (I think it's Spanish?).

So, the end result is, any tech expert who takes it upon themselves to try, test, or judge a user-friendly Linux and does NOT cover Mepis Linux... discredits themselves.

So, whether or not the Wall Street Journal or Walt Mossberg likes it, by skipping Mepis, his writings carry no credibility whatsoever.

Now, I am purposely trying to avoid stating the obvious. Gnome developers have no business designing a desktop interface. That's my opinion. It is also my opinion that it doesn't matter who reviews it, Ubuntu is always going to fail on being user-friendly because it uses Gnome. Quite frankly, I think Dell did make a mistake in not choosing a KDE-based distribution.

Personally, yes, I would have liked to have seen Dell pick up PCLinuxOS or Mepis Linux, which are both excellent user-friendly distributions. It is also my opinion that it isn't too late for HP to avoid making the same mistake of going with a Gnome-based distribution.

Thursday, September 27, 2007

How well is Linux doing? Just ask Microsoft

Trying to determine how well Linux is doing on the desktop is a difficult task. Unlike competing operating systems, the majority of boxes with Linux installed didn't come from a major retail chain. Yes, this does include the Lindows/Linspire boxes from Wal-Mart online.

Yes, the market is growing. Yes, OEMs are jumping on the desktop Linux bandwagon, starting with Dell, recently Lenovo, and now HP entering the ring.

The desktop market is doing so well that Microsoft finally had to admit defeat in the desktop market.

Microsoft... admit defeat? Wasn't there some law that those words could never be used in the same sentence? Well, the words themselves never were. What Microsoft did was allow licensed OEMs to start selling Windows XP again.

The fact is, Vista has been getting tromped by desktop-oriented Linux distributions. Nobody wants the OS, and there has been a relatively large backlash against Microsoft for continuing to push Vista. Hard-core PC gamers have been hit pretty hard, and for once consoles are looking very attractive, with at least one console offering keyboard and mouse support.

This, though, isn't about PC gamers picking up and moving to Linux and the Playstation 3.

Thing is, Windows NT5 was GOOD ENOUGH for the average user. The layout wasn't very confusing, the performance was adequate, and the long-term stability of the OS was approaching data-center uptime requirements. Okay, granted, Windows NT5 has all the security of a prison made of Swiss cheese, but home users generally haven't cared much about that, if my experience in computer repair is any indication.

Releasing OEMs to start selling Windows XP again, and allowing users to crossgrade down to Windows XP from Vista, is a direct move to cut off interest in Linux on the desktop. I don't think there could be any clearer sign that Linux is on the move as a choice for desktop users.

Now, the question is, will Microsoft finally release Service Pack 5 for Windows 2000 Pro in order to bring its two NT5 branches back into sync, or is Microsoft going to continue to screw over the few million users of the first version of WinNT5...

Saturday, September 22, 2007

yes... ads.

you might have noticed that an ad has appeared at the top of the page...

While I have stated in the past that I am not going to place ads on MepisGuides.com, I decided to go ahead and try running ads through Blogger. See how well they... "work."

I'm also still debating on whether or not to try ads on the in-development LinuxGuides pages.

AMD Triple Core : more thoughts

Alright, so what exactly do I think of the Triple Core processor... is it a good idea?

Well, my gut reaction is no. The processor matrix from AMD currently has at least 8 active or semi-active sockets (754, 939, 940, AM2, AM2+, AM3, Socket A, Socket F). Many of these sockets have similarly named processors under the major brandings of Athlon64, Sempron, AthlonX2, and AthlonFX.

However, with Quad-Core, AMD is introducing a new brand name, Phenom, which will also cover triple-core processors. If AMD reserves Phenom for quad and triple cores only, then there will be a clear product separation.

Going beyond the gut reaction, this is a surprisingly good move on AMD's part, since it's one that Intel can't match. Intel is still reliant on the aging Front-Side-Bus architecture and has no direct-connect system on the market. The result is that Intel can only do dual-core and quad-core chips on their architecture.

AMD, however, has HyperTransport, which is a Direct-Connect architecture, and also has the memory controllers on the die of the processor itself. The result is that processors don't have to fight for memory allocation, and can have direct access to their own memory sockets.

Developing cheap motherboards for Triple-Core will also be fairly easy: just remove one of the memory sockets and its traces from an established quad-core motherboard. On mid-to-high-range motherboards traced for quad-core, the triple core just slots right in.

The result is that vendors shipping SMP-enabled Linux distributions will be able to offer higher-performing systems right "now" (now being relative to Triple-Core reaching OEMs) on AMD systems than on Intel systems. As SMP-aware consumer software comes online, the triple cores should start to outpace their dual-core predecessors, even on Microsoft products.
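
To put "SMP aware" in concrete terms, here's a toy sketch (my own, in Python; no shipping product works exactly like this) of the pattern that matters: size the worker pool from whatever the OS reports, and the same program automatically uses 2, 3, or 4 cores.

    # SMP-aware work scheduling: the pool is sized from the reported CPU
    # count, so a triple core gets three workers with no code changes.
    import os
    import time
    from multiprocessing import Pool

    def burn(n):
        """A CPU-bound stand-in for real work."""
        total = 0
        for i in range(n):
            total += i * i
        return total

    if __name__ == "__main__":
        cores = os.cpu_count() or 1
        jobs = [2_000_000] * 12
        start = time.time()
        with Pool(processes=cores) as pool:
            pool.map(burn, jobs)
        print("%d cores: %.2f seconds" % (cores, time.time() - start))

Run that on a dual core and then on a triple core, and the wall time should drop roughly in proportion. That's the scaling I'm talking about.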

My analysis is that Intel is going to have to shift strategies again, and very quickly. Intel already had to admit that AMD had the right idea about IPC (instructions per clock) being more important than raw speed. Now Intel is faced with the possibility of leaving a market segment unanswered due to not having a Direct Connect-like architecture and an on-die memory controller.

While I feel sure that Intel is quite capable of shoving their memory controller onto a processor die, they'll run into a rather large problem: their power consumption is going to skyrocket to accommodate the inclusion of the memory controller.

There is also the problem that, whether or not Intel wants to admit it, their Direct Connect-like architecture is still 2 to 3 years out from mass production. I'm not going to say that Intel's only option is to join the HyperTransport Consortium, but it's one of the few options I see that will allow Intel to remain competitive.

The largest question then is whether or not AMD can feed channel demand for Triple-Core processors... We'll know the answer to that in time.

On AMD Triple core

This comes from TeamAti's forum, and consists of my posts on AMD's Triple-Core strategy. Basically, one of The Inquirer's writers had written about AMD's Triple Core, and once again showed that nothing on The Inq should be taken without a tub of salt. Part of The Inq's article compared coding techniques on the Xbox 360 to coding techniques on an AMD triple core. My first response to the thread consisted of stating that I would have figured Triple-Core would have more to do with Fusion than with anything else.

****************

I don't see the connection between TC and Xbox 360...could someone explain this to me? - Shinigami


I don't think there is one. Specifically speaking, the Xbox 360 processor is based on PowerPC and uses technology derived from the Cell multiprocessor architecture, namely the PPE cores.

While AMD and IBM do share technology, such as SOI and APM, they generally work together on x86 and x86-64.

The other thing to keep in mind is that the report came from The Inquirer, and the site has an earned reputation for being less than accurate. Both The Inq and The Register are known for going out of their way to put excessive spin on a story... which figures, as they were started by the same people. Sometimes I enjoy the over-the-top reporting and speculation, but it helps to keep a keg of salt on hand when reading.


I also believe that the report is factually wrong about developers already working with a tri-core system. The fact is, most x86, x86-64, and PowerPC software is coded to make use of any threads available. Hence the reason SMP-enabled applications generally scale in performance as more hardware threads become available. Keep in mind that both Intel and AMD have been offering systems with densities of 8 to 32 processors for several years, and both, I think, can be had in 64 to 128 processor densities.

Specifically, this portion:

This is a fact, since there are far more applications for Xbox 360 than native quad-core or more multithreaded apps

That is an outright fabrication on the part of The Inquirer. There is a difference between software that is SMP aware and software that is optimized for a specific number of available processing threads. The statement also ignores Linux on the Playstation 3, which is a whole other matter entirely.

The problem is, much of the software built to run on the Xbox 360 is also being cross-built to run on the Playstation 3 and on x86 / x86-64. There really aren't a whole lot of optimizations available for triple-core PowerPC alone. There are, however, a lot of optimizations that make applications SMP aware and able to appropriately manage the available threads on each platform.
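
The distinction is easy to sketch in code. A hedged, hypothetical Python example (just to illustrate the pattern, not anyone's actual game code): software "optimized for triple core" hardcodes the topology, while SMP-aware software asks the platform and adapts.

    import os
    from concurrent.futures import ThreadPoolExecutor

    XENON_TUNED_WORKERS = 3                 # hardcoded for one fixed console CPU
    portable_workers = os.cpu_count() or 1  # SMP aware: adapts to what is there

    # The portable version runs the same on a dual core, a triple core, or a
    # 32-way server; only the degree of parallelism changes.
    with ThreadPoolExecutor(max_workers=portable_workers) as pool:
        results = list(pool.map(str.upper, ["physics", "audio", "ai", "network"]))
    print(results)

Cross-platform titles have to be written the second way, which is exactly why "Xbox 360 developers are already coding for triple core" is an empty claim.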


**********

What does this have to do with 360? The 360 uses an IBM triple core. Are they going to start using AMD because the 360 has an ATi GPU? That would be a slap in the face to IBM then. - _[TeamATi]


Again... PowerPC... x86-64. They are NOT the same architecture. I don't think AMD has any hand in PowerPC development. Microsoft could switch, yes, but the change from PowerPC to x86-64 would mean a completely new bus architecture, as well as a completely new AMD memory controller to handle the memory in use.

Tri-core, is simply a Barcelona Quad-core, with one core disabled by a laser. This enables AMD to use chips that had only 3 cores operational. - ColonelCain


Sorry. That doesn't make a whole lot of sense, the reason being AMD's A.P.M. (Automated Precision Manufacturing) technology. AMD is fanatical about yield quality. Sure, there are going to be some chips going through that have a fourth-core failure... but enough chips to make up an entire line-up?

Not out of AMD's fabs. Intel's fabs, oh yeah, I'd believe that story in a heartbeat.

As I understand it, with APM 3.0 a couple of years ago, when Fab 36 was coming online, AMD already had wafer-level control during fabbing (the process of making the chips) and was beginning to work on die-level control. Given that APM development hasn't stood still, there is reason to believe that AMD's current version of APM gives them complete control over the die during fabbing.

The fact is, low-end processors sell more than high-end processors. For every AthlonXP that went out the door, there were probably 1.5 to 2 Durons going out the door... same thing with the Athlon64 and Sempron... and the Intel Pentium and Celeron.

From a top-down perspective, low-end parts in processors, graphics cards, hard drives, memory, power supplies... and everything else for that matter... make up the bulk of actual retail and OEM sales.

Given that Quad-Core is being positioned as a premium product, there are going to be quite a few more dual-cores being made than quad-cores, and sales history would indicate that Triple-Cores would sell more than Quad-Cores.

Ergo, AMD would either need a seriously high failure rate on their quad-core chips in order to fill the Triple-Core market... or the triple-cores are built from the ground up.

My opinion is that the triple core is an empirical test to ensure that Fusion will work properly. Triple Core will give realistic information about power consumption, heat output, and voltage control. At the same time, it frees up room to start adding in a Fusion GPU.

AMD's history indicates that they plan ahead, and the idea that Triple Core is a setup for Fusion is a lot more in line with AMD's past than the line that quad-core yields are so bad as to generate an entirely new line-up.



**********

Surprisingly, after these posts were made on the forums, Kyle over at HardOCP came to the same conclusion I did about yield quality... AFTER talking with AMD directly, which I hadn't.

HardOCP Editorial

However, Kyle did think of something that I hadn't. Intel's semi-quad-core processors are known to be poor overclocking choices, and Barcelona wasn't looking any better from the initial reports that I saw.
"Would AMD rather sell a 2.5GHz Phenom Triple Core or a 1.8GHz Phenom Quad Core when hardly any piece of desktop software in the marketplace can actually utilize more than two cores?" - Kyle

Friday, September 21, 2007

NCSoft: Open the Source code to Auto Assault?

NCSoft is one of my... favorite... developers. They aren't afraid to try new methods of distributing a product and receiving payment for it. Guild Wars is an astounding success, and has easily proven that large-scale fantasy MMOs don't require a $15-a-month charge. City of Heroes / Villains has proven that games don't have to be fantasy-based to pull in new players. Dungeon Runners is well on its way to proving that older technology put out for free or cheap can turn a profit on extended service charges. Then there is the upcoming eXteel, a favorite of mine from a previous E3, which is a mech combat game similar to Virtual On... if the final release lives up to the demo, eXteel could draw in shooter fanatics.

Looking over NCSoft's portfolio, there haven't been that many losses...

However, as Nintendo has the Virtual Boy... NCSoft has Auto Assault.

Now, I participated in the beta for Auto Assault. I liked it... I just didn't like it enough to buy or continue to play the game, especially since I was paying for and playing both Planetside and City of Heroes.

As I understood from people who went on to play the final version of Auto Assault, the game attempted to merge traditional RPG elements into a game whose basic concepts were go fast, shoot stuff, break stuff. Auto Assault went on with a dwindling player base, and the servers were finally shut down.

NetDevil is going on to work on a Lego-based MMO. Could be fun; they certainly proved they could design an MMO with Auto Assault...

but, it seems kind of a shame that Auto Assault was put out to pasture.

What if, though, it didn't have to be like that? What if NetDevil and NCSoft could recoup some of the costs of making the game, win a major public relations coup, and not have to worry about hosting or continuing to develop Auto Assault?

What if NCSoft were to take a gamble on a different development model...

Say... release Auto Assault under GPLv2.

The setup is fairly simple. Release the executable server and client code under the GPLv2. Assign the copyrights to NCSoft. Trademark the Auto Assault brand name and artwork.

So, why GPLv2? Why not a different license? For starters, the GPLv2 allows for the financial sale of a product. The binary product itself does not have to be given away for free; any charge can be associated with it. The source code itself can be made available only to clients, those who have paid for the product. The GPLv2 also means that anyone who distributes a modified version has to make their source code changes available, so the changes can flow back to NCSoft.

The result is that NCSoft can still put out a retail box or a binary download for a certain amount of money. The source code is only made available once the product has been purchased.

The trademark and the copyright assignment also protect NCSoft from somebody else taking the code once it is purchased, forking it, and making another game out of it.

If somebody comes up with a different version of Auto Assault, say one that removes the RPG story elements and just leaves the game at its core elements of go fast, shoot stuff, and break stuff, that version cannot be sold or distributed as Auto Assault. However, the new version must acknowledge Auto Assault as the basis for the code, and all of the source code changes have to be made available under the same license, so NCSoft can fold them back in.

Opening the Source code would also give developers hands on access with a fairly modern MMO, which could help coders gain valuable experience in working on a large scale game.

The final advantage is that NCSoft wouldn't have to be responsible for hosting the game servers. Rather, those who purchase the server binaries can host their own servers. All NCSoft would have to do is set up a community page where server hosts can post their addresses.

*******

Now, I feel sure that there are some disadvantages I'm missing... but as the primary servers have been taken offline, I don't think NCSoft or NetDevil have anything to lose by opening the game up.

Wednesday, September 19, 2007

The Vancouver Connection?

Okay, here we go, another follow-up to the Linux.com flood. In the blog flood I stated that I smelled manure coming from two sites out of Vancouver, Washington. Why is the location significant?

Well, for starters, Vancouver, Washington is only 175 miles south of Redmond, Washington, about a 2 to 3 hour commute from Microsoft's corporate headquarters. Microsoft also has two full offices across the river from Vancouver in Portland, Oregon. Back in 2002 and 2003 there were a couple of anti-Linux sites, determined to be run by Microsoft, that were... oh yes. Registered in Vancouver, Washington.

One of the sites also tracked back through Utah, which is known for hosting SCO, widely regarded as a Microsoft puppet.

So I started poking around at the sites I could find that promoted Con Kolivas and REISERFS. The basis for this was Joe Barr's statement from the original Linux.com article:
Given the track record of the Linux kernel, and Torvalds' own history of integrity and straight-talking, the notion of forking the Linux kernel because of Con's wailing and gnashing of teeth makes sense only to those hunkered down in the executive bunkers in Redmond
The more I searched and the more I saw, I kept seeing links back to Vancouver, Washington. Inquiries into public ownership and site registration kept going back to Vancouver.

Never the same address, and sometimes not even the same zip code.

Sure, it's a conspiracy theory. Microsoft couldn't possibly be supporting REISERFS and Con Kolivas with the intent of destroying Linux and forking the kernel... now... would they?



The ReiserFS / EXT3 comparison - more details?

After linking the Linux.com thread in a chat channel, I was asked by one of the admins for further details on the ReiserFS and EXT3 comparison mentioned in the previous blog flood. I have stated before that I am not a coder. I understand the concepts of coding, and I know how to debug written code, but I'm generally helpless at writing code from scratch, and even at simple tasks like compiling. I freely admit this.

My involvement with the ReiserFS and EXT3 comparison came about more because of the hardware I immediately had on hand and was able to get access to. The real problem with benchmarking file systems is that there really isn't much tangible to work with. There are tons of programs available to benchmark graphics and processors, ranging from professional OpenGL and Direct3D suites, to tools built right into commercial games, to home-baked applications that record frame rates.

There are a few tools available to benchmark hard-drive performance under Windows, like WinBench from Ziff Davis, File Copy Test from Xbit Labs, and PCMark. At the time of our testing we were only aware of one benchmark that existed for Linux, Iometer, found at http://www.iometer.org

This, of course, in our view, wasn't exactly going to make for a proper test. So we had to come up with our own methods.

Our first tests involved boot times; some of the results are listed here: http://www.mepisguides.com/Mepis-6/video/what_can_you_use.html

So um... yeah. Might as well admit it: the What Can You Use post was actually a result of our tests. Video output gave us something that no other benchmark would. We could get an accurate snapshot of the time it took to load up an OS, copy files, delete files, run an FSCK, and so on, without having an external program running on top of our host OSes that could affect the performance.

Our actual copying of the files used a couple of different metrics.

One of our tests involved creating 10 different folders and pasting the contents of the Chrono Symphonic MP3s into each of the folders. At about 108 megs per copy, this gave us over a gigabyte of files that averaged 4 megs in size.

We did the same with the FLAC files. At about 455 megs per copy, this gave us well over 4 gigs, with an average file size of about 18 megs.

This, we felt, was representative of the average limited computer user who rips CDs. We then proceeded through various procedures: burning the files to disc, using Amarok to manage the files, and deleting and copying the files both to external USB drives and to a RAID 0+1 stripe/mirror setup of 4 drives on an Adaptec PCI 32-bit SCSI card (don't ask me who made the drives; I want to say WD, although I probably remember incorrectly).
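
For anyone who wants to reproduce something like our copy test, here's a rough sketch in Python, reconstructed from memory (these are not the exact scripts we used, and the paths are placeholders):

    # Time the ten-folder duplication of one ~108 MB album and report
    # throughput. Point src at any folder of MP3s and dest_root at the
    # filesystem under test; dest_root is assumed to start empty.
    import shutil
    import time
    from pathlib import Path

    src = Path("/home/user/chrono-symphonic-mp3")   # placeholder source album
    dest_root = Path("/mnt/testdrive/copytest")     # filesystem being tested
    dest_root.mkdir(parents=True, exist_ok=True)

    start = time.time()
    for i in range(10):
        shutil.copytree(src, dest_root / ("copy_%02d" % i))
    elapsed = time.time() - start

    size_mb = sum(f.stat().st_size for f in dest_root.rglob("*")
                  if f.is_file()) / 1e6
    print("copied %.0f MB in %.1f s (%.1f MB/s)" % (size_mb, elapsed,
                                                    size_mb / elapsed))

The same loop works for the FLAC set; just point src at the larger folder.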

The thing was, with either REISER4 or EXT3 as our default format, we kept running into lots of other bottlenecks... the speed of the optical drive, the speed of the PCI bus, the network interface speed, the amount of RAM in use, the processor...

But in our average desktop use we couldn't find any major performance difference either way.

Now, I'm sure that had we used more advanced server-level hardware, such as a 64-bit PCI slot, PCI-X, or a PCI Express-attached SCSI card, or some other higher-end setup, we might have seen a difference in our performance tests.

On bog-standard user-purchasable hardware that we could buy off Newegg at that point in time and use in our computers? REISER4 wasn't any better than EXT3. It wasn't any worse... but that's not how REISER4 is presented.

Linux.com Repost Flood

The following is a repost of a series of posts I made on Linux.com. Several of these comments were made in response to a spammer, or spammers, and the originating threads were terminated. In the interest of preservation I am putting my comments down here. The substance of some of the posts is documented where I have directly quoted the spammer(s).

*************************

Well. First thing, nobody was vulgar or obscene until you uttered the first obscenity. So our gut reaction is going to be that we don't want you to begin with. The second portion of your statement that appears designed to inflame opposition is the indication that Linux is a niche OS. Well, let's talk realistic market share for a minute. Xandros and Linspire have each surpassed the total number of retail boxes that Apple has sold. Linspire alone sold more personal computers through WalMart than Apple did in all of its retail channels for over 3 years in a row. Dell, Lenovo, and now HP are all offering Linux as a desktop option, for desktop computers.

Linux and Apache don't just rule the server market; the domination is total, with the nearest competitor almost 40-50% behind. Okay. We don't have a reliable metric to prove that Linux has over 100 million desktop users. We know that the figures cited by Distrowatch are meaningless. We do, however, have some insight into the server sales of IBM and Sun Microsystems, as well as figures from AMD and Intel. We know that Linux is the majority option in the server market. We also know that Solaris and the other BSDs are not the majority options, and haven't been for several years.


So, Linux isn't in a niche. OpenBSD, FreeBSD, NetBSD, Novell Netware, Solaris, and Apple's own server business... those are niche products.

The final portion that seems designed to inflame is the line that we did not try to convince you to stay with Linux. Here's a clue for you... We can't. We cannot make you decide anything. The source code is there for you to look at. The mailing lists are there for you to look at. The developers are there for you to talk to. We cannot convince anybody to choose Linux, or any Open Source product. We can, however, give you the choice to use our product. We also then must recognize your choice to not use our product.

I, for one, hate Gnome. I think that it's a dead-end project, and I'm fairly convinced that the Gnome dev team has about as much business designing a desktop interface as I do. However, I will not stop you from using Gnome. I will try to give reasons why you shouldn't, and I will give you the opinion that XFCE is "Gnome Done Right," but that's the extent of it. If somebody were to tell you that you could not use Gnome... then I would have to take issue with that. It is your choice.

From my point of view, having read the comments you've made, it seems to me that you have already decided that Linux is not for you. That's fine. You have expressed your desire to go to another platform. Fine, great. Other platforms offer competition, and if you want to use them, that's your choice.

What you need to ask yourself is this: What keeps you using Linux? Why would you continue to use the product if you don't like the way it is being managed? What do Solaris and the other BSDs lack that has kept you from moving to them to begin with? Why would you indicate that moving to another platform is a threat we should somehow take seriously, and that not catering to your whims would be a bad thing? Linux has hundreds of millions of users, and the product is growing on a daily basis. Losing one or two, or even several hundred, isn't going to hurt Linux.

The fact is, we don't have any reason to convince you. We don't even know who "you" are. I find myself echoing the sentiments of other posters: don't let the door hit you on the way out.


****************


Linux users' arrogance can be so high, it actually harms the OS

Deary. That is true of all operating systems, not just Linux. I can easily point to rants made by Theo de Raadt where he defines pure arrogance. I can link to posts on Ubuntu forums where the response consisted of "RTFM" or absolutely no response at all. I can point to newsgroup postings and forums and mailing lists for Apple Mac developers that will make your skin crawl with how superior they think "their OS" is. I can point directly at Microsoft's own top executives and how they brush off the continuous assaults made upon them by Open Source supporters, computer security experts, and even national governments. I can easily bring up hundreds of forum postings about common encounters with Best Buy and Circuit City employees who snub everything that isn't Windows, or the most recent PC World disaster where the local store refused to fix a hardware problem after being ordered to do so by Corporate.

The fact is this: user arrogance alone does not directly help or hinder the overall OS. Now, if you want to think that it does, and you want to list a singular source that I quite frankly have never heard of, hey, fine. That's your social circle, and that's your business.

Speaking for myself, I realize that I can't control everybody's opinions. I can't control everybody's actions. Some people are going to be the hole in the south end of a northbound donkey. What I can do is this: ignore them as best I can, and let my own words and my own actions carry their weight.


***************


Hang out on Ubuntu's forums or IRC channel for 5 minutes. I think it's just as bad, if not possibly worse. Every group has its share of... "bad" people. The BSDs, I think, are more noticeable since they are much smaller in terms of users and developers, which means that covering the BSD side means having to cover the drama queens and kings. It also doesn't help, I think, that the leader(s) of OpenBSD seem intent on being screaming, raving lunatics every chance (they) get. As I said in another post, I couldn't tell you who leads FreeBSD or NetBSD, and I understand they have a much larger share of the BSD market than OpenBSD. I can tell you a couple of people involved with Solaris, but other than Sun's CEO, not anybody actually important to the project.

Anyways, back to the (semi)point. Covering Linux and Windows? There are enough people trying to do good without being jerks that the news pages are not flooded with "he said"/"she said" shenanigans. In several ways, it is about perception.


******************


Torvalds already says he doesn't care about user freedoms, only efficient software.

Hmm... I'm really not sure where to start with this. My first reaction is to point to the numerous comments Linus made during the GPLv3 development process. As I read it, Linus seemed to be concerned about the freedom of the software itself. Specifically, in an interview with Forbes, Linus stated that he didn't like the GPLv3 because it placed limits on what could be done with the software:

For example, the GPLv2 in no way limits your use of the software. If you're a mad scientist, you can use GPLv2'd software for your evil plans to take over the world ("Sharks with lasers on their heads!!"), and the GPLv2 just says that you have to give source code back.

That certainly sounds like Linus "cares" about user freedoms.

Does Torvalds even want to end Microsoft's monopoly on the desktop,

I'm not really sure where Microsoft enters into this discussion. Linus said this in an interview with LinuxWorld:

I don't actually see it as a battle. I do my thing because I think it's interesting and worth doing, and I'm not in it because of any anti-MS issues. I've used a few MS products over the years, but I've never had a strong antipathy against them. Microsoft simply isn't interesting to me.

The thing is, Linus just doesn't care about Microsoft, and neither do many other open source developers. They don't aim for Microsoft because Microsoft isn't a benchmark. Let's be honest: anybody involved in Open Source communities probably has the opinion that Microsoft writes sloppy and buggy code. Focusing on Microsoft as a target? Well, quite frankly, that would be like aiming for an open sewer pipe if you wanted to go swimming. Speaking for myself, I'm glad that Linus doesn't find Microsoft interesting.

This isn't to say that many of us don't have an interest in breaking Microsoft's monopoly on the desktop, but that battle is actually going to fall to KDE, XFCE, and other quality desktop environments in providing user-friendly desktops. That battle is going to fall to the X.org development teams to provide functional dual desktop support and better graphics support. That battle is going to fall to Compiz and Red Hat's AIGLX developers, and it's going to fall on the Samba team, and it's going to fall on individual distributions.

Saying that Linus alone is responsible for the battle against Microsoft is like trying to pin the economy or gas prices on the sitting President. Reality doesn't work like that. Congress is the target.

The kernel team has their job, yes, but they are only a small portion of the overall scheme. That, however, doesn't address the real issue of Microsoft's monopoly. Keep in mind that many businesses have a saying: nobody got fired for buying Microsoft. Keep in mind that you, personally, vote with your wallet. I don't expect Linus to speak up for me. I speak up for myself, and the fact is, more Linux users are speaking up. More people are beginning to ask Independent Software Vendors just how independent they are. That is what is changing the landscape, not the work of just one person.

or is Linus content to just maintain the status quo with GNU/Linux withering on the vine?

Um. I'm sorry, but I have absolutely no idea what in the world you are talking about here. Last time I checked, Linux usage was up in the cell phone market, up in the embedded device market, and up in the retail PC market. I've seen hundreds of headlines, stories, and editorials about Ubuntu Linux going on Dell systems; about Novell Suse going on Lenovo systems; about Lenovo looking for an alternative Linux supplier; and about HP entering the Linux desktop market. There has been the RadeonHD driver, an official open-sourced driver from AMD, and the code dump from Intel for their graphics drivers. I don't know where you get the idea that Linux is withering; the only market where it appears to have decreased is the web-server market. Even that is not a real decrease, as servers are not dropping Linux so much as moving from Linux+Apache to other services like lighttpd and the Google web hosting service.

For many of us, this is a vote of no confidence in Linus Torvalds' leadership. This is a challenge for Linus to get with the program or get out of the way.

Um. Okay. Get with what program? Do you really want Linus to start targeting Microsoft? I'm also failing to see how this is a vote of no confidence. Last time I checked, Linux was getting results. OpenBSD, FreeBSD, Solaris, Novell Netware... and so on... I see those products losing results. Theo de Raadt's latest outrage over the Atheros driver only served to turn several users off from even looking at BSD. It's one thing to be in the right. It's another thing to rant and spew after the problem has been addressed promptly and politely. Now, I couldn't tell you who on earth runs NetBSD or FreeBSD. I could tell you that Jonathan Schwartz helps with Solaris over at Sun, but even Sun has found profit in selling Linux servers. I can, however, recognize Alan Cox, Linus T., Andrew Morton, and Ingo Molnar. The only reason I know a guy who used to do Linux driver development by the name of Con K. is because he was a media attention seeker who, in my view, threw a fit when he didn't get his way.

Now, if you have somebody else in mind who would be good enough to lead and manage the Linux kernel... by all means, speak up. But keep in mind that such a person would have to deal with the existing kernel driver team. I know that many of the kernel lieutenants have expressed that they do not want Linus's job. Nobody really wants to try to manage the kernel project. Reminds me of a line from the Sean Connery film First Knight:

Once in a lifetime, you meet a man so fearless. No man can touch him. While you're waiting for him, you can practice on me. - Lancelot

Sure, Linus may not be what you want, but he's what we have. And like it or not, there is nobody else waiting in the wings.


***********


Audio, in general, is an issue with Linux. Yes, lots of sound cards are supported. But what, exactly, is the audio standard? I can tell you that OpenGL is the standard for rendering in 3D, but I'm not aware of any consensus on a default audio API. As I see it, Linux has two primary audio systems, ALSA and OSS. I have also seen libraries relating to SDL and OpenAL, and I'm aware of GStreamer, Xine, and I'm fairly certain there is another engine out there.

If you don't get the point: the performance of a single task, such as audio playback, can't be pinned on a single point in the kernel. Yes, the scheduler is going to have an impact on it, but quite frankly, if you are rapidly opening programs, that's more likely to cause a run-in with your RAM access and your hard-drive access.

A quick experiment you can try is to boot up Microsoft Windows and launch 10 programs at once. Have the 7th program play an audio file.

I know you won't do the experiment, as you are no doubt a loyal Linux user without Windows installed on any computer... so I'll go ahead and spoil the story for you. Windows slows down too.
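
If you do want to run the experiment anyway, it's easy to script on either OS. A minimal Python sketch (the commands are placeholders; swap in ten programs you actually have, with the seventh being an audio player):

    # Launch ten programs nearly simultaneously and listen for stutter
    # while the machine absorbs the load.
    import subprocess

    commands = [["program%d" % i] for i in range(1, 11)]  # placeholder commands
    commands[6] = ["mpg123", "/path/to/test.mp3"]         # 7th launch plays audio

    procs = [subprocess.Popen(cmd) for cmd in commands]   # fire them all off
    for p in procs:
        p.wait()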

Now, I know for a fact that if I run Synaptic and have it update my system, I don't get any audio playback loss on my systems. Does that mean I have more knowledge than you and know how to work my systems? Perhaps it does... but perhaps it doesn't. Each package management system runs differently. Now, I know that OpenSuse's package manager is a system resource hog. I've learned the hard way that when it runs, turn everything off.

Now... if I, somebody who has OpenSuse on only one machine and Mepis on all the others, can figure that out... I'm really surprised that it isn't common knowledge.


**************



The time and effort that people put into developing open source software is amazing, but people still need to pay their bills and feed their families. If a code change is proposed that would benefit performance on enterprise servers at the expense of performance on desktop computers, the developers must favor their only source of income.

Performance is performance is performance. Several years ago Intel said 64-bit was meaningless on the desktop and continued right on making 32-bit Pentium 4 chips while AMD was pushing the Athlon64 out the door. Years before that, hard-drive vendors said RAID was meaningless on the desktop, and finding a hardware RAID card could run into thousands of dollars... now both the Radeon Xpress and the Nvidia Nforce are doing RAID on-chip in motherboards under $60 (US).

The idea that changes made to increase performance in the server market are going to decrease performance on desktop computers is... ludicrous. Performance is performance is performance. Okay, granted, home computers are only now getting the ability to handle 4 threads at once with the Athlon64 Quad-Core Barcelona. Big deal. You could already get servers with 32, 64, and 128 processors. All of the work making applications SMP aware for those massive systems has a direct impact on improving performance on the small systems. Users can take advantage of the AMD Barcelona Quad-Core today, instead of waiting for software to be made SMP aware.

There is a trickle-down effect. Technology developed for the server market will eventually make it to the home market. Maybe not today, but it will get here. The work done by Evans and Sutherland on their massive R300-based GPU boxes with 64 GPUs is coming right back and being used in CrossFire management and setup.

I think the idea that server performance was hurting desktop performance was started by Con K. What I don't know is whether Con K. was out to spread F.U.D. because he didn't get his way or whether he actually thought he had a legitimate point. I personally agree with Linus: 3D gaming isn't the only performance metric out there. I care a lot about hard-drive efficiency, memory usage, and processor efficiency. I'd rather have a kernel that is built to handle any situation I can throw at it in a decent manner than a kernel that handles one situation really well while having a hard time with others. This isn't to say I don't care about 3D performance. But I know enough to say that 3D performance comes more from the graphics drivers than from the kernel itself. If OpenGL is set up correctly in the drivers, the kernel shouldn't have that much impact on the final frame rate and response.



************

Fork the Linux kernel to get REISER4.

Hmm? Okay, I must admit I'm a little confused as to how forking the kernel is involved in getting a filesystem implemented. Never mind a format that is effectively dead as-is. REISER4 offered no significant performance or feature improvement over EXT3 (braces for the impact of synthetic benchmarks), and EXT4 has been in the kernel since 2.6.19. Adding REISER4 support at this point in time is, um... well, I don't want to say meaningless, but it would be a useless maneuver. As is, REISER support overall has been dropped by several shipping distributions.

Fork the Linux kernel to get GRAPHICS that work (not sabotaged crap).

Okay, mate. I have no idea what in the world you are on about here. Whose graphics are sabotaged, and how in the world does the kernel have anything to do with that? I'm afraid I'm just going to have to pass on trying to answer this without further data.

REISER4 is a great filesystem.

Um. No. It wasn't. REISER4 was an improvement over REISER3, so it wasn't bad. But it wasn't some magical miracle pill that would fix all the problems associated with hard-drive access and response. Note the key word: was.

GRAPHIC drivers need to work.

Again, what in the world are you on about? I have ATi cards from the Radeon PCI AIW to the Radeon X1900, and Nvidia cards from the TNT2 to the Geforce 7900. I have laptops with Intel GPUs and Radeon Xpress GPUs. All of them work great... if I use the appropriate drivers. Thing is, graphics drivers don't fall upon the kernel developers. The drivers fall upon...

THE X WINDOW SYSTEM DEVELOPMENT TEAMS: THEY ARE AT X.ORG

Yes, it's the kernel's responsibility to handle the driver requests, but it's not the kernel team's responsibility to build each and every driver. It is up to the vendors who make the hardware, and up to those involved with X.org. If you honestly believe that Linus, or the kernel development team for that matter, is solely responsible for graphics support, you simply do not understand how a Linux distribution is built. If ATi does their job properly building the Catalyst drivers, I should be able to move from one driver version to another without changing the kernel. I also look to the people who create my distribution to provide an appropriate ATi driver install. I don't look to ATi to provide an installer for my OS.

There are many LinuxKernel SABOTEURS, that work to prevent the Linux ever truly becoming a threat to Microsoft.

Okay, let's presume that this statement is somehow true. Prove it. (btw, fixed your spelling mistake)

The fact is, Linux as an OS is a threat to Microsoft right now. It is more stable. It is faster on the same hardware. It has far better out-of-the-box compatibility. It actually has a real, working 3D desktop. Several distributions are scoring OEM wins and getting Linux as a factory pre-installed option. The LiveCD OS concept, started by Knoppix and blown open by Mepis, has created countless derivatives, one such Mepis knockoff now being found on Dell computers: Mark Shuttleworth's Ubuntu Linux.

Where you get the idea that the kernel alone keeps Linux from being a threat to Microsoft is beyond me. The kernel team does their job, and from a technological standpoint, they passed anything Microsoft could do over a decade ago.

The real problem is, I think somebody has been lying to you. Now, I don't know who. I don't know how you got the opinions you hold. What I do know is what I see, and what I see is a bunch of shouted terms and a couple of links to some sites, two of which are owned by the same registrant out of Vancouver, Washington. Sorry, but I can smell the manure from here on that one.

Now, if you want to hold the opinion that REISER4 is worth being in the kernel, here's how you go about getting it in:
Contact your distribution vendor and ask them why REISER4 isn't offered as an option at install.

Did you get that? Here, I'll say it again:

Contact YOUR distribution vendor.

Now, since I don't think you understand, I am going to spell this out real clearly for you. In baby terms.
REISER4 can be added at any time, to any distribution, by anybody who feels like it. There are hundreds of packages, drivers, and other tools shipped daily as part of reconfigured distribution kernels that are not in the mainline kernel. That's one of the differences that sets distributions apart: what gets added to the kernel. REISER4 does not have to be part of the mainline Linux kernel to be used on your machine.

Inclusion in the mainline tree is an indication that a technology has become widespread or is worth using. The fact that several distributions are stripping REISER support out, period, full stop, indicates a general trend away from the REISER format overall. Now, if the opposite were happening, if REISER4 support were being added in or being made the default option on distributions, then yes, it might make sense to pursue kernel inclusion.

For now? It's just best to forget REISER and start working on the problems with the dozens of other file systems supported by Linux, or on getting Sun's ZFS ported to the Linux platform.


***********

Wow, you really are an idiot. If you like Reiser4 so much, just compile your own kernel or send an email to your distribution. That website's benchmarks also show that Ext4's performance is very similar to Reiser4's, and Ext4 has the added benefit of providing backwards compatibility with Ext3/Ext2.

Thank you for repeating what I already said, but I think it is obvious that the person who is responding has no interest in doing their own research into the matter. The thing is, I have had a much better coder than myself assist in adding REISER4 into a Mepis 6.0 Final build and into a Fedora Core 6 build. We tested on a number of different hard-drives in single-drive mode: a Hitachi Deskstar SATA 300 @ 80gig, a Samsung SATA 150 @ 250gig, a Western Digital SATA 150 @ 120gig, a Western Digital ATA 100 @ 120gig, a Maxtor ATA 133 @ 120gig, and a Hitachi Deskstar SATA 150 @ 160gig. We also tested a couple of different RAID 0 setups: a couple of Promise controllers on two of the Maxtor ATA 133s @ 120gig, Nforce3 SATA RAID with the Hitachi SATA 150s @ 160gig, and Radeon Xpress SATA RAID on the Hitachi SATA 300s @ 80gig, plus getting fancy by sticking the SATA 300 drives on SATA 150 ports and running SATA 150 drives on SATA 300 ports.

Know what we found? No difference from EXT3. We went from an AMD K6-2 @ 500MHz and a Pentium3 @ 500MHz, up through a Socket 754 Athlon64 3200+, a Socket 939 Athlon64 X2 4200+, and a Socket AM2 Athlon64 4800+, with a Socket A Barton 2500+ in there somewhere. We had a couple of systems from Intel as well: the Radeon Xpress 200M board holding an Intel P4 630, and an at-the-time brand spanking new Core2 system using an Intel board.

In our testing we couldn't find any repeatable real-life situation where the REISER4 release was faster than EXT3. In most common hard-drive scenarios, there was no detectable difference at all.
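For the curious, here's a sketch of the sort of timing run we did, not our actual harness: time a large sequential write on whatever filesystem is mounted at a test path, sync, and report throughput. The mount path and sizes below are placeholders.

/* Sequential-write throughput sketch. Point the path at a mount
 * formatted with the filesystem under test. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define CHUNK (1 << 20)   /* 1 MiB per write() call */
#define TOTAL_MB 512      /* arbitrary test size */

int main(void)
{
    char *buf = malloc(CHUNK);
    struct timeval t0, t1;
    double secs;
    int fd, i;

    if (!buf)
        return 1;
    memset(buf, 0xAB, CHUNK);

    fd = open("/mnt/test/bench.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    gettimeofday(&t0, NULL);
    for (i = 0; i < TOTAL_MB; i++)
        if (write(fd, buf, CHUNK) != CHUNK)
            return 1;
    fsync(fd);            /* make the filesystem actually do the work */
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d MB in %.2f s = %.1f MB/s\n", TOTAL_MB, secs, TOTAL_MB / secs);
    close(fd);
    return 0;
}

Run it against an EXT3 partition, then against a REISER4 partition on the same drive, and compare. On our hardware the numbers came out a wash.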

So, why didn't we publish our results like good little academic researchers? Note the OSes we were using: Mepis 6.0 and Fedora Core 6. Yeah, we were wrapping up our research just as Hans Reiser was being hauled off on murder charges. We made the choice not to publish our results because, A: REISERFS was going to be as good as dead with Reiser gone, and B: we felt that publishing our results, even on our casual blogs, would be seen as a hack job against the guy. We've seen it before, the claims of people with an axe to grind. So we junked it.

Looking back now, with people simply refusing to give up on REISER4's entry into the mainline kernel, maybe we should have published our results. At this point, though, the details would be approaching a calendar year out of date, and with improvements to the baseline kernel and other sections of the GNU system, our original results would be meaningless. I'm not really interested in repeating all of the tests, even on the same hardware, unless somebody pays for it this go-around. Maybe Namesys has improved REISER4 so that it actually surpasses EXT3. Thing is, EXT3 was "good" enough, and EXT4 is better. EXT4 is backwards and forwards compatible; REISER4 isn't. I could continue on into the technical details of why the REISER formats are bad, but that wasn't a good enough argument a year ago, and I doubt it's a good enough argument now.

Tuesday, September 04, 2007

Reggie slams Sony's Home plans

I generally don't read GameInformer, but I do shop at GameStop... and I make enough used-game purchases that I benefit from the discount a GameInformer subscription gets me.

Most of the time the GameInformer goes straight into the trash. I don't think much of their editorial or review staff. For example, in the latest issue, GameInformer rated Nintendo's E3 Business Summit press conference a C, and stated that the lineup of software wasn't good enough to make it the number one new console. Nintendo passed the Xbox 360 last month to become the number one new console sold. Instead of stopping the presses and saying "oops, we was wrong yet again," GameInformer went to press anyway.

So, I'm glancing through and I see that they are interviewing Reggie. Like other reporters, GameInformer is hung up on the perception that Nintendo isn't serious about online. After saying that Mario Kart, Strikers, FIFA, and Madden were nice steps, they pursued the social networking angle.

The question was: "But what about something like Home? That would seem to be more tailored to a casual audience."

Reggie's Response: I think it's been done before, right? It's called Second Life.

Wow... I never would have guessed that Reggie was so good at throwing insults...

The thing is, Second Life is kept alive by the Furry Fandom, discussed already in a previous post, and only by a few dedicated people at that. Second Life barely has a tenth of the regular players of even low-end free online RPGs, and is dwarfed by titles like City of Heroes and Everquest, never mind the order-of-magnitude jump up to the current MMO titan, World of Warcraft.

From the perspective of a profitable game, Second Life is anything but.

In the MMO market, or social networking, there really is no greater insult than comparing a product to Second Life.