Has Microsoft Lost The Plot?

I made a comment on a blog post over at The Loop which I quite liked. I didn’t want it to just disappear so I thought I’d put a copy here:

Microsoft have lost the plot.
They appear to be hanging *everything* on two fallacious premises:
1) Users want the exact same interface across every device they have.
2) Users want MS Office on every device they have, and will start buying devices with MS Office on them as soon as such devices are available.
No. 1 seems quite daft to me – and to many others, in my experience. On my mobile touch screen device, I want an interface that is designed to work well on a mobile touch screen device. On my desktop, I want an interface that is designed to work well on a desktop. Etc.
As far as this “we’ve listened to our users and brought back the Start button” thing goes – from what I can tell, it’s not the Start button that people wanted back and/or missed from the pre-Metro days – it was the Start *Menu*.
And they haven’t brought that back at all. Forcing people to change interface paradigms every time they want to find an app is, again, daft.

What?! A “Windows only” hard disk??

Came across these Western Digital Black2 Dual Drive units the other day. What a cool idea, I thought! Just the thing for a couple of laptop users here at work who prefer the speed of an SSD in their laptops but would like a bit of extra room for VMs and the like.

So I ordered three – one for my work iMac as well. I’d recently upgraded the iMac with an older 256GB SSD – which was fine – though not a huge amount of space obviously.

The units arrived and I set about hooking one up to the iMac using the handy USB-to-SATA laptop drive cable that it came with.

Only the 120GB SSD drive portion appeared.

Digging a little further, I found that the unit is supported in Windows only – you need to install software in Windows to enable access to the 1TB disk in the Black2 Dual.

Bugger.

Once I’d finished ranting about how crap it was that a hard disk vendor would build a hard disk only for Windows, I had a think. There *must* be a way to make this work – surely! Just because they haven’t built this ‘enabler’ software for OSX doesn’t mean it absolutely can’t work.

So, I went to work getting it set up on a Windows laptop, figuring that I could unlock the disk in Windows & then pop it back in my iMac and configure a Fusion drive across the two disks.

Annoyingly – or cleverly I guess, depending on how you look at it – you cannot unlock the 1TB portion of the drive while it is connected via USB. It actually has to be resident inside the machine (or at least, connected to the SATA bus) to unlock.

This meant I had to image the existing SSD in the laptop onto this one – using the aforementioned handy cable.

Once it was unlocked, I connected it up to the iMac and converted the partition table from MBR to GPT using the gdisk utility. Note that the 1TB portion shows up as a partition, NOT as a second hard disk as I had suspected it might, based on the reviews I’d read.
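For anyone trying the same thing, the MBR-to-GPT conversion itself is pretty painless – gdisk builds an equivalent GPT in memory when it opens an MBR disk and writes it out when you tell it to. A rough sketch follows (disk3 is just where the Black2 happened to show up for me – check diskutil list first):

diskutil list                    # find the Black2 – disk3 in my case
diskutil unmountDisk disk3       # gdisk needs the disk unmounted before it can write to it
sudo gdisk /dev/disk3            # reads the existing MBR and converts it to GPT in memory
# then at the gdisk prompt, 'w' writes the new GPT table out to the disk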

I removed all the partitions from the first 128GB of the disk and created an EFI partition, then ran the Apple Recovery Disk Assistant tool to create a Recovery partition on the new disk.
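If you’re following along, creating the EFI partition in gdisk is just the usual new-partition dance – the sketch below uses the standard 200MB size and the EF00 type code rather than being a copy of my exact session:

Command (? for help): n
Partition number: 1
First sector: (accept the default)
Last sector: +200M
Hex code or GUID: EF00
Command (? for help): w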

Excitedly, I then used the directions here to create the Fusion drive.

diskutil cs create Fusion disk3s2 disk3s3

Unfortunately this resulted in a POSIX Input/Output error so it seemed like that was the end of the road.
Frustrated, I posted a brief report to a MacRumors forum thread in which I’d left a question.

Overnight, “Weaselboy” replied with a few further links to check which renewed my hope that it might work.

One in particular – this excellent (as usual) article from AnandTech – described how the controller uses LBA to address the different areas of the disk. Here was the reason for my renewed hope.

Ok, I thought, let’s just wipe the whole thing in my iMac and create the partitions again.

So I did.
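The wipe itself was nothing clever – for the record, something along these lines does the job (disk3 again being the Black2), after which I recreated the EFI and Recovery partitions as before:

diskutil unmountDisk disk3
diskutil eraseDisk JHFS+ Black2 GPT disk3    # blow everything away and lay down a fresh GPT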

And, this time, it worked. Similar steps would mean this disk could be ‘enabled’ for use in a linux machine as well – it would work really well with / mounted on the SSD portion and /home on the 1TB mechanical portion.
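As a rough sketch of what that linux layout might look like – the device names and filesystem below are purely illustrative and would depend on how you partition the disk:

# illustrative /etc/fstab entries – device names are placeholders
# 120GB SSD portion as the root filesystem
/dev/sda2   /       ext4   defaults,noatime   0  1
# 1TB mechanical portion for /home
/dev/sda3   /home   ext4   defaults           0  2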

Ubuntu LTS Server upgrade – really difficult?

At my place of work, we use a Java-based trouble-ticketing system from Atlassian called Jira.

It is hosted on a LAMP server virtual machine in our production VMware environment. The system has been in daily use (well, week day use) since near the end of 2008 – requiring minimal maintenance in that time (the occasional reboot after security updates have been installed).

Up until yesterday, we had been using Ubuntu 8.04 LTS Server. I decided it was time to move to the latest LTS release – 10.04 – which was released earlier this year and had just received its first .1 refresh.

Some googling around revealed the potential for various issues with the process so I took a snapshot before beginning – just to be safe.

I then found this link which detailed how to upgrade the server to the next LTS release.

I was shocked at how simple the process appeared to be – surely not?! This is that crazy technical, awful command line operating system with a really high cost of ownership, isn’t it?

So, SSH’ing into the server, I took a copy of /etc (just being extra safe again), fired up a screen session and ran the command as instructed on the page above.

sudo do-release-upgrade
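For the record, the ‘extra safe’ steps before running that command amounted to nothing more exotic than something like this (exact paths from memory):

sudo tar czf /root/etc-pre-lucid.tar.gz /etc    # keep a copy of /etc, just in case
screen -S upgrade                               # so the upgrade survives a dropped SSH session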


Various package lists were fetched from the internet and the upgrade was calculated; I then had to press Y to show my acceptance of the results.

Everything slowed down at this point due to our internet connection speed (changing soon, yay!). I disconnected and went to sleep.

This morning, I connected back to the server and the screen session to find that a reboot was necessary. So, one more Y and a reboot later, the 10.04.1-based system was up and running.

I fired up a browser and pointed to the Jira system – fail. Oh noes, I thought, now it gets difficult.

Well, no, not really. Over the course of various Ubuntu releases since 8.04, the sun-java6-* packages were moved into the partner repository.

So, I uncommented the partner repository in /etc/apt/sources.list, ran an apt-get update and reinstalled the sun-java6-jre package.
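From memory, that boiled down to something like this – the repository line being the standard partner entry for lucid:

# /etc/apt/sources.list – uncomment the partner repository
deb http://archive.canonical.com/ubuntu lucid partner

sudo apt-get update
sudo apt-get install sun-java6-jre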

A reboot (only to test that everything would start by itself as it should) and Jira is running again, no data lost and inbound email requests to the system are working. Awesome.

Just so you get the significance of that, imagine doing an in-place upgrade (i.e. not a fresh install) of a Windows 2000 Server running IIS5 and SQL 2000 and having it come out running Windows Server 2008, IIS7 and SQL 2008.

Two reboots, no data loss, no restores necessary and all done remotely. And Jira was actually still running and available for most of the time except when the box was rebooting and having java re-installed.

Yep, *really* difficult. Watch out.

what a difference an AHCI makes

Last week, I noticed that, whenever heavy disk IO was taking place on my quad-core workstation at work (4GB of RAM, 64-bit Ubuntu), the whole desktop environment would pretty much grind to a halt.

SSH’ing in from a remote machine and using top, iotop and nethogs didn’t show anything particularly heart-stopping going on either.

I googled around and found that this seemed to be a fairly common problem with any of the newer kernel releases.

One post in particular said that a person had fixed the problem by disabling the SATA disk controller’s AHCI mode in the BIOS – switching it back to IDE.

Cool I thought – let’s have a go! Interestingly, the BIOS was already set to IDE. I decided I’d try enabling AHCI instead.

Wow – what a difference that made. I then remembered another of the posts I’d come across, which simply said to switch the BIOS setting, as that forces the OS to load a different disk controller driver.
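If you’re curious which driver you’re actually getting, a couple of quick checks will show it (output varies by chipset, of course):

lspci -k | grep -iA3 sata      # look for 'Kernel driver in use: ahci' (vs ata_piix)
dmesg | grep -i ahci           # AHCI messages at boot if the newer driver is in play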

It certainly did the trick – said work-beastie is now much faster and more responsive under load.

segfaulting multimedia processes -or- The Case of the Badly Cooled……Case

A wee while ago (yes, I’m catching up on things I’d hoped to blog about for a while!), I had a problem with my home PC. This culminated in a post to the Ubuntu Forums.

General stability of this machine is great – it’s normally on for weeks at a time serving the family’s various document/web/email/printing needs – and has done this for about four years, with the only major hardware changes being a new 7600GT graphics card (most recently, about 12 months ago) and a new Socket 478 P4 Extreme Edition CPU (about 18 months ago).

So, what do you guys think? Hardware or software? And how do I troubleshoot this one further? (BTW, I’ve been a linux user for about 8 years now, so I’m not really a guru and definitely not a noob. Perhaps more of a goob. 😀 )

Basically, I had an issue where, whenever I’d do some ‘heavy lifting’ tasks – like audio or video encoding – the app would just disappear. Very odd it was – I tried all sorts of things to fix it: new linux distros, replacement RAM, etc.

Starting the processes from the command line, I was able to see that the app termination was actually a segfault – which I subsequently found in the dmesg log.

It turned out to be the CPU overheating. Interestingly, under Ubuntu there was nothing in dmesg about the CPU overheating – though, when I had Debian (lenny) on, it did show messages about that, and when I booted into a Fedora 10 Live CD, it also complained about the CPU overheating in dmesg.
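If you’re chasing something similar, a couple of one-liners make the check easy (lm-sensors needs to be installed, and sensors-detect run once, for the second one):

dmesg | grep -iE 'thermal|temperature'   # any CPU throttling / overheating messages?
sensors                                  # current temperatures as reported by lm-sensors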

So, to solve the problem, I transplanted the guts of my box into a new case which breathes better and also used the correct heatsink for my CPU (one with a copper core).

The problem was that I was still using the same case and heatsink from my old P4 2.8GHz, which weren’t cutting it with the new P4EE 3.4GHz and the amount of heat it generates.

Once the correct heatsink and a better case with more efficient airflow were in place, the difference in internal temperatures was quite remarkable:

If anyone is interested, here’s some temps from lm-sensors that show the difference in internal temps between the two cases and heatsinks. These are both just at system idle with no loading.

Before
SDA: 37C | SDB: 34C | GPU: 57C | CPU: 40C

After
SDA: 33C | SDB: 28C | GPU: 40C | CPU: 23C

Under load, the CPU was getting to around 70C; now it’s able to stay around 57C – and with no segfaulting going on! Yaay!

The rather cool thing – from my point of view anyway – is that Windows would merely have blue screened under the same circumstances (or just rebooted as the default blue screen setting dictates). Obviously that would make things much harder to troubleshoot.

So linux dealt with the overheating by terminating the offending process. A much more elegant way of handling things – don’t you think?

A change of OS can void warranty…

I’m totally gobsmacked.

Today, I was talking to a friend about her new Vista laptop and the various troubles she’s been having. She said she’d love to run Ubuntu on it but couldn’t because removing Vista would void the laptop’s warranty.

Now, I can understand playing with the hardware itself voiding the warranty (overclocking etc), but formatting the hard drive and installing another OS? What the ….

I did a bit of googling on the subject and it turns out to be quite widespread too – apparently a lot of people have been told this by various manufacturers.

Watch out for that, I’d say… it certainly makes even more sense to buy a machine with nothing on it.

Security and feeling safe.

I’m at a regional managers meeting in Hamilton at the moment. At tea tonight, I was talking with a guy about a country he visited about three years ago.

He relayed how armed guards escorted him around, as the company he worked for had had staff held hostage by locals looking to extort money from the company. So there was a lot of real, in-your-face security – the guards, airport-style luggage and people scanning at hotels, etc.

The next thing he said intrigued me and I instantly saw an interesting parallel.

He said, “I decided I’d never go back there again – I just didn’t feel safe.”

I thought it ironic that one would (quite rightly) feel unsafe in a place where there was a huge security presence – all these extra, and very obvious, protocols and safeguards. In fact, the very presence of these things created a sense of things not being safe.

Amazing how millions of Windows users have yet to identify this…..

interesting vista….

Just helping a friend set up a new PC he bought. Brand new dual-core HP – nice piece of kit for an off-the-shelf thing.

It took an age for Vista to configure itself and be ready for tweaking and installing stuff. It was actually quicker to boot an Ubuntu live CD, repartition the hard disk and install a whole operating system (and more!).

Rather interesting I thought.