Everything is a Ghetto

While reading this controversial link bait, consider buying my product/service

Can Google Create Community?

I have been meaning to blog for ages about a site I use called Goodreads. It is really excellent; I signed up ages ago and can't remember exactly how I found it, but I think it might have been as a Rails app. Anyway, it is a fairly simple idea: log what books you read (or want to), rate them and share reviews with others. I am on there as thattommyhall. What convinced me to use it was the really stellar monthly emails, with interviews with authors and picks of books on a certain theme.

I have been playing with my new HTC Hero, and there is a Google Android app that can scan books and add them to the (easily overlooked) My Library section of Google Books. While the app is good, there is no real feeling of community there. Orkut does not seem to have taken off, and they released Wave today, but I don't know if they can really get a feeling of community into their apps. I love the stuff they put out, and use Gmail and Docs in particular pretty much constantly, but Facebook, Flickr and lots of other sites somehow seem to get more of a community vibe. Perhaps it's even the Google branding - it all looks the same.

Anyway, I have things to do today; if anyone gets an invite to Wave and wants to send me one, please do.

Getting VMware Certified Professional (VCP) on vSphere 4

On Saturday I took and passed the VCP410 exam to get VCP4.

It was not really that hard, though I have been reading about vSphere since before it shipped, follow loads of blogs on VMware, installed it as soon as it went into beta and migrated my company's clusters to it relatively early. I would say if you have VCP3 and have used vSphere you should be OK.

The frustrating thing about the exam was the questions on the configuration maximums document. In my view, if you are approaching the maximums you could just look them up, so memorisation is a pointless exercise. A lot of the maximums are just decisions someone at VMware made: how many NFS stores by default? (8). Maximum? (64). What is the tree-depth per resource pool? (12… unless you use DRS, then it's 10). This kind of memorisation is stupid, pointless hoop-jumping and will be the difference between passing and failing for lots of people.

The exam (like most IT certs) is multiple choice, so the questions are fairly mundane and of course there is only one correct answer. When interviewing candidates, I always prefer questions that start “what is your…” rather than “what is…”, as anything so unsubtle as to have only one answer is probably too uninteresting to spend time discussing.

I did do a night's worth of revision however, using:

Also worth considering are

You may like to see the things I have added to Delicious on vmware over the last few years.

Good luck if you take it too!

Hiatus, Departure, Return

It's been ages since I blogged, as I have been mad busy at work, though a lot has happened recently.

I have:

  • Left thebigword

  • Packed up my house

  • Sold/gave away most of my possessions (keeping only books and my PC - as my friend Ben said, this “proves you are principally concerned with knowledge”)

  • Left Leeds

  • Gone to India

  • Returned (earlier than planned but refreshed and excited about the future)

  • Set up a limited company to go contracting

  • Started plotting a move to London

  • Begun making big lifestyle changes - drinking less, eating better and training for the Paris Marathon, a triathlon and a return to India for some Hardcore Mountaineering next year

All is well in TomLand, expect more posts now I’m not so sillybusy!

FusionIO ioDrive

Well, I got my hands on one of the Fusion-io ioDrives a couple of weeks ago. Unfortunately they do not work in the version of VMware ESX that we are using, though they are working on drivers for the 64-bit ESX4. I did not have time to set up a physical machine to test our application running SQL Server 2005, so I have just quickly done some IO benchmarks in Linux at home. I was going to test btrfs and its SSD mode at the same time, but hit too many problems trying to get the drivers and the btrfs kernel module working together.

First, set up a 4-drive RAID0 array for comparison:

root@George:/home/tom# fdisk -l | grep 500
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
Disk /dev/sdc: 500.1 GB, 500107862016 bytes
Disk /dev/sdd: 500.1 GB, 500107862016 bytes
Disk /dev/sde: 500.1 GB, 500107862016 bytes

root@George:/home/tom# mdadm --create /dev/md0 -l 0 -n 4 /dev/sd[bcde]
mdadm: array /dev/md0 started.
root@George:/home/tom# mkfs.ext2 /dev/md0
root@George:/home/tom# mkfs.ext2 /dev/fioa
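Not shown in the transcript above, but before benchmarking it's worth a quick sanity check that the stripe came up as expected and that both filesystems are mounted - something like the below (the mount points match the paths used in the fio runs later):

```shell
# Confirm md0 is an active 4-member RAID0 stripe
cat /proc/mdstat
mdadm --detail /dev/md0   # chunk size, state and member disks

# Mount both freshly formatted filesystems for the fio runs
mkdir -p /raid0 /fusionio
mount /dev/md0 /raid0
mount /dev/fioa /fusionio
```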

fio

I was looking for an iometer-alike for Linux to quickly get some semi-meaningful results (bonnie++ was returning results saying it was too quick to measure, or something). fio lets you create a text-file description of a workload, with a choice of IO libraries and loads of options; you can also set concurrent jobs. See this excellent linux.com article for more info.

random-read-test-aio-32thread-20G.fio

[random-read]
rw=randread
size=20G
ioengine=libaio
iodepth=32
direct=1
invalidate=1
root@George:/fusionio# fio /fio/random-read-test-aio-32thread-20G.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=32
Starting 1 process
random-read: Laying out IO file(s) (1 file(s) / 20480MiB)
Jobs: 1 (f=1): [r] [100.0% done] [139M/0K /s] [35K/0 iops] [eta 00m:00s]
random-read: (groupid=0, jobs=1): err= 0: pid=25705
  read : io=20480MiB, bw=157614KiB/s, iops=39403, runt=133056msec
    slat (usec): min=4096, max=4096, avg=4096.00, stdev= 0.00
    clat (usec): min=324, max=325739, avg=788.05, stdev=1238.41
    bw (KiB/s) : min=45800, max=196536, per=100.24%, avg=157996.42, stdev=22439.38
  cpu          : usr=9.23%, sys=71.81%, ctx=2180058, majf=1, minf=698
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued r/w: total=5242880/0, short=0/0
     lat (usec): 500=0.01%, 750=78.27%, 1000=10.86%
     lat (msec): 2=9.31%, 4=1.11%, 10=0.30%, 20=0.10%, 50=0.04%
     lat (msec): 100=0.01%, 500=0.01%
Run status group 0 (all jobs):
READ: io=20480MiB, aggrb=157614KiB/s, minb=157614KiB/s, maxb=157614KiB/s, mint=133056msec, maxt=133056msec


Disk stats (read/write):
  fioa: ios=5244775/2, merge=0/0, ticks=303464/0, in_queue=0, util=0.00%

random-write-test-aio-32thread-20G.fio

[random-write]
rw=randwrite
size=20G
ioengine=libaio
iodepth=32
direct=1
invalidate=1
root@George:/fusionio# fio /fio/random-write-test-aio-32thread-20G.fio
random-write: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=32
Starting 1 process
random-write: Laying out IO file(s) (1 file(s) / 20480MiB)
Jobs: 1 (f=1): [w] [100.0% done] [0K/18210K /s] [0/4446 iops] [eta 00m:00s]
random-write: (groupid=0, jobs=1): err= 0: pid=7105
  write: io=20480MiB, bw=23406KiB/s, iops=5851, runt=895978msec
    slat (usec): min=4096, max=4096, avg=4096.00, stdev= 0.00
    clat (msec): min=1, max=322, avg= 5.30, stdev= 6.29
    bw (KiB/s) : min=    0, max=94080, per=99.99%, avg=23404.82, stdev=13749.60
  cpu          : usr=3.19%, sys=27.08%, ctx=5303118, majf=0, minf=4369
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/5242880, short=0/0
     lat (msec): 2=14.96%, 4=46.44%, 10=17.79%, 20=20.45%, 50=0.23%
     lat (msec): 100=0.06%, 250=0.06%, 500=0.01%
Run status group 0 (all jobs):
  WRITE: io=20480MiB, aggrb=23406KiB/s, minb=23406KiB/s, maxb=23406KiB/s, mint=895978msec, maxt=895978msec
Disk stats (read/write):
  fioa: ios=164/5339709, merge=0/0, ticks=28/961368, in_queue=0, util=0.00%

Now the RAID0 array, which took all night to complete the same tests.

root@George:/raid0# fio /fio/random-read-test-aio-32thread-20G.fio ;
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=32
Starting 1 process
Jobs: 1 (f=1): [r] [100.0% done] [4136K/0K /s] [1010/0 iops] [eta 00m:00s]
random-read: (groupid=0, jobs=1): err= 0: pid=20079
  read : io=20480MiB, bw=3475KiB/s, iops=868, runt=6033707msec
    slat (usec): min=4096, max=4096, avg=4096.00, stdev= 0.00
    clat (usec): min=3, max=1048K, avg=36802.65, stdev=37338.47
    bw (KiB/s) : min=  982, max= 4367, per=100.10%, avg=3478.65, stdev=279.09
  cpu          : usr=0.55%, sys=1.84%, ctx=4607245, majf=0, minf=30634
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued r/w: total=5242880/0, short=0/0
     lat (usec): 4=0.01%, 50=0.01%, 100=0.01%, 250=0.03%, 500=0.09%
     lat (usec): 750=0.01%, 1000=0.01%
     lat (msec): 2=0.02%, 4=0.93%, 10=16.14%, 20=26.65%, 50=32.57%
     lat (msec): 100=16.79%, 250=6.56%, 500=0.19%, 750=0.01%, 1000=0.01%
     lat (msec): 2000=0.01%
Run status group 0 (all jobs):
   READ: io=20480MiB, aggrb=3475KiB/s, minb=3475KiB/s, maxb=3475KiB/s, mint=6033707msec, maxt=6033707msec
Disk stats (read/write):
  md0: ios=5242880/4900, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    sdb: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%
    sdc: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%
    sdd: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%
    sde: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%


root@George:/raid0# fio /fio/random-write-test-aio-32thread-20G.fio

random-write: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=32
Starting 1 process
random-write: Laying out IO file(s) (1 file(s) / 20480MiB)
Jobs: 1 (f=1): [w] [100.0% done] [0K/9427K /s] [0/2301 iops] [eta 00m:00s]
random-write: (groupid=0, jobs=1): err= 0: pid=2789
  write: io=20480MiB, bw=10075KiB/s, iops=2518, runt=2081336msec
    slat (usec): min=4096, max=4096, avg=4096.00, stdev= 0.00
    clat (msec): min=1, max=773, avg=12.31, stdev= 7.27
    bw (KiB/s) : min=    0, max=20552, per=100.09%, avg=10083.67, stdev=2143.10
  cpu          : usr=1.62%, sys=11.61%, ctx=5332417, majf=0, minf=10120
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/5242880, short=0/0
     lat (msec): 2=0.01%, 4=0.01%, 10=23.86%, 20=75.38%, 50=0.70%
     lat (msec): 100=0.04%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
Run status group 0 (all jobs):
  WRITE: io=20480MiB, aggrb=10075KiB/s, minb=10075KiB/s, maxb=10075KiB/s, mint=2081336msec, maxt=2081336msec
Disk stats (read/write):
  md0: ios=159/5353967, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    sdb: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%
    sdc: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%
    sdd: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%
    sde: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%

Key figures are IOPS, bw (bandwidth) and completion time.

IOPS: read - 39403 vs 868 (~45x); write - 5851 vs 2518 (~2x)

Bandwidth (KiB/s): read - 157614 vs 3475 (~45x); write - 23406 vs 10075 (~2x)

Time (s): read - 133 vs 6034 (~45x); write - 896 vs 2081 (~2x)
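The ratios can be sanity-checked straight from the raw fio figures; a quick awk sketch (the numbers are copied from the benchmark output above):

```shell
# Speedup of the ioDrive over the 4-disk RAID0 array,
# using the IOPS and bandwidth figures from the fio runs above.
awk 'BEGIN {
  printf "read IOPS speedup:  %.1fx\n", 39403 / 868    # random read
  printf "write IOPS speedup: %.1fx\n", 5851 / 2518    # random write
  printf "read bw speedup:    %.1fx\n", 157614 / 3475  # KiB/s, random read
  printf "write bw speedup:   %.1fx\n", 23406 / 10075  # KiB/s, random write
}'
```

This prints roughly 45.4x for reads and 2.3x for writes, matching the ~45x and ~2x figures above.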

I am surprised the array wrote so fast; ~800 IOPS for both read and write was my expectation. I did not have time to repeat with a different IO library, and can't explain why random writes would be 3x faster than random reads on a RAID0 array, so I think it's to do with libaio.

In short, my home PC temporarily had read IOPS equivalent to 200 hard disks spinning at 15,000 RPM. Ace.

Prices are as follows:

  • 80GB ioDrive (SLC) US$3,600
  • 160GB ioDrive (SLC) US$7,200
  • 320GB ioDrive (MLC) US$14,400
  • 320GB ioDrive Duo (SLC) US$11,900
  • 640GB ioDrive Duo (MLC) US$9,795

See http://www.fusionio.com/Products.aspx for more information.

ZFS, It’s Sometimes Good to Know How Screwed You Are.

I have just had a disk fail on my NAS - actually it happened ages ago, but I was too broke to replace it. At the same time as one disk being faulted, another was degraded through having too many errors. Below is my interaction with ZFS to discover the extent of the problem and “fix” it.
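For reference, a typical diagnose-and-replace sequence looks something like the below - the pool name (tank) and device names here are placeholders for illustration, not my actual setup:

```shell
# See which devices are FAULTED/DEGRADED and whether data errors were found
zpool status -v tank

# Replace the faulted disk with the new one in the same slot,
# then watch the resilver progress
zpool replace tank c1t3d0
zpool status tank

# Scrub afterwards to verify checksums pool-wide
zpool scrub tank

# If the degraded disk's errors turn out to be transient,
# clear its error counts and keep an eye on it
zpool clear tank c1t2d0
```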

Christmassy Shizzle

Last weekend I went down to London again for a Christmas jolly with some great old uni friends; it was a lovely weekend of good wholesome fun.

On the Saturday I went to see the Wallace Collection while some girlies shopped on Kensington High Street (where the sales are apparently excellent, but I'm not sure I could have tolerated a moment). There was an Osbert Lancaster exhibit there that was great, alongside the permanent collection, which is excellent. I had never heard of him before but he is definitely worth investigating; the blurb on the wall described him as a dandy aesthete, something I have always considered myself.

In the nighttime we watched the 1951 Scrooge with Alastair Sim in an incredible private cinema, ate nice food and got a little squiffy (port was involved, you will be surprised to learn).

On the Sunday we went to see the Babylon exhibit in the British Museum and I returned to Leeds feeling tired but ace. Babylon has appeared in art of all forms, not least a few Jazz numbers.

Byzantium at the Royal Academy of Arts

I was in London a few weeks ago and saw the wonderful exhibit at the Royal Academy of Arts.

I have been interested in Byzantine history since a historian friend at university described how they were essentially the eastern Roman empire: they called themselves Romans, spoke Greek, were Christian and survived well into the Middle Ages. As ever, Wikipedia has a good introduction to the topic.

Off to London again next weekend, going to see the Babylon exhibit at the British Museum and have a lovely Christmas celebration with some good old friends.

Edinburgh

I have recently returned from Edinburgh, where I caught the tail end of the Fringe festival. It was a good trip, and the first time I have had more than a day off work since February. I saw quite a few acts in the final three days. It's been ages since I blogged and I'm out of the habit, so I'll just post loads of vids.

Summer Fun

I have been working really hard of late and have decided to block-book a load of long weekends this summer and get outdoors a bit. I have been thinking about doing a long-distance path for ages and have decided to do one in early August, probably the West Highland Way. It is 95 miles and I reckon I can walk 20 a day, so I should be able to fit it in if I take a Friday and a Monday off work. I have just gone shopping for some kit so I can do it as lightweight (and brutal) as possible, and got my gadget fix at the same time. This is ambitious as I have done nearly nothing for almost 3 years, but fuck it. I am in Snowdonia next week and will see just how bad my fitness is, and over the next six weeks I will do as much prep as I can.

From Alpkit.com, a great store selling direct from the factory at low cost.

Hunka Bivy, £30 Bivy

Gourdon 30L Watertight Rucksac, £20 Gourdon Bags

I wish they had the Wee Airic mat in stock, but I got a Therm-a-Rest one instead (cost three times as much!) Wee Airic, £17.50 Airic

From golite.com:

Ultralite Poncho/Tarp, £26 Poncho

JetBoil, £46. Been thinking about one of these for a while, very efficient use of the gas, boils real quick and stows in the 1L pot. JetBoil

I am well excited about it.

BodyWorlds in Manchester

I went a few weeks ago to see BodyWorlds at the Museum of Science and Industry (MOSI) in Manchester.

I have only just had a chance to get the pics off my phone and am amazed at how well they came out.

  Tennis Player - all one body; look at the shared foot. Magnificent Beast - this was the highlight for me, what incredible musculature.

Blood Vessels This is amazing, enough features remain with just the blood vessels that you could probably recognise him if you knew him in life.

  Newton's Cradle? Either a real-life visible human or a macabre Newton's cradle.

It was a great day out.