Discoverable data centre infrastructure

David Cuthbertson of Square Mile Systems was kind enough to demonstrate his AssetGen software to Denis and me last week.

Once the data has been entered into a CMDB like AssetGen, all sorts of very impressive reports can be generated very quickly.

Implementing a CMDB involves a heavy up-front investment because at least 50% of your infrastructure and its associated dependencies have to be entered by hand.

The cause of that steep initial investment is that physical infrastructure in the data centre is invisible to auto-discovery software: it cannot be discovered automatically in the way that devices on the network can.

Denis was chatting over the weekend to a chap working for a well-known insurance company; they have 8,000 devices on their network that they’ve given up trying to track down.

If your server cabinets knew what equipment they contained, maintaining a CMDB would require a far smaller initial investment of time and money. In addition, intelligent infrastructure would make tracking changes to the infrastructure much easier.
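To see why the networked kit is the easy half, here is a minimal sketch of the kind of sweep a discovery tool performs (the subnet is made up). Anything passive, such as a cabinet or a patch panel, never answers, which is why it has to be entered by hand.

    import subprocess

    def ping_sweep(prefix="192.168.1"):   # hypothetical subnet
        """Ping every host on a /24 and return the addresses that answer."""
        alive = []
        for host in range(1, 255):
            addr = "%s.%d" % (prefix, host)
            # One ping, one-second timeout (Linux ping flags).
            rc = subprocess.call(
                ["ping", "-c", "1", "-W", "1", addr],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            )
            if rc == 0:
                alive.append(addr)
        return alive

    if __name__ == "__main__":
        for addr in ping_sweep():
            print(addr)

Real discovery tools go further (SNMP, ARP tables, switch port maps), but the principle is the same: if it doesn’t talk on the network, it doesn’t get found.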

Intel study shows no ill effects from using unconditioned air

Intel have carried out a limited pilot to find out how a data centre would perform without the usual data centre environmental controls [PDF].

The top and bottom of it was that, over a nine-month test period, servers exposed to unconditioned air and limited air filtration performed as well as servers in a fully air-conditioned data centre.

Does this mean that you can switch off all of your air conditioners and circulate unconditioned air instead? No, I’d wait for longer follow-up studies before you do that. 😉

[via vnunet.com]

Cabling as data centre art

The folks over at Pingdom spotted some great data centre cabling art.

[Photo courtesy of Digital:Slurp.]

[Photo courtesy of ChrisDag.] Looks like something off Star Trek.

[Photo courtesy of mbm3290.]

[Photo courtesy of Jeff Newsom.] I wish our cabling looked like this.

[Photo courtesy of tim d.] Don’t like the look of the power cable though!

And of course there are some downright scary ones. πŸ™‚

Looking forward to 2008

We expect two main trends to continue to drive business throughout 2008:

  • Convergence: a lot of people not normally associated with computers and communications are being drawn in, most notably electricians working in the building industry. With things getting sticky in the housing market, it is likely that a lot of electricians will be looking for alternative sources of revenue;
  • Heat in the data centre: it’s not just the planet’s environment that’s warming up… servers keep getting hotter too, with only modest signs that things are going to change any time soon. The data centre environment is going to be a concern for a while yet.

In mid-March we will be going to the ELEX show in Harrogate. Given the first item above, you won’t be surprised to know that we’ll be showcasing cable testers aimed at the converged electrician.

Devices for measuring and alerting on environmental conditions keep getting better. We expect that trend to continue throughout 2008. In fact, Sensatronics have just released the first firmware upgrade for their rack-mount environment monitor. I’ll post more fully about that when I’ve collated all of the new features.

In addition, we’ve had good results with network-enabled thermometers in non-IT environments too. Warehouses and cold storage facilities gain the same benefits from convergence with the network as the IT industry has over the last decade or so.
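As an illustration of how simple these devices make monitoring, here is a hedged sketch of a polling loop. The URL, response format and threshold are all made up; real units (Sensatronics included) expose readings over HTTP or SNMP, so check your device’s documentation for the actual interface.

    import time
    import urllib.request

    SENSOR_URL = "http://192.168.1.50/temp"   # hypothetical endpoint
    THRESHOLD_C = 27.0                        # example alert threshold

    def read_temperature():
        """Fetch the current reading; assumes a plain-text Celsius response."""
        with urllib.request.urlopen(SENSOR_URL, timeout=5) as response:
            return float(response.read())

    while True:
        temperature = read_temperature()
        if temperature > THRESHOLD_C:
            print("ALERT: %.1f C exceeds %.1f C" % (temperature, THRESHOLD_C))
        time.sleep(60)                        # poll once a minute

In practice you’d send an email or SNMP trap rather than print, but the whole job really is this small once the sensor is on the network.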

At the top end of the cable tester market, Agilent continue to build a very fine platform with fibre, 10 gig and alien crosstalk capabilities. We can look forward to more great products from them. The great thing with the Agilent approach is that you are freed from the buy, trade-in cycle. I suppose, for the more cynical reader, you replace it with a cycle of buying once and then performing repeated software upgrades. 😉

With economic conditions uncertain, it looks like 2008 is going to be interesting to say the least. πŸ™‚

The elephant and the cloud

[Image: elephant flying on clouds]

The most interesting thing about technology change is the odd juxtapositions it throws up. If you’d asked me a few years ago who would be the leader in cloud computing, I wouldn’t have predicted that it would be Amazon.

Sure, Amazon know how to run very large websites. But how did they go from e-commerce pioneer to cloud computing provider? It’s kinda like your local supermarket deciding that they’d like to build ships.

The odd thing is: where is Microsoft? You would have thought they would be very keen to get the developer eyeballs currently heading towards Amazon.

I’m sure Microsoft could build a platform around the .NET runtime, virtualise it and rent it out on a scalable infrastructure.

Microsoft are the obvious company to deliver a cloud computing service. They have a large developer following, a mature tool set, and languages and libraries that developers are already familiar with.

The main problem with Amazon’s offering is that, for Microsoft developers, you have to start from scratch. You’ve got to learn a whole raft of new technologies and languages. If you’ve no alternative then that’s what you do. But, if Microsoft can deliver cloud computing using tools you already know, then they are in the driving seat.

One thing is certain: creating scalable websites just got a whole lot easier and cheaper.

Update June 2013: Microsoft have indeed built a scalable .NET based PaaS offering leveraging their developer toolset, called Windows Azure. It is maturing very nicely.

Compute upon a cloud

[Image: data centre worker]

Interesting what Amazon is up to… first with cloud storage, then cloud computing and now cloud databases. Is the art of data centre management going to be concentrated into a few massive data centres?

We currently rent a single Sun box, running Linux oddly enough, in a data centre to run all of our websites and email. One of the downsides of renting a machine is the limited capacity of storage, CPU and bandwidth. If you go the Amazon way then capacity becomes elastic: you can increase it when you need to and reduce it when you don’t.
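To make “elastic” concrete, here is a hedged sketch using the boto library for Amazon’s EC2 API. The AMI ID is made up and credentials are assumed to be in the environment; this is an illustration of the idea, not a production script.

    import boto.ec2   # pip install boto

    conn = boto.ec2.connect_to_region("us-east-1")

    # Scale up: start an extra server when load demands it...
    reservation = conn.run_instances(
        "ami-12345678",              # hypothetical machine image
        instance_type="m1.small",
    )

    # ...and scale back down when it doesn't, so the meter stops running.
    instance_ids = [i.id for i in reservation.instances]
    conn.terminate_instances(instance_ids=instance_ids)

Compare that with a rented box, where adding capacity means a phone call, a purchase order and a wait.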

The upside of renting is that your costs are known beforehand.

Would we consider moving over to a service like Amazon? Yes, but with a few reservations:

  • Data security: we need to be PCI DSS compliant because we handle online payments. We must ensure that cardholder data cannot be compromised;
  • Budget limits: how can we make sure that we don’t run up ridiculous bills, whether through programming error or a breach in security?
  • Support: who are we going to call when things go wrong?
  • Denial of service: will the cloud come with DoS mitigation services and insurance?
  • Firewall: you’d better be sure you have one. PCI DSS mandates a firewall, and you need to make sure that access to your ports is limited. That’s best done off the server; see the sketch after this list.
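On the firewall point, here is a hedged sketch of an off-server firewall built with EC2 security groups via boto. The group name and address ranges are examples only; actual PCI DSS compliance needs a proper review, not just this.

    import boto.ec2   # pip install boto

    conn = boto.ec2.connect_to_region("us-east-1")

    # A new security group denies all inbound traffic by default.
    sg = conn.create_security_group("web-pci", "Web servers, locked down")

    # Open only the ports the application actually needs.
    sg.authorize(ip_protocol="tcp", from_port=443, to_port=443,
                 cidr_ip="0.0.0.0/0")        # HTTPS from anywhere
    sg.authorize(ip_protocol="tcp", from_port=22, to_port=22,
                 cidr_ip="203.0.113.0/24")   # SSH from the office range only

Because the filtering happens in Amazon’s network rather than on your server, a compromised machine can’t simply rewrite its own firewall rules.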

We really are at the beginning of the virtual computing and cloud computing revolutions. I expect the IT world will look very different when both have run their respective courses. Though, of course, both virtual and cloud computing are very much bound together.

One side effect of concentrating more and more computing into central hubs is the head count reduction that will likely follow. If your data centre disappears or shrinks, why employ so many people to manage it?

What is likely to happen is that a layer of service providers will be created to allay a lot of the above concerns, especially the support issue. Amazon probably won’t be interested in problems with my particular virtual image, but a service provider who built the virtual image in the first place will be.

Virtual computing will also challenge software licensing. Any software that is licensed per CPU is going to be very expensive to run inside a virtual image that can be executed on very large computers, and indeed on many computers at the same time.
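A quick back-of-envelope illustration, with entirely made-up prices:

    # All figures are invented for illustration only.
    per_cpu_licence = 10000                     # example cost per licensed CPU

    dedicated_box = 2 * per_cpu_licence         # 2-CPU server: 20,000
    big_virtual_host = 32 * per_cpu_licence     # same image on a 32-CPU host: 320,000
    ten_hosts = 10 * 2 * per_cpu_licence        # same image on ten 2-CPU hosts: 200,000

    print(dedicated_box, big_virtual_host, ten_hosts)

The software hasn’t changed at all; only where the image happens to run has, yet the bill is an order of magnitude bigger.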

Data centre heating effects

One of the side effects of the recent RackSpace outage in their Dallas/Fort Worth data centre has been finding out just how quickly their data centre heats up when the air conditioning system fails.

Our backup generators kicked in instantaneously, but the transfer to backup power triggered the chillers to stop cycling and then to begin cycling back up again—a process that would take on average 30 minutes. Those additional 30 minutes without chillers meant temperatures would rise to levels that could result in data loss and irreparably damage customers’ servers and devices. We made the decision to gradually pull servers offline before that would happen.

WOW! 30 minutes from air-con failure until temperatures reach a level at which servers start being damaged. I knew the temperatures would go up fast, but I didn’t think they’d heat up that fast.
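A rough back-of-envelope check, with all the figures assumed for illustration, shows why it is so quick. The air alone would heat up far faster than this; it is the thermal mass of the building and the equipment that stretches the time out to the half hour RackSpace saw.

    # Back-of-envelope: heating rate of the air in a hall once chillers stop.
    # All figures below are assumptions for illustration only.
    load_w = 1000000          # assumed 1 MW of IT load, all ending up as heat
    hall_m3 = 5000            # assumed air volume of the hall
    air_density = 1.2         # kg/m^3
    air_cp = 1005.0           # J/(kg*K), specific heat of air

    air_mass = hall_m3 * air_density               # about 6,000 kg of air
    rise_per_s = load_w / (air_mass * air_cp)      # about 0.17 K per second

    print("%.1f K per minute" % (rise_per_s * 60)) # roughly 10 K per minute

With the air alone warming at something like 10 K a minute under these assumptions, half an hour of grace is, if anything, generous.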