I enjoyed creating planet network management so much I thought I’d do it again, this time centred around data centres. Friends, I give you planet data center! I’ve populated the site with feeds from this site’s blogroll and my feed reader… if you know of any other data-centre-oriented blogs worth reading, please get in touch.
If you can think of any other planet sites you would like to see, I’m all ears! 🙂
Everybody knows that one of the biggest consumers of electricity in a data centre is the air conditioning system. There are two main avenues for reducing the cost of air conditioning: either make the air conditioning system more efficient so that it consumes less electricity, or remove the requirement for so much air conditioning in the first place by running your data centre hotter.
It looks like running the data centre hotter is gaining some ground. The Rackable CloudRack C2 is a new server that can run safely at temperatures around 40°C rather than the more normal range of 20 to 23°C.
The main problem is that just having one component of your data centre capable of running at high temperatures isn’t terribly useful. Expect to see a lot more manufacturers releasing high-temperature-capable equipment. Cost is bound to be a hot-button issue for a lot of data centre managers in these recessionary times.
David Cuthbertson of Square Mile Systems was kind enough to demonstrate his AssetGen software to myself and Denis last week.
Once the data has been entered into a CMDB like AssetGen, all sorts of very impressive reports can be generated very quickly.
Implementing a CMDB involves a heavy up-front investment because you have to manually enter at least 50% of your infrastructure and associated dependencies.
The steep initial investment stems from the fact that physical infrastructure in the data centre is invisible to auto-discovery software, so it cannot be discovered automatically in the way that devices on the network can.
Denis was chatting to a chap working for a well known insurance company over the weekend and they have 8,000 devices on their network that they’ve given up trying to track down.
If your server cabinets knew what equipment they contained, then maintaining a CMDB would require a far lower initial investment of time and money. In addition, intelligent infrastructure would make tracking changes to the infrastructure much easier.
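To illustrate the idea, here’s a minimal Python sketch of how an inventory might be assembled if cabinets could report their own contents. The cabinet names, asset IDs and record fields are all invented for the example; a real intelligent cabinet would report over the network rather than via a hard-coded dictionary:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    asset_id: str
    kind: str          # e.g. "server", "switch"
    rack_unit: int     # position within the cabinet

def build_inventory(cabinet_reports):
    """Merge per-cabinet asset reports into a flat CMDB-style inventory.

    cabinet_reports maps a cabinet name to the list of assets that
    cabinet claims to contain, as an intelligent cabinet might report.
    """
    inventory = {}
    for cabinet, assets in cabinet_reports.items():
        for asset in assets:
            inventory[asset.asset_id] = {
                "kind": asset.kind,
                "cabinet": cabinet,
                "rack_unit": asset.rack_unit,
            }
    return inventory

reports = {
    "CAB-01": [Asset("srv-001", "server", 12), Asset("sw-007", "switch", 42)],
    "CAB-02": [Asset("srv-002", "server", 10)],
}
inventory = build_inventory(reports)
print(inventory["srv-001"]["cabinet"])  # CAB-01
```

Because each record carries its reporting cabinet, a move from one cabinet to another shows up automatically on the next poll, which is exactly the change-tracking win described above.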
Intel have carried out a limited pilot to find out how a data centre would perform without the usual data centre environmental controls [PDF].
The top and bottom of it was that, over a nine-month test period, the servers exposed to ordinary non-air-conditioned air with limited air filtration performed as well as servers in a fully air-conditioned data centre.
Does this mean that you can switch off all of your air conditioners and circulate non-conditioned air instead? No, I’d wait for longer follow-up studies before you do that. 😉
Working in the server room and busting for a pee? You need to install a toilet where it’s convenient… yes folks, somebody laid out a server room so that you have to go through the women’s toilet in order to enter it.
Just think of all of the time you’d save not having to walk all the way to the men’s toilet.
The folks over at Pingdom spotted some great data centre cabling art.
Courtesy of Digital:Slurp.
Courtesy of ChrisDag. Looks like something off Star Trek.
Courtesy of mbm3290.
Courtesy of Jeff Newsom. I wish our cabling looked like this.
Courtesy of tim d. Don’t like the look of the power cable though!
And of course there are some downright scary ones. 🙂
We expect two main trends to continue to drive business throughout 2008:
- Convergence — a lot of people not normally associated with computers and communications are being drawn in, most notably electricians working in the building industry. With things getting sticky in the housing market, it is likely that a lot of electricians will be looking for alternative sources of revenue;
- Heat in the data centre — it’s not just the planet’s environment that’s warming up… servers keep getting hotter too, with only modest signs that things are going to change any time soon. The data centre environment is going to be a concern for a while yet.
Mid March we will be going to the ELEX show in Harrogate. Given the first item above, you won’t be surprised to know that we’ll be showcasing cable testers aimed at the converged electrician.
Devices for measuring and alerting on environmental conditions keep getting better. We expect that trend to continue throughout 2008. In fact, Sensatronics have just released the first firmware upgrade for their rack-mount environment monitor. I’ll post more fully about that when I’ve collated all of the new features.
In addition, we’ve had good results with network enabled thermometers in non IT environments too. Warehouses and cold storage facilities gain the same benefits from convergence with the network as the IT industry has over the last decade or so.
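As a rough illustration of the alerting side, here’s a small Python sketch that flags sensors whose readings fall outside an acceptable band. The sensor names and thresholds are invented, and a real poller would of course fetch readings from the thermometers over the network rather than use a hard-coded dictionary:

```python
def breaches(readings, low, high):
    """Return the names of sensors whose latest reading in degrees C
    falls outside the acceptable band [low, high]."""
    return [name for name, temp_c in readings.items()
            if not (low <= temp_c <= high)]

# Sample readings as a poller might collect them from networked sensors.
readings = {"rack-3-top": 41.5, "rack-3-mid": 24.0, "cold-store": -19.0}

# A wide band covering both the server room and the cold store.
print(breaches(readings, low=-25.0, high=30.0))  # ['rack-3-top']
```

The same threshold check works unchanged whether the sensor sits at the top of a rack or in a cold-storage warehouse, which is the convergence point being made above.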
At the top end of the cable tester market, Agilent continue to build a very fine platform with fibre, 10 gig and alien crosstalk capabilities. We can look forward to more great products from them. The great thing with the Agilent approach is that you are freed from the buy, trade-in cycle. I suppose, for the more cynical reader, you replace it with the buy then perform repeated software upgrades cycle. 😉
With economic conditions uncertain, it looks like 2008 is going to be interesting to say the least. 🙂
One of the most interesting things about technology change is the odd juxtapositions it throws up. If you’d asked me a few years ago who would be the leader in cloud computing, I wouldn’t have predicted that it would be Amazon.
Sure Amazon know how to run very large websites. How did they go from e-commerce pioneer to cloud computing? It’s kinda like your local supermarket deciding that they’d like to build ships.
The odd thing is: where is Microsoft? You would have thought they would be very keen to get the developer eyeballs currently heading towards Amazon.
I’m sure Microsoft could build an infrastructure around the .NET runtime, virtualise it and rent it to people on a scalable infrastructure.
Microsoft are the obvious company to deliver a cloud computing service. They have a large developer following, a mature tool set, and languages and libraries developers are already familiar with.
The main problem with Amazon’s offering is that, for Microsoft developers, you have to start from scratch. You’ve got to learn a whole raft of new technologies and languages. If you’ve no alternative then that’s what you do. But, if Microsoft can deliver cloud computing using tools you already know, then they are in the driving seat.
One thing is certain: creating scalable websites just got a whole lot easier and cheaper.
Update June 2013: Microsoft have indeed built a scalable .NET based PaaS offering leveraging their developer toolset, called Windows Azure. It is maturing very nicely.
Interesting what Amazon is up to…first with cloud storage then cloud computing and now cloud databases. Is the art of data centre management going to be concentrated into a few massive data centres?
We currently rent a single Sun box, running Linux oddly enough, in a data centre to run all of our websites and email. One of the downsides with renting a machine is the limited capacity of storage, CPU and bandwidth. If you go the Amazon way then capacity becomes elastic: you can increase it when you need to and reduce it when necessary.
The upside of renting is that your costs are known beforehand.
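The trade-off is easy to put numbers on. Here’s a quick Python sketch of the break-even point between a flat rental and pay-per-hour capacity; the prices are entirely hypothetical:

```python
def break_even_hours(monthly_rent, hourly_rate):
    """Hours of elastic usage per month at which pay-per-hour costs
    catch up with a flat monthly rental. Illustrative pricing only."""
    return monthly_rent / hourly_rate

# Hypothetical figures: a rented box at 150/month versus an
# on-demand instance at 0.40/hour.
hours = break_even_hours(150.0, 0.40)
print(round(hours))  # 375: below roughly 375 hours/month, elastic is cheaper
```

The arithmetic cuts both ways, of course: a box that must run around the clock (about 730 hours a month) sails past that break-even point, which is why the predictability of a flat rental still appeals.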
Would we consider moving over to a service like Amazon? Yes, but with a few reservations:
- Data security — we need to be PCI DSS compliant because we handle online payments. We must ensure that card holder data cannot be compromised;
- Budget limits — how can we make sure that we don’t run up ridiculous bills, either through programming error or a breach in security?
- Support — who are we going to call when things go wrong?
- Denial of Service — will the cloud come with DoS mitigation services and insurance?
- Firewall — you can be sure you’re going to need a firewall. PCI DSS mandates one, but you also need to make sure that access to your ports is limited, and that’s best done off server.
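On the firewall point, the principle is default deny: permit only the ports you explicitly need and refuse everything else. A tiny Python sketch of the idea, with an example port list that is purely illustrative:

```python
# Web traffic only; every other port is denied by default.
ALLOWED_INBOUND = {80, 443}

def allow_inbound(port, allowed=frozenset(ALLOWED_INBOUND)):
    """Default-deny policy: permit only explicitly allowed ports,
    as an off-server firewall or provider rule set would."""
    return port in allowed

print(allow_inbound(443))   # True
print(allow_inbound(3306))  # False: the database port stays closed
```

Keeping that rule set off the server matters because a compromised host could otherwise rewrite its own firewall rules.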
We really are at the beginning of the virtual computing and cloud computing revolutions. I expect the IT world will look very different when both have run their respective courses. Though, of course, both virtual and cloud computing are very much bound together.
One side effect of concentrating more and more computing into central hubs is the head count reduction that will likely follow. If your data centre disappears or shrinks, why employ so many people to manage it?
What is likely to happen is that a layer of service providers will be created to allay a lot of the above concerns, especially the support issue. Amazon probably won’t be interested in problems with my particular virtual image, but a service provider who built the virtual image in the first place will be.
Virtual computing will also pose challenges for software licensing. Any software that is licensed per CPU is going to be very expensive to run inside a virtual image that can be executed on very large machines, and indeed on many machines at the same time.
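The per-CPU arithmetic is easy to demonstrate. A quick Python sketch with entirely hypothetical pricing:

```python
def licence_cost(per_cpu_price, cpus):
    """Total cost of a per-CPU licence. Illustrative pricing only."""
    return per_cpu_price * cpus

# Hypothetical: 2,000 per CPU. A licence sized for a 4-CPU box
# balloons when the same image floats onto a 32-CPU host, or runs
# as 8 simultaneous 4-CPU instances.
print(licence_cost(2000, 4))      # 8000
print(licence_cost(2000, 32))     # 64000
print(licence_cost(2000, 8 * 4))  # 64000
```

An eight-fold jump in licence cost for the same software is exactly the sort of surprise that per-CPU terms spring in a virtualised world.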