Day 2: Equinix
On the 29th of September, the Startup Catalyst crew and I visited the Silicon Valley Equinix data centres.
The data centres were huge, monolithic cubes with no windows, signage or banners. We approached what appeared to be an entrance and started looking around for signs of life. After a few moments of us standing around awkwardly, the front doors slowly swung open, as if in a horror movie, presenting us with a dark, empty hallway. A security guard walked into view and directed us to the correct data centre. From there, we were greeted by security personnel who escorted us into the facility. Our passports were checked and we were each given a name badge showing our security clearance. Food and drinks were strictly forbidden inside the data centre, so they were locked up in a secure room to await our return.
Inside, we were introduced to Bill Strong, the Senior Director of IBX Operations in the North West Region. We then moved into a chamber reminiscent of an airlock. Before we could progress into the next section of the facility, the door to the previous section had to be sealed, presumably to prevent anyone from subtly sneaking in after us. While we waited, we admired an artwork of neatly arranged ethernet cables on the wall.
We were in. Well, almost. We had progressed through three doors (one to enter the main facility and two for the airlock), each requiring a full biometric hand scan of an employee with clearance before it would open. On the other side of a glass wall was a gargantuan sea of servers and racks, neatly arranged into groups by large metallic cages. Bill scanned his hand and let us through the next door, where I was promptly blasted by a wave of cooled air.
From here, we walked down the hallway while various features of the centre were explained to us. The data centre is essentially a real estate provider: floor space is rented to customers, who bring their own infrastructure, including servers, switches and so on. Customers are charged for space and power consumption. Equinix provides a plethora of optical fibre connectivity options, and SV5 (the data centre we visited) is known as a “carrier neutral” facility. A carrier is a company that owns network infrastructure, such as Verizon, AT&T, Sprint, Telstra or Optus. This means that customers have plenty of choice, leading to high competition between carriers and great prices for connectivity. Equinix even provides copper telephone connectivity for older servers that still require it, and will run direct fibre connections between servers on request.
Other features of the data centre include fire protection, which consists of localised water sprinklers: a fire may damage a single server, but other customers would not be affected. They had previously used an oxygen depleter, but it had an unfortunate damaging effect on hard drive platters. Discharging it was also a global event (i.e. the entire facility was affected), so even a small fire could potentially damage servers throughout the building. The facility also uses a highly sensitive smoke detection system to catch fires very early. Bill used the analogy: “if you lit a match and immediately put it out, an engineer would be notified immediately and likely come running”.
We then walked upstairs and looked through a window at a massive battery system capable of running the facility for 12 minutes. While that may seem short, the centre draws the equivalent power of 40 to 50 thousand houses, which I found really impressive. These batteries provide the critical time needed to start up two diesel generators, which carry enough fuel to power the facility for three days. Several contracted companies are on standby to refill these tanks if mains power is expected to be down for longer. The company has recently announced plans to run the centre entirely off renewable energy and is looking at wind farms to provide the power.
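To get a feel for the scale, here's a rough back-of-envelope calculation using the figures quoted on the tour. The average household draw of about 1.2 kW is my own assumption, not a figure from Equinix:

```python
# Back-of-envelope sizing from the figures quoted on the tour.
# The ~1.2 kW average household draw is an assumption of mine.

houses = 45_000          # midpoint of the quoted 40-50 thousand houses
avg_house_kw = 1.2       # assumed average household draw (kW)
facility_mw = houses * avg_house_kw / 1000

battery_minutes = 12     # quoted battery runtime
battery_mwh = facility_mw * battery_minutes / 60

print(f"Estimated facility load: {facility_mw:.0f} MW")
print(f"Battery energy needed: {battery_mwh:.1f} MWh")
```

Under those assumptions the facility draws on the order of 54 MW, so bridging 12 minutes until the generators spin up means storing roughly 11 MWh in batteries.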
This was followed by a trip to the roof, where I asked one of the engineers whether solar panelling could be viable, but supposedly the region is not suited to it. We then stood inside one of the cooling vents, which was similar to walking inside a giant 16ºC-22ºC leaf blower. After this, we walked back through the hall of endless servers towards phase 3 of the building. This section is still under construction, as it is highly cost effective to expand the interior of the building as required, rather than setting up everything from the start. In total, the entire cost of the building was upwards of $100 million, at roughly $100,000 per square metre.
At the end of our tour, we were escorted out into a boardroom. Here, Bill Norton, Vice President of Research at IIX, took the stage.
“Nobody expects everything to work perfectly all the time,” he said, only that “you need to keep customers in the loop.” I found this pretty interesting – in Australia, I find that people are fairly intolerant of things breaking.
He told us the story of an engineer working at the 111 8th Avenue Carrier Hotel in New York, and how one day everything went wrong. At the hotel, nine generators were installed on the roof to supply power in the event of an outage. Originally, these generators were fed by a large fuel tank on the roof. However, after the 9/11 attacks, large fuel tanks were no longer allowed on the roofs of buildings. To solve this, a smaller tank was installed on the hotel's roof and connected to a larger tank on the ground via a pipe and pumping system. The story goes like this: the engineers on site checked the pump every now and again by switching it on and listening for whether the internal motor made any sound. They never checked whether fuel was actually being pumped from the large tank to the small tank.
One day, the power went out. The generators kicked in and power was restored to the building. Customers were notified that the mains power had stopped, but that the generators had kicked in and all was good. Then, the generators started to die. The engineer checked the small fuel tank, only to find that it was empty. He ran down numerous flights of stairs (since the power was out and the elevator had stopped working) and checked the large fuel tank. It was full, and he could hear the pump running. Why was the small tank empty? Little did the engineer know that the polarity of the pump's wiring was reversed, meaning that the pumping direction was also reversed (i.e. fuel was being pumped from the small tank down to the large tank, away from the generators). He reversed the polarity and fuel started flowing up to the small tank and towards the generators.
Still, the generators would not start. When they had run out of fuel, the generators had rapidly run their startup motors in an attempt to restart. The engineer ran back up the stairs and discovered that the startup motors had burnt out. He ran back down the stairs, across town to pick up new motors, then back to the hotel, back up the stairs, and installed them.
The generators still would not start. When the small tank had been drained by the pump, the sludge at the bottom had been drawn into the generators and clogged their filters. The engineer discovered this, ran across town, got new filters, ran back and installed them.
The generators finally started up again and power was restored to the building. Customers were not impressed. Not because the power went out, as “nobody expects everything to work perfectly all the time”, but because they were not kept in the loop. The last piece of information sent was that everything was working well. No communication was sent after the generators failed.
We said our goodbyes and presented both Bills with the gift of Australian chocolate. While photos are strictly forbidden inside, an exception was made and the security guards took a photo of us in the main lobby. I enjoyed the visit and hope that someday I will be able to use one of Equinix’s data centres in a startup.