Reducing emissions in an institutional context will require process and behavioral change, not a "one-size-fits-all" approach
This blog post is the eighth of an eleven-part series on CSRwire that summarizes key lessons from the new book Cold Cash, Cool Climate: Science-based Advice for Ecological Entrepreneurs.
In a world trying to minimize climate risks, large institutions will need to modify their structures and behavior, and entrepreneurs can help them do that by creating supporting products, processes and services. To reduce emissions in an institutional context, companies will typically follow a set of steps like the ones below (it’s a similar process to the one that an entrepreneur might use to develop game-changing new innovations, as I discussed earlier in Envisioning the Future We Want to Create):
- Create a baseline inventory of corporate greenhouse gas emissions and track over time.
- Project greenhouse gas emissions and commit to an aggressive improvement target for a future year, toward which the company will work.
- Link greenhouse gas emissions to each business function.
- Identify opportunities for transformational environmental improvements (using whole systems integrated design) and create business plans for capturing them.
- Assign responsibility for implementation.
- Implement highest impact and most profitable changes in business processes.
- Measure impacts over time.
- Reward technical staff and managers for achieving improvement targets.
- Reevaluate opportunities each year and implement the highest impact and most profitable opportunities first.
- Lather, rinse, repeat.
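The annual cycle described in the steps above can be sketched as a simple loop: rank the identified opportunities by return, implement the highest-impact and most profitable ones that fit the year's budget, measure the result, and repeat. This is an illustrative sketch only; the project names, costs, ROIs, and tonnage figures below are invented, not from the book.

```python
# Illustrative sketch of the annual "steps to sustainability" loop:
# identify opportunities, implement highest-ROI first within a budget,
# measure impacts, and repeat each year. All project data are hypothetical.

def run_cycle(baseline_tons, projects, years, budget_per_year):
    emissions = baseline_tons
    # Reevaluate opportunities: highest ROI first.
    remaining = sorted(projects, key=lambda p: p["roi"], reverse=True)
    history = []
    for year in range(years):
        spent = 0.0
        implemented = []
        for p in list(remaining):
            if spent + p["cost"] <= budget_per_year:
                spent += p["cost"]
                emissions -= p["tons_saved"]   # measure impact over time
                implemented.append(p["name"])
                remaining.remove(p)
        history.append((year, emissions, implemented))
    return history

projects = [
    {"name": "lighting retrofit", "cost": 50_000, "roi": 1.7, "tons_saved": 120},
    {"name": "server consolidation", "cost": 80_000, "roi": 2.0, "tons_saved": 300},
    {"name": "HVAC tune-up", "cost": 30_000, "roi": 0.9, "tons_saved": 60},
]
history = run_cycle(baseline_tons=10_000, projects=projects,
                    years=2, budget_per_year=100_000)
```

Note how lower-ROI projects deferred in one year get picked up in the next, which is the "lather, rinse, repeat" logic of the list.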
These “steps to sustainability” will look familiar to companies that are already taking the climate issue seriously, and they apply equally well to other environmental issues (for a brief CEO-level treatment offering practical advice on how businesses should respond to the climate issue, see the Harvard Business Press book Climate Change: What's Your Business Strategy?).
Institutional processes like these hold great promise for rapid and large-scale innovation because they can be easily replicated with the help of information technology, thus taking full advantage of increasing returns to scale. Consider CVS Pharmacy, for example, which distributes innovations in ordering and other processes to thousands of stores worldwide once they’ve been tested in a few stores, with terrific – and predictable – results.
Many companies use Six Sigma programs to institutionalize processes like the steps above. In that case the focus is broader than energy or emissions alone, but the idea is the same: assign cross-departmental teams to identify opportunities, give those teams responsibility for capturing the savings, and measure the results. The teams are then rewarded for the real savings they produce, and in general they keep finding more.
Another historical example comes from Dow Chemical in the 1980s and early 1990s. Ken Nelson, an engineer with Dow USA, created a contest among lower-level employees to root out waste and save energy. In the first year of the contest, employees found dozens of projects with a measured return on investment (ROI) of 173 percent per year, and over the dozen-year life of the contest, the projects saved $110 million per year, for an audited average ROI of about 200 percent. Those savings went straight to Dow’s bottom line, and they never petered out.
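As a back-of-the-envelope check on what those Dow figures imply: an average ROI of roughly 200 percent per year means each dollar invested returned about two dollars in annual savings, so $110 million per year in savings implies on the order of $55 million in cumulative project investment. A minimal sketch of that arithmetic:

```python
# Back-of-the-envelope check on the Dow contest figures (illustrative only).
# The savings and ROI values come from the text; the implied investment
# is a rough inference, not an audited number.
annual_savings = 110e6   # dollars per year in savings
average_roi = 2.0        # ~200 percent per year
implied_investment = annual_savings / average_roi
print(f"Implied cumulative investment: ${implied_investment / 1e6:.0f} million")
```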
The program stopped when Nelson retired, but it is the archetypal example of how opportunities for energy savings are a renewable resource.
Bypassing the Problem: Cloud Solutions
Sometimes the way to address institutional problems is to bypass them altogether. In almost all “in house” data centers – ones run by companies whose primary business is not computing – the budget for the facilities department, which is responsible for electricity and cooling, is separate from the budget for the IT department – the folks who buy the computers. That means the IT folks have no incentive to buy more energy-efficient servers, because the savings accrue to another department’s budget. The company as a whole loses in this case, because the total cost of delivering computing services is much higher than it needs to be.
Cloud computing providers (like Google, Microsoft, and Amazon) have fixed this problem in their own facilities, providing substantial cost savings in delivering computations for users. That’s why for certain kinds of computing, it no longer pays to use “in house” information technology.
When people think of Lawrence Berkeley National Laboratory, where I worked for more than two decades, they often think of huge supercomputers and Nobel Prize-winning scientists, but even that pinnacle of computing excellence decided in the last few years to shift its email, calendar, and other routine computing services to the cloud.
For those facing the split incentive I identified above, it’s usually much easier to contract for cloud services than to wrestle with the difficult internal institutional problems that lead to inefficiency in many of these facilities.
Hit the Budget: Assigning Responsibility, Designing Efficiency
Of course, that still leaves millions of dollars of wasted energy and capital on the table, so ultimately the best thing is to fix the root cause by assigning one person responsibility and authority for the whole data center, forcing the competing departments to operate under one budget, and making total cost to the company the ultimate arbiter of how things are done.
This way, when someone requests IT resources, they understand the full cost of their actions. In the best case, programmers are charged the total cost per computing cycle, which forces them to consider the cost of inefficient coding as well. That ideal is hard to achieve, but it’s the only way to ensure efficient use of computing resources, because people don’t think about efficient use of resources when those resources are ostensibly free.
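The chargeback idea amounts to folding the facilities budget into the rate users pay, so the price of a unit of compute reflects the whole cost to the company. A minimal sketch, with entirely hypothetical budget and usage figures:

```python
# Illustrative chargeback sketch: fold the facilities budget (power, cooling)
# into the IT budget so users see the full cost per unit of compute.
# All figures below are hypothetical.

def cost_per_core_hour(it_budget, facilities_budget, core_hours_delivered):
    """Full cost to the company per core-hour of compute delivered."""
    return (it_budget + facilities_budget) / core_hours_delivered

rate = cost_per_core_hour(it_budget=2_000_000,
                          facilities_budget=1_000_000,
                          core_hours_delivered=20_000_000)
```

With the facilities budget included, the rate here is 50 percent higher than the IT budget alone would suggest, which is exactly the cost signal that separate budgets hide.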
One company that has developed a new way to design efficiency into new leased facilities is Vantage Data Centers, in Santa Clara, Calif. I’ve visited their facilities twice and intensely quizzed their engineers. The key institutional innovation they’ve developed is what they call “collaborative design services,” where they work closely with the incoming tenant to make a facility that has “cooling, airflow, and power distribution overhead” of 15 to 35 percent of the total electricity used to run the computers. That compares favorably to overhead of 80 to 90 percent for typical “in house” data centers, and at the low end is similar to the overhead reported by the cloud computing companies like Google, Microsoft, Amazon, and Facebook, who lead the industry in reducing overhead.
Vantage’s collaborations with their customers are the closest thing I’ve seen to a whole systems integrated design process that involves two separate institutions. That’s not easy to do. Typically such processes are found in product design inside companies like Apple, where the designers have complete control, but it’s rare in the data center industry for separate companies to coordinate in this way in facility construction.
More typically, data centers are built in a “one-size-fits-all” approach, with bad consequences for the efficiency and cost effectiveness of those facilities.
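To see what the overhead figures above mean in energy terms: if "overhead" is cooling, airflow, and power-distribution energy expressed as a fraction of the electricity used by the computers themselves, then total facility energy is roughly the IT load times one plus that fraction (in industry terms, a PUE of about 1 + overhead). A minimal comparison, assuming an illustrative IT load of one million kWh per year:

```python
# Overhead as quoted = non-IT energy as a fraction of IT energy,
# so total facility energy = IT energy * (1 + overhead).
# The IT load below is illustrative; the overhead fractions come from the text.

def annual_facility_kwh(it_kwh, overhead_fraction):
    return it_kwh * (1 + overhead_fraction)

it_kwh = 1_000_000
efficient = annual_facility_kwh(it_kwh, 0.15)  # low end of the Vantage range
typical = annual_facility_kwh(it_kwh, 0.85)    # mid-range typical in-house facility
savings_fraction = (typical - efficient) / typical
```

For the same computing work, the efficient facility uses roughly 38 percent less total electricity than the typical in-house one, which is the gap that collaborative design (or a move to the cloud) closes.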
Next: Reasons for optimism.
Previous posts in this series:
- Using Information Technology To Change The Game
- Envisioning the Future We Want to Create
- Addressing the Underlying Drivers of Emissions Growth
- The Scope of the Problem
- So You Want to Solve the Climate Problem...
- Cold Cash, Cool Climate: Some Fundamentals
- Cold Cash, Cool Climate: Science-Based Advice for Ecological Entrepreneurs