Low Tech Mag
It is surprisingly difficult to build a carbon neutral sailing ship. This is even more the case today, because our standards for safety, health, hygiene, comfort, and convenience have changed profoundly since the Age of Sail.
On board the ship `Garthsnaid' at sea. A view from high up in the rigging. Image by Allan C. Green, circa 1920.
The sailing ship is a textbook example of sustainability. For at least 4,000 years, sailing ships have transported passengers and cargo across the world’s seas and oceans without using a single drop of fossil fuels. If we want to keep travelling and trading globally in a low carbon society, sailing ships are the obvious alternative to container ships, bulk carriers, and airplanes.
However, by definition, the sailing ship is not a carbon neutral technology. For most of history, sailing ships were built from wood, but back then whole forests were felled for ships, and those trees often did not grow back. In the late nineteenth and early twentieth century, sailing ships were increasingly made from steel, which also has a significant carbon footprint.
The carbon neutrality of sailing in the 21st century is even more elusive. That’s because we have changed profoundly since the Age of Sail. Compared to our forebears, we have higher demands in terms of safety, comfort, convenience, and cleanliness. These higher standards are difficult to achieve unless the ship also has a diesel engine and generator on board.
The revival of the sailing ship
The sailing ship has seen a modest revival in the last decade, especially for the transportation of cargo. In 2009, the Dutch company Fairtransport started shipping freight between Europe and the Americas with the Tres Hombres, a sailing ship built in 1943. The company remains active today and has had a second ship in service since 2015: the Nordlys (built in 1873).
Since then, others have joined the sail cargo business. In 2016, the German company Timbercoast started shipping cargo with the Avontuur, a ship built in 1920. In 2017, the French Blue Schooner Company started transporting cargo between Europe and the Americas with the Gallant, a sailing ship built in 1916. All these sailing ships were constructed in the nineteenth or twentieth century and restored at a later date. However, a revival of sail cannot rely on historical ships alone, because there are not enough of them.
The Noach, built in 1857.
At the moment, there are at least two sailing ships in development that are being built from scratch: the Ceiba and the EcoClipper500. The first ship is under construction in Costa Rica by a company named Sailcargo. She is built from wood and inspired by a Finnish ship from the twentieth century. The second ship is designed by a company called EcoClipper, which is led by one of the founders of the Dutch Fairtransport, Jorne Langelaan. Their EcoClipper500 is a steel replica of a Dutch clipper ship from 1857: the Noach.
“Old designs are not necessarily the best,” says Jorne Langelaan, “but whenever proven design is used, one can be sure of its performance. A new design is more of a gamble. Furthermore, in the 20th and 21st century, sailing technology developed for fast sailing yachts, which is an entirely different story compared to ships which need to be able to carry cargo.”
More economical sailing ships
These two ships – one under construction and one in the design phase – have the potential to make sail cargo a lot more economical than it is today. That’s because they have a much larger cargo capacity than the sailing ships currently in operation. As a ship becomes longer, her cargo capacity increases more than proportionally.
The EcoClipper500 is a full-scale replica of the Noach.
The 46 metre long Ceiba is powered by 580 m2 of sails and carries 250 tonnes of cargo. The 60 metre long EcoClipper500 is powered by almost 1,000 m2 of sails and takes 500 tonnes of cargo. For comparison, the Tres Hombres is not that much shorter at 32 metres, but she takes only 40 tonnes of cargo – twelve times less than the EcoClipper500. A larger ship is also faster and saves labour. The Tres Hombres requires a crew of seven, while the EcoClipper500 only has a slightly larger crew of twelve.
Life cycle analysis of a sailing ship
Although the EcoClipper500 is still in the design phase, she will be the focus of this article. This is because the company conducted a life cycle analysis of the ship prior to building it.  As far as I know, this is the first life cycle analysis of a sailing ship ever made. The study reveals that it takes around 1,200 tonnes of carbon to build the ship.
Half of those emissions are generated during steel production, and roughly one third is generated by steel working processes and other shipyard activities. Solvent-based paints as well as electric and electronic systems each account for roughly 5% of emissions. The emissions produced during the manufacturing of the sails are not included because there are no scientific data available, but a quick back-of-the-envelope calculation (for sails based on aramid fibres) suggests that their contribution to the total carbon footprint is very small.
If these 1,200 tonnes of emissions are spread out over an estimated lifetime of 50 years, then the EcoClipper500 would have a carbon footprint of about 2 grammes of CO2 per tonne-kilometre of cargo, concludes researcher Andrew Simons, who made the life cycle analysis for the ship. This is roughly five times less than the carbon footprint of a container ship (10 grammes CO2/tonne-km) and three times less than the carbon footprint of a bulk-carrier (6 grammes CO2/tonne-km). 
Looking aft from aloft on the 'Parma' while at anchor. Alan Villiers, 1932-33. Villiers's work vividly records the period of early 20th century maritime history when merchant sailing vessels or ‘tall ships’ were in rapid decline.
Transporting one ton of cargo over a distance of 8,000 km (roughly the distance between the Caribbean and the Netherlands) would thus produce 16 kg of carbon with the EcoClipper500, compared to 80 kg on a container ship and 48 kg on a bulk carrier. The proportions are similar for other environmental factors, such as ozone depletion, ecotoxicity, air pollution, and so on.
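These per-trip totals follow directly from multiplying the per-tonne-kilometre footprints by the distance. A minimal sketch in Python, using the figures quoted above:

```python
# CO2 footprints per tonne-kilometre, as quoted in the text (grammes).
FOOTPRINT_G_PER_TKM = {
    "EcoClipper500": 2,   # sailing ship (life cycle analysis)
    "container ship": 10,
    "bulk carrier": 6,
}

def trip_emissions_kg(ship: str, tonnes: float, km: float) -> float:
    """CO2 emissions in kg for carrying `tonnes` of cargo over `km`."""
    return FOOTPRINT_G_PER_TKM[ship] * tonnes * km / 1000

# One tonne from the Caribbean to the Netherlands (~8,000 km):
for ship in FOOTPRINT_G_PER_TKM:
    print(f"{ship}: {trip_emissions_kg(ship, tonnes=1, km=8000):.0f} kg CO2")
```

The same multiplication gives the 16, 80, and 48 kg figures mentioned in the text.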
Although the sailing ship boasts a convincing advantage, it may not be as big as you might have expected. First, as Simons explains, there’s scale. A container ship or bulk carrier enjoys the same benefits over the EcoClipper500 as the EcoClipper500 enjoys over the Tres Hombres. It can take a lot more cargo – on average 50,000 tonnes instead of 500 tonnes – and it needs only a slightly larger crew of 20-25 people. 
Second, fossil fuel powered ships are faster than sailing ships, meaning that fewer ships are needed to transport a given amount of cargo over a given period of time. The original ship on which the EcoClipper500 is based sailed between the Netherlands and Indonesia in 65 to 78 days, while a container ship does it in about half the time (taking the short cut through the Suez Canal).
Building a fleet of sailing ships
There are two ways to further lower the carbon emissions of sailing ships in comparison to container ships and bulk carriers. One is to build ships from wood instead of steel, as with the Ceiba. If the harvested trees are allowed to grow back (which the makers of the Ceiba have promised), such a ship may even be considered a carbon sink.
However, there’s a good reason why the EcoClipper500 will be made from steel: the company’s aim is to build not just one ship, but a fleet of them. Jorne Langelaan: “There are few shipyards who can deliver wooden ships nowadays. Steel makes it easier to build a fleet in a shorter period.”
A possible compromise would be a composite construction, in which a steel skeleton is clad with timber keel, planks, and deck. Andrew Simons: “This would reduce the carbon footprint of construction by half. It could also be feasible to make superstructures and some of the mast sections and spars from timber instead of steel.”
Driving sprays over the main deck of the 'Parma'. Alan Villiers, 1932-33.
Looking to the future, another possibility to further decrease a sailing ship’s emissions per tonne-km is to build it even larger. While the EcoClipper500 has much more cargo capacity than the cargo sailing ships now in operation, she is far from the largest sailing ship ever built.
Historical ships such as the Great Republic (5,000 tonnes), the Parma (5,300 tonnes), the France II (7,300 tonnes), and the Preussen (7,800 tonnes) were more than 100 metres long and had more than ten times the freight capacity of the EcoClipper500. Langelaan already dreams of an EcoClipper3000.
Passengers
Most cargo sailing ships travelling across the oceans today can also take some passengers. Fully loaded with cargo, the EcoClipper500 takes 12 crew members, 12 passengers, and 8 trainees (passengers who learn how to sail). If the upper hold deck is not used for cargo, another 28 trainees can join, so that the ship can take up to 60 people on board (with a smaller cargo volume: 480 m3 instead of 880 m3).
Consequently, and since ocean liners have disappeared, the EcoClipper500 also becomes an alternative to the airplane. According to the results of the life cycle analysis, the carbon footprint for passengers on the EcoClipper500 amounts to 10 grammes per passenger-kilometre, compared to roughly 100 grammes per passenger-kilometre on an airplane. Transporting one passenger thus produces as much carbon emissions as transporting five tonnes of freight.
Engine or not?
Importantly, the life cycle analysis of the EcoClipper500 assumes that there is no diesel engine on board. On a sailing ship, a diesel engine can serve two purposes, which can be combined. First, it can propel the ship when there is no wind or when sails cannot be used, for example when leaving or entering a harbour. Second, combined with a generator, a diesel engine can produce electricity for daily life on board the ship.
For most of history, energy use on board a sailing ship was not too problematic. There was firewood for cooking and heating, and there were candles and oil lamps for lighting. There were no refrigerators for food storage, no showers or laundry machines for washing and cleaning, no electronic instruments for navigation and communication, and no electric pumps in case of leaks or fire.
However, we now have higher standards in terms of safety, health, hygiene, thermal comfort, and convenience. The problem is that these higher standards are difficult to achieve when the ship does not have an engine that runs on fossil fuels. Modern heating systems, cooking devices, hot water boilers, refrigerators, freezers, lighting, safety equipment, and electronic instruments all need energy to work.
Crewman of the 'Parma' with a model of his ship. Alan Villiers, 1932-33.
Modern sailing ships often use a diesel engine to provide that energy (and to propel the ship if necessary). An example is the Avontuur from Timbercoast, which has a 300 HP engine, a 20 kW generator, and a fuel tank of 2,330 litres. Large sail training vessels and cruising ships have several engines and generators on board. For example, the 48m long Brig Morningster has a 450 HP engine and three generators with a total capacity of 100 kW, while the 56m long Bark Europa has two 365 HP engines with three generators – and burns hundreds of litres of oil per day.
Obviously, the emissions and other pollutants of these engines need to be taken into account when the environmental footprint of a sail trip is calculated. Depending on the lifestyle of the people on board, the emissions per passenger-km may rise to, or surpass, those of an airplane. To a lesser extent, electricity use on board also increases the emissions of cargo transportation.
Energy use on board a sailing ship
The EcoClipper500 has no diesel engine on board, which is a second reason to focus on this ship. Obviously, a sailing ship without an engine cannot continue her voyage when there’s no wind. This is easily solved in the old-fashioned way: the EcoClipper500 stays where she is until the wind returns. A ship without an engine also needs tug boats – which usually burn fossil fuels – to get in and out of ports. For the EcoClipper500, these tug services account for 0.3 g/tkm of the total carbon footprint of 2 g/tkm.
Without a diesel engine, the ship also needs to generate all energy for use on board from local energy sources, and this is the hard part. Renewable energy is intermittent and has low power density compared to fossil fuels, meaning that more space is needed to generate a given amount of power – which is more problematic at sea than it is on land.
Renewing caulking on the poop of the 'Parma'. Alan Villiers, 1932-33.
To make the EcoClipper500 self-sufficient in terms of energy use, a first design decision was to shift energy use away from electricity whenever possible. This is especially important for high temperature heat, which cannot be supplied by electric heat pumps. The ship will have a pellet-stove on board to provide space heating, as well as a biodigester – never before used on a ship – to convert human and kitchen waste into gas for cooking. Thermal insulation of the ship is another priority.
Nevertheless, even with the pellet-stove and biodigester (which themselves require electricity to operate), and with thermal insulation, energy demand on the ship can be as high as 50 kilowatt-hours of electricity per day (2 kW average power use). This concerns a “worst-case normal operation” scenario, when the ship is sailing in cold weather with 60 people on board. Power use will be lower in warmer weather and/or when fewer people are taken. During an emergency, the power requirement can amount to 8 kW, and more than 24 kWh of energy can be needed in just three hours.
Hydrogenerators
How to produce this power? Solar panels and wind turbines are only a small part of the solution. Producing 50 kWh of energy per day would require at least 100 square metres of solar panels, for which there is little space on a 60 m long sailing ship. Vulnerability and shading by the sails create further problems. Wind turbines can be attached in the rigging, but their power output is also limited. The low potential of solar and wind power is demonstrated by the earlier mentioned sailing ship Avontuur. She has a 20 kW generator, powered by the diesel engine, but only 2.1 kW of solar panels and 0.8 kW of wind turbines.
The hydrogenerator is the only renewable power source that can provide a large sailing ship with enough energy for the use of modern technology on board. Hydrogenerators are attached underneath the hull and work in the opposite way to a ship’s propeller. Instead of the propeller powering the ship, the ship powers the propeller, which turns a generator that produces electricity. In spite of its name and appearance, the hydrogenerator is actually a form of wind energy: the sails power the propellers. Obviously, this only works when the ship is sailing fast enough.
Furling sail on the main yard of the Parma. Alan Villiers, 1932-33.
The EcoClipper500 will be equipped with two large hydrogenerators, for which Simons calculated the power output at different speeds, taking into account the fact that the extra drag they produce slows down the ship somewhat. He concludes that the EcoClipper500 needs to sail at a speed of at least 7.5 knots to generate enough electricity. At that speed, the hydrogenerators produce an estimated 2,000 watts of power, which converts to roughly 50 kWh of electricity per day (24 hours of sailing).
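The figures in this section all follow from the same watts-to-kilowatt-hours conversion. A small sketch, using the study's quoted outputs at 4.75 and 7.5 knots; the wattages at 10 and 12 knots are my own back-calculation from the daily energy figures quoted in the next paragraph:

```python
# Convert a constant hydrogenerator output in watts to energy per day.
DAILY_DEMAND_KWH = 50  # worst-case normal operation, as quoted above

def daily_energy_kwh(power_watts: float, hours: float = 24.0) -> float:
    """Energy in kWh delivered over `hours` of sailing at a constant output."""
    return power_watts * hours / 1000

# 350 W and 2,000 W are from the study; 5,000 W and ~7,580 W are
# back-calculated from the quoted 120 and 182 kWh/day figures.
for knots, watts in [(4.75, 350), (7.5, 2000), (10, 5000), (12, 7583)]:
    kwh = daily_energy_kwh(watts)
    status = "surplus" if kwh >= DAILY_DEMAND_KWH else "deficit"
    print(f"{knots:>5} knots: {kwh:6.1f} kWh/day ({status})")
```

At 2,000 W the generators deliver 48 kWh over 24 hours of sailing, roughly the 50 kWh worst-case demand.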
At a lower speed of 4.75 knots, the generators produce 350 watts, which comes down to 8.4 kWh of energy over a period of 24 hours – only one sixth of the maximum required energy. On the other hand, at higher speeds, the hydrogenerators produce more energy than necessary. At a speed of almost 10 knots they provide 120 kWh/day, and at a speed of 12 knots this becomes 182 kWh/day – 3.5 times more than needed.
Saltwater batteries
Based on her hull speed, the EcoClipper500 will be able to sail a little over 16 knots at absolute top speed – double the minimum speed required to generate enough power. Achieving this speed will be rare, because it requires calm seas and strong winds from the right direction. Nevertheless, in good wind conditions, the ship easily sails fast enough to produce all the electricity needed on board.
Good wind conditions can last for days, especially on the oceans, where winds are more powerful and predictable than on land. However, they are not guaranteed, and the ship will also sail at lower speeds, or find herself in becalmed conditions – when hydrogenerators are as useless as solar panels in the middle of the night.
Because she has no engine, the EcoClipper500 faces a double problem when there’s no wind: she cannot continue her voyage, and she has no energy to maintain life on board. The first problem is easily solved but the second is not. Life on board goes on, and so there is a continued need for power. To provide this, the ship needs energy storage.
To cover the needs of three days of drifting in cold weather, a storage capacity of 150 kWh would be required, not taking into account charge and discharge losses. Five to seven days of energy use on board would require 250 to 350 kWh of storage. For emergency use, another 25 kWh of energy storage is needed.
Scraping the deck onboard the 'Parma'. Alan Villiers, 1932-33.
Not having an engine, generator and fuel tank saves space on board, but this advantage can be quickly lost again when one starts to add batteries for the hydrogenerators. Lithium-ion batteries are very compact, but they cannot be considered sustainable and bring safety risks. That’s why Jorne Langelaan and Andrew Simons see more potential in – very aptly – saltwater batteries, which are non-flammable, non-toxic, easy to recycle, have wide temperature-tolerance, and can last for more than 15 years. Like the biodigester, they have never been used on a sailing ship before.
Unlike lithium-ion batteries, saltwater batteries are large and heavy. At 60 kg per kWh of storage capacity, a 150 kWh battery storage would add a weight of 9 tonnes, while a 350 kWh storage capacity would add 21 tonnes. Still, this compares favourably to the total cargo capacity (500 tonnes), and the batteries can serve as ballast if they are placed in the lower part of the ship’s hull. The space requirements are not too problematic, either. Even a 350 kWh energy storage only requires 14 to 29m3 of space, which is small compared to the 880m3 of cargo volume.
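The storage and weight figures above reduce to two multiplications: days of autonomy times the 50 kWh daily demand, and capacity times the 60 kg/kWh specific weight quoted for saltwater batteries. A quick sketch:

```python
DAILY_DEMAND_KWH = 50      # worst-case normal operation, as quoted above
SALTWATER_KG_PER_KWH = 60  # specific weight quoted for saltwater batteries

def storage_needed_kwh(days_adrift: float) -> float:
    """Storage to cover `days_adrift` of becalmed drifting (losses ignored)."""
    return days_adrift * DAILY_DEMAND_KWH

def battery_weight_tonnes(capacity_kwh: float) -> float:
    """Weight of a saltwater battery bank of the given capacity."""
    return capacity_kwh * SALTWATER_KG_PER_KWH / 1000

for days in (3, 5, 7):
    cap = storage_needed_kwh(days)
    print(f"{days} days adrift: {cap:.0f} kWh "
          f"-> {battery_weight_tonnes(cap):.0f} t of batteries")
```

Three days of autonomy works out to 150 kWh and 9 tonnes; seven days to 350 kWh and 21 tonnes, as stated above.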
The emissions produced by the manufacturing of the hydrogenerators, biodigester, and batteries are not included in the life cycle analysis of the ship, because there are no data available. However, these emissions must be relatively small. Hydrogenerators have a much higher power density than wind turbines, and thus a relatively low embodied energy. A quick back-of-the-envelope calculation suggests that the carbon footprint of 350 kWh of saltwater batteries is around 70 tonnes of CO2.
Human Power
There’s another renewable power source and energy storage on board the EcoClipper, and that’s the humans themselves. Like the pellet stove and the biodigester, the use of human power could reduce the need for electricity. Nowadays, cargo ships and most large sailing ships have electric or hydraulic winches, pumps, and steering gear, saving manual labour at the expense of higher energy use. In contrast, EcoClipper sticks to manual handling of the ship as much as possible.
Crew at the capstan of the Parma, weighing anchor. Alan Villiers, 1932-33.
Simons and Langelaan are also considering the addition of a few rowing machines, coupled to generators, to produce emergency power. Two rowing machines could provide roughly 400 watts of power. If they are operated around the clock in shifts, they could supply the ship with an extra 9.6 kWh of energy per day (ignoring energy losses) – one fifth of the total maximum electricity use.
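The rowing-machine arithmetic works the same way (200 W per machine is implied by the 400 W quoted for two machines):

```python
WATTS_PER_MACHINE = 200  # implied by the text: two machines give ~400 W

def rowing_energy_kwh(machines: int, hours: float = 24.0) -> float:
    """kWh from `machines` rowing machines crewed continuously in shifts,
    ignoring energy conversion losses."""
    return machines * WATTS_PER_MACHINE * hours / 1000

print(rowing_energy_kwh(2))   # 9.6 kWh/day, one fifth of the 50 kWh worst case
print(rowing_energy_kwh(10))  # 48 kWh/day, matching the hydrogenerators at 7.5 knots
```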
In fact, as I tell Simons and Langelaan, ten rowing machines operated continually in shifts would provide as much power as the hydrogenerators at a speed of 7.5 knots. If there are 60 people on board, and everybody would generate power for about four hours per day, no hydrogenerators and batteries would be needed at all. “A very interesting thought”, answers Simons, “but what impression would we be painted with?”
Hot Showers?
Even with a biodigester, hydrogenerators, batteries, and rowing machines, life for the passengers and crew on board the EcoClipper500 would be far from luxurious, and perhaps too short of comfortable for some. For example, if the 60 people on board the ship took a daily hot shower – which requires on average 2.1 kilowatt-hours of energy and 76.5 litres of water on land – total electricity use per day would be 126 kWh, more than double the energy the ship produces at a speed of 7.5 knots.
The ship could supply this energy at a higher sailing speed, but there would also be a need for 4,590 litres of water per day, a quantity that could only be produced from seawater – a process that requires a lot of energy. Even a crew of 12 taking a daily hot shower would require 25.2 kWh of energy per day, half of what the hydrogenerators produce at a sailing speed of 7.5 knots. The Bark Europa is the only sailing ship mentioned in this article that has hot showers in every (shared) cabin, but it is also the ship with the biggest generators and the highest fuel use.
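The shower figures above are straightforward multiplications of the per-shower averages. A sketch, treating the 2.1 kWh and 76.5 litre land-based averages as givens:

```python
KWH_PER_SHOWER = 2.1      # average energy for one hot shower (land-based figure)
LITRES_PER_SHOWER = 76.5  # average water use per shower (land-based figure)

def daily_shower_load(people: int) -> tuple[float, float]:
    """(kWh, litres) per day if everyone on board showers once daily."""
    return people * KWH_PER_SHOWER, people * LITRES_PER_SHOWER

kwh, litres = daily_shower_load(60)
print(f"60 people: {kwh:.0f} kWh, {litres:.0f} litres per day")
kwh, litres = daily_shower_load(12)
print(f"12 crew:   {kwh:.1f} kWh, {litres:.0f} litres per day")
```

Sixty daily showers need roughly 126 kWh and 4,590 litres; even a crew of 12 needs about 25 kWh per day.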
On the forecastle head of the Parma in fine weather. Image by Alan Villiers, 1932.
Andrew Simons: “On the EcoClipper500 there needs to be a manageable compromise between energy use and comfort. Energy use on board will have to be actively managed. Resources are finite, just like for the planet. In many ways the ship is a microcosm of challenges that the wider world has to face and find solutions to.”
Jorne Langelaan: “At sea you are in a different world. It doesn’t matter anymore if you can take a daily shower or not. What matters are the people, the movements of the ship, and the vast wilderness of ocean around you.”
Measuring the right things
This article has compared the EcoClipper500 sailing ship with the average container ship, bulk carrier, and airplane in terms of emissions per tonne- or passenger-kilometre. However, these values are abstractions that obscure much more important information: the total emissions that are produced by all passengers and all cargo, over all kilometres.
The international ocean freight trade increased from 4 billion tonnes of cargo in 1990 to 11.2 billion tonnes in 2019, resulting in more than 1 billion tonnes of emissions. International air passenger numbers grew from 1 billion in 1990 to 4.5 billion in 2019, resulting in 915 million tonnes of emissions. Consequently, lowering the emissions per tonne- and passenger-kilometre is neither a necessity nor a guarantee for a reduction in emissions.
If we cut international cargo traffic more than fivefold, and passenger traffic more than tenfold, then the emissions of all container ships and airplanes would be lower than the emissions of all sailing ships carrying 11.2 billion tonnes of cargo and 4.5 billion passengers. Vice versa, if we switch to sailing ships, but keep on transporting more and more cargo and passengers across the planet, we will eventually produce just as many emissions as we do today with fossil fuel powered transportation.
The mizzen of the 'Grace Harwar'; view aft from the main crosstrees. Alan Villiers, 1932-33.
Of course, none of this would ever happen. The amount of cargo that was traded across the oceans in 2019 equals the freight capacity of 22.4 million EcoClippers. Assuming the EcoClipper500 can make 2-3 trips per year, we would need to build and operate at least 7.5 million ships, with a total crew of at least 90 million people. Those ships could only take 0.5 billion passengers (12 passengers and 8 trainees per ship), so we would need millions of ships and crew members more to replace international air traffic.
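The fleet estimate can be reproduced in a few lines. A sketch, assuming 500 tonnes per trip and the upper end of the 2-3 trips per year estimate (which yields the minimum fleet size):

```python
# Rough fleet sizing: how many EcoClipper500s would be needed to move
# the 2019 volume of international ocean freight under sail.
CARGO_2019_TONNES = 11.2e9    # international ocean freight, 2019
CAPACITY_TONNES = 500         # per EcoClipper500 trip
TRIPS_PER_YEAR = 3            # upper end of the 2-3 trips estimate
CREW_PER_SHIP = 12
PASSENGERS_PER_TRIP = 12 + 8  # passengers plus trainees

loads = CARGO_2019_TONNES / CAPACITY_TONNES  # full shiploads per year
ships = loads / TRIPS_PER_YEAR               # minimum fleet size
crew = ships * CREW_PER_SHIP
passengers = ships * PASSENGERS_PER_TRIP * TRIPS_PER_YEAR

print(f"{loads / 1e6:.1f} million loads per year")
print(f"{ships / 1e6:.1f} million ships, {crew / 1e6:.0f} million crew")
print(f"{passengers / 1e9:.2f} billion passengers per year")
```

With three trips per ship per year, this gives the 22.4 million loads, roughly 7.5 million ships, 90 million crew, and about half a billion passengers mentioned above.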
All of this is technically possible, and as we have seen, it would produce lower emissions than the present alternatives. However, it’s more likely that a switch to sailing ships would be accompanied by a decrease in cargo and passenger traffic, and this has everything to do with scale and speed. A lot of freight and passengers would not be travelling if it were not for the high speeds and low costs of today’s airplanes and container ships.
It would make little sense to transport iPhone parts, Amazon wares, sweatshop clothes, or city trippers with sailing ships. A sailing ship is more than a technical means of transportation: it implies another view on consumption, production, time, space, leisure, and travel. For example, a lot of freight now travels in different directions for each successive processing stage before it is delivered as a final product. In contrast, all sail cargo companies mentioned in this article only take cargo that cannot be produced locally, and which makes a single trip from producer to consumer.
This also means that even if sailing ships have diesel engines on board, they would still bring a significant decrease in the total emissions for freight and passenger traffic, simply because they would reduce the absolute number of passengers, cargo, and kilometers. We should not be fooled by abstract relative measurements, which only serve to keep the focus on growth and efficiency.
Kris De Decker
- Support Low-tech Magazine via Paypal or Patreon.
- Subscribe to our newsletter.
- Buy the printed website.
Between 1978 and 2004, the Avontuur was operated as a sail cargo vessel under Captain Paul Wahlen. The Apollonia, originally built in 1946, is another cargo sailing ship, in operation since 2014. It is 19.5 metres long and carries 10 tonnes of cargo.
Very recently, Grain de Sail was built and launched for trans-Atlantic shipping of wine and cocoa. She is a modern sailing ship without an engine, built from aluminium, and can take 35 tonnes of cargo.
Andrew Simons: “There are plenty of historical sailing ships, but they are either very costly to get into service as a regulation-compliant cargo vessel, still used for other purposes, or not suitable.”
 Unfortunately the envelope got lost.
 In the case of the EcoClipper, most of the emissions are produced during the construction of the ship, while in the case of bulk carriers and container ships, they are mainly produced during operation and fuel production.
 The largest container ships now take 190,000 tonnes of cargo.
 There is not much data available on saltwater batteries, but they are less energy-intensive to build than many other types of batteries. The calculation is based on an estimate of 66 kg CO2/kWh of storage capacity and three generations of batteries over a period of 50 years.
Almost one third of all cargo transported consists of fossil fuels themselves.
 The study can be downloaded when you subscribe to EcoClipper’s newsletter. The research is based on a typical life cycle analysis, but note that this is not a peer reviewed study.
In the mid 20th century, whole cities' sewage systems safely and successfully used fish to treat and purify their water. Waste-fed fish ponds are a low-tech, cheap, and sustainable alternative to deal with our own shit -- and to obtain high protein food in the process.
After we eat and drink, we excrete into toilets, which use water to flush our effluent into municipal sewage systems. By and large, the resulting sewage is either untreated, or treated in different kinds of wastewater treatment plants, the most advanced of which are expensive to run and have high energy demands. 
But even if sewage is treated, effluent is still high in levels of nitrogen, phosphorus, dissolved oxygen, and biological matter—essential nutrients for life on Earth. This causes eutrophication. The high levels of these nutrients lead to algal blooms, which in turn may produce toxins, leading to mass fish deaths and biodiversity loss in rivers, lakes, and oceans.
Essentially, the core of the issue is that rather than nutrients being recycled, as occurs in most ecosystems, it’s a one-way flow. Fixing these problems by, for example, making water use more efficient, or using more energy-intensive sewage treatment plants, doesn’t solve the root of the problem: the nutrient cycle is leaky. And you can’t fix a leaking sink by changing the amount or kind of water you use.
Too much of a good thing
If we want to fix the leaking sink, we need to move away from the idea that human waste is inherently toxic, or that human activity is always bad for the environment. This way of thinking is grounded in the assumption that humans are somehow separated from nature. The logical conclusion of this assumption, then, is to separate us from natural cycles even more: building more refined, chemically and energy-intensive sewage treatment, building hard boundaries between food production and watersheds, and, failing that, using large-scale geoengineering experiments to clean our rivers.
But the main issue here is not that we are somehow toxic and so a burden to our environment. It’s that the nutrients we are releasing into the environment are too highly concentrated. This is especially the case when it comes to the “problem” of eutrophication. Caused by high-nutrient wastewater and agricultural run-off, it is generally thought of as a bad thing. But consider the Greek root of the word: “becoming well fed.”
Eutrophication is only bad because good nutrients like nitrogen, carbon, and phosphorus, necessary for the majority of biotic life, are too concentrated—causing rapid algal growth, leading to too little oxygen in the water, as well as too many toxins produced by algae, both of which are deadly to fish. However, fish eat algae, so if algal growth were slowed down a bit, fish populations would multiply instead. The problem is not that wastewater is polluted, but that there is too much of a good thing, too highly concentrated for the ecosystem to absorb.
How to fix a leaking sink
I first learned about the system of treating sewage through aquaculture when I lived in Hanoi. There, I found out that it’s actually very common, especially in poor agricultural communities, to reuse human waste for production.
This basic idea can also be brought to scale. During the communist period in China, many fish farmers had limited access to fish feed, and local state cooperatives started organizing human waste collection systems. Eventually, in many Chinese cities, up to the 1990s, trucks and boats collected human manure—some run by the state and some clandestine, illegal operations—and transported it to aquaculture operations on peri-urban land. From 1952 to 1966, about a third of fertilizers (which includes fish feed) used in China came from night soil, and by 1966, 90% of excreta were recycled. Incidentally, today, massive seaweed production off the coast of China has likely greatly reduced the likelihood of eutrophication—an accidental form of bio-remediation and nutrient recycling.
Image: Sewage water being pumped into a fish pond in the outskirts of Hanoi, Vietnam. Source: Edwards, 2005. 
Image: Wastewater after treatment in fishponds, Hanoi. Source: Edwards, 1996. 
One interesting large-scale example is the system that emerged on the outskirts of Hanoi in the 1960s. Hanoi, the capital of the newly independent communist nation, fighting a drawn-out war against Western occupying forces, had no municipal wastewater treatment. Sewage was led out into two rivers, which flowed south and eventually merged with the Red River. During the communist period of collectivization of farmland, Vietnamese farmer cooperatives were excluded from the international market and so often used whatever resources were available to them to feed fish, such as slaughterhouse wastes or spoiled grains. Seeing the untreated wastewater in the canals—a resource out of place—farmers started pumping it into large ponds. After trial and error, and after investing the little they had in infrastructural improvements, they determined the right sewage-to-freshwater ratio to dilute the wastewater enough so the fish wouldn’t die. They also let the untreated sewage water sit in primary and secondary ponds before mixing it into fish ponds, effectively killing harmful pathogens, allowing large solids to settle, and further promoting algal growth.

Image: A local retail fish market in Yen So commune. Anders Dalsgaard. Source: Thi Phong Lan, Nguyen, et al. "Microbiological quality of fish grown in wastewater-fed and non-wastewater-fed fishponds in Hanoi, Vietnam: influence of hygiene practices in local retail markets." Journal of Water and Health 5.2 (2007): 209-218.
Farmers also grew plants such as duckweed and water hyacinth on the water and on its banks, which could then be fed to livestock—and had the dual benefit of drawing out heavy metals from the water. They also practiced fish polyculture, where species like catfish, carps, and tilapia were farmed together, and thus were more effective in cleaning the water and protecting small fry from predators. Every year, the ponds were drained, and the sludge at the bottom was then applied to nearby fields, further reusing available nutrients.
Eventually, these farmers developed a system that, by 1995, provided 40-50% of Hanoi’s total fish supply every year. Scientific measurements showed that the water from the fish ponds, when pumped back into the river, was well below the World Health Organisation’s recommended level for biological oxygen demand—an indicator used to assess the efficiency of water treatment systems. Essentially, they had created a water treatment plant for a city of 1.5 million people, at almost no cost to the state.

A “low-cost folk technology” serving an entire city
You might be thinking: sure, this is one example of an interesting, but ultimately doomed, alternative to wastewater treatment. It is an aberration, and couldn't possibly be maintained for long. Unfortunately for your internal cynic, it actually can be. The city of Kolkata (formerly Calcutta), India—population 14.8 million—has the largest sewage-fed aquaculture system in the world. Though farmers had been using sewage to feed fish in different ways since the 19th century, the system became more developed starting in the 1940s.
Image: Fish ponds in the East Kolkata Wetlands, the largest sewage-fed aquaculture system in the world today. Source: iStock.
During the British colonial period, administrators built a series of canals through the city that functioned as its sewers. These let out into the Bidyadhari River. However, this river quickly silted up and became unusable. As a result, an adjacent wetland area transformed from tidal salt marshes to primarily freshwater marshes. Two sewage canals were then built in 1940 to further extend the city’s effluent to the ocean. It was at this point that local farmers started rerouting the sewage water into fish ponds in the former salt marshes, growing vegetables on the banks of the sewage canals, and forming cooperatives to manage the wastewater.
Though the Kolkata system was developed over time, it is quite systematic. Every year, ponds are first drained and sludge is applied to fields. Sewage water is fed into the pond slowly at low depth and allowed to sit for two weeks. This basically mimics conventional sewage treatment systems: sewage is first treated by stimulating algal and bacterial growth, solids are left to settle, and most parasites are killed, because their eggs and worms die if they don’t find a host within two weeks. Then, fish are stocked in another pond, and sewage water is slowly introduced into it at a sewage-to-water ratio of 1:4. All of this requires skill and knowledge developed over generations, allowing farmers to know when oxygen levels are too low, which could kill the fish. The resulting effluent can reach the water quality of conventional treatment.
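The mixing arithmetic behind the 1:4 ratio can be sketched as follows. Only the ratio itself comes from the description above; the pond volume and the helper function are made-up illustrations, not part of the Kolkata farmers' practice.

```python
# Sketch of the sewage-to-water mixing arithmetic described above.
# Only the 1:4 ratio comes from the text; the pond volume is an
# arbitrary example and the function name is hypothetical.

def max_sewage_volume(pond_volume_m3, sewage_to_water_ratio=(1, 4)):
    """Volume of (pre-settled) sewage a pond can take at the given
    sewage:freshwater ratio."""
    sewage, water = sewage_to_water_ratio
    return pond_volume_m3 * sewage / (sewage + water)

# A hypothetical 10,000 m3 stocking pond can absorb 2,000 m3 of sewage,
# with the remaining 8,000 m3 being freshwater:
print(max_sewage_volume(10_000))  # 2000.0
```

At a 1:4 ratio, sewage makes up one fifth of the pond volume, which is why dilution alone already brings nutrient concentrations down dramatically before the fish and algae do the rest.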
Image: A sluice gate made of bamboo at the Eastern Kolkata Wetlands. Water hyacinth is grown to help purify the water and to feed livestock. Source: Mukherjee, 2020. 
Image: Every year, the ponds are drained, and the sludge at the bottom is applied to nearby fields, further reusing available nutrients. Source: Take pride in the East Kolkata Wetlands (Facebook-page).
Through trial and error and good judgement, local farmers have developed a wastewater treatment system that is extremely efficient and adaptive to local conditions. They can distinguish the kind of effluent—industrial or domestic—through the hues it gives off, and will control or dilute it when necessary. For example, sewage from tanneries can be toxic to fish, so they will not use it. They vary water levels according to season, weather, and available quantities of effluent. They know the hue of greenish-black the water needs to be to have an optimal oxygen and ammonia level for fish. They can tell whether there is too little oxygen by paying attention to the degree to which fish come up to the surface to gulp air. Farmers harvest snails in the water to protect fish growth; the snails are then crushed and fed to ducks, whose droppings in turn fertilize fish ponds and nearby soils. They plant water hyacinths and duckweed to absorb heavy metals from the sewage water. The Kolkata fish farms provide 8,000 tons of fish per year to the city, or 40% of the region’s fish production. The system processes 80% of the city’s sewage, and reduces the nutrient and organic loads of the city’s sewage water by 50-90%, while keeping bacterial loads to an acceptable level under WHO guidelines. It is calculated to save the city an equivalent of $64,400,000 per year in sewage treatment costs—making Kolkata an “ecologically subsidized city”. The system provides farmers a return on investment of 28% and provides 200,000 people with a livelihood. While profit shouldn’t itself be the goal of this system—it’s a public service, after all—it certainly helps to defray the costs of wastewater treatment.
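A quick sanity check on these figures: only the 8,000 tons, the 40% share, and the $64.4 million in savings come from the sources cited above; the derived numbers are simple arithmetic.

```python
# Sanity-checking the Kolkata figures quoted above.
fish_from_wetlands_t = 8_000          # tons of fish per year, from the text
share_of_regional_production = 0.40   # 40% of regional production, from the text
savings_usd_per_year = 64_400_000     # avoided treatment costs, from the text

# Implied total regional fish production:
total_regional_t = fish_from_wetlands_t / share_of_regional_production
print(total_regional_t)               # 20000.0 tons per year

# Avoided treatment cost per ton of fish produced in the wetlands:
print(round(savings_usd_per_year / fish_from_wetlands_t))  # 8050 USD per ton
```

In other words, every ton of fish the wetlands produce comes bundled with roughly $8,000 of sewage treatment the city never has to pay for.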
In a small municipality in Karnal, northern India, one study showed that municipal sewage-fed fish ponds, installed in the 2010s, provided over $25,000 of net profit per year to the municipality, as well as indirect benefits such as improving nearby soils through the sale of treated wastewater to farmers.

Image: Waste-fed fish ponds provide steady sources of protein for small-holder farmers. Source: Fish Farming in the East Kolkata Wetlands, Ramble On, Priya Mallic.

Image: Fish harvested from the East Kolkata Wetlands. Source: Fish Farming in the East Kolkata Wetlands, Ramble On, Priya Mallic.

Still, when introduced into small rural communities, the benefits extend far beyond monetary profit, to the social, cultural, and ecological services provided by the fish ponds. These include improving soil quality, helping local communities adapt to climate change, providing leisure (e.g. fishing with friends), and giving small-holder farmers a steady source of protein. For example, even if they don’t sell the fish, a small sewage-fed fish pond can provide a family of six with 8 kg of fish, per person, per year—a significant increase in protein intake for many rural communities. In the case of the East Kolkata Wetlands, the fish ponds also help to recharge the groundwater—a serious issue in India, where many aquifers are nearing depletion. Kolkata’s wetlands are a “low-cost folk technology” treating the majority of the sewage of a city with a population the size of New York’s. This is made possible through the development of a vast human-fish-plant ecosystem, a city-scale wastewater treatment plant that emerged through the creativity, ecological knowledge, and direction of local farming communities.

Over 90 systems in Germany in the early 20th century

By this point, your internal cynic might have come up with another counter-argument: sure, so it works at scale. But you would have to be pretty desperate, and poor, to stoop down to farming fish in sewage water.
While it might work in India, and worked for a while in Vietnam and China, it would never work in developed countries, where there are higher sanitation standards, and where no one would want to eat fish farmed in sewage anyway.

Image: A view of the former sewage-fed aquaculture system in Munich, Germany, today a bird sanctuary. Photo: Peter Schleypen, 2012. Source: Historisches Lexikon Bayerns.

You may be surprised to learn that, in fact, over 90 such systems existed in Germany in the early 20th century. Up until the 1990s, the city of Munich still processed most of its wastewater through fish farming. Indeed, Germany pioneered some of the most detailed and rigorous scientific investigation into the large-scale viability of sewage-fed fish ponds, as early as the 1890s. Like in China, wastewater-fed fish ponds have a long but unappreciated history in Europe: castle moats, monasteries, and villages often had them. As cities grew rapidly in the 19th century, untreated wastewater was simply flushed into rivers, leading to the collapse of fisheries across Europe as well as generally unsanitary conditions and the spread of disease. There was a growing recognition that sewage should be treated; one common indicator of an adequate treatment method was that trout could live in the treated water. As a result, civil engineers and scientists constructed small fish ponds to test the quality of municipal sewage treatment plants. Gustav Oesten, a civil engineer charged with wastewater treatment in Berlin, began to experiment in the late 1880s with using fish to treat wastewater, and to harvest fish as a secondary product of sewage treatment. He spent the better part of a decade conducting experiments with different fish species, designs for ponds, and various local and weather conditions.
Image: Feed channel for the fish ponds of the Munich sewage-fed aquaculture system. Image by Bjs (CC BY-SA 3.0), Wikimedia Commons.

Through these experiments, he showed conclusively that fish growth accelerates in sewage water, and that fish in turn help purify sewage water. Trout were not well suited to this purpose, because they cannot tolerate water with low oxygen levels—common in wastewater systems, where decomposing organic matter and the respiration of rapidly growing algae deplete the oxygen. Carp, which can come up for air when oxygen levels are intolerable, grew very well—those fed with sewage far exceeded the production of those in normal fish ponds. But, using trout, he proved that the water was of high enough quality to enter back into the watershed. His experiments suggested that fish ponds could be designed to help address Europe's water crisis and, at the same time, provide an economic return through the sale of fish. By the beginning of the 20th century, scientists throughout Germany started conducting more small-scale experiments. Bruno Hofer, a fish scientist better known for pioneering the study of fish pathologies, started scaling up these experiments, showing in the early 1900s that the wastewater of larger institutions like hospitals, breweries, and factories, as well as of smaller municipalities, could be treated with fish ponds. He even went further, and “dared” to propose such a system for a city as large as Munich—a notion that was perhaps considered outlandish at the time.

Image: A sprinkler introducing secondary treated wastewater diluted with river water into a wastewater-fed fishpond in Munich, Germany. Source: Edwards, 2005.

By 1929, after several successful implementations of Hofer’s design around Germany, the city of Munich had built its own fish pond wastewater treatment system, which served the whole city until the 1990s. This was the largest such system in the world at the time, initially designed to process the wastewater of 500,000 people.
The system was so efficient that the water leaving the ponds, fully treated, was comparable to natural water in quality and nutrient level.

Many applications

As these examples illustrate, sewage-fed aquaculture is a solution to many interlinked problems. It processes waste—from agriculture, livestock, and cities—and cycles those nutrients back into the system through food and agricultural production. It reduces nitrogen and phosphorus levels in the water, preventing eutrophication further downstream. It reuses available water, slowing down the water cycle and replenishing groundwater. It reduces unnecessary inputs like chemical fertilizers, phosphates, and energy-intensive fish feed. Finally, it creates jobs and a source of income, especially necessary in poor countries. The fertilizer potential of sewage water alone would be reason enough to develop systems to reuse it. For example, one study estimated that, in the year 2000, all of India’s sewage was worth an equivalent of $2,000,000 per day in fertilizer costs. In other words, on any given day, all of India is flushing several million dollars down the toilet. Waste-fed fish ponds would be of great help in capturing this wealth. Perhaps counter-intuitively, scientists have found that waste-fed fish ponds may be especially useful for arid countries, where water is scarce, by reusing wastewater for protein production. Fish ponds don’t have to be for productive use alone. They can be integrated into wetlands and conservation areas, leisure fishing, tourism areas, or educational sites. They provide opportunities for improving biodiversity and making urban life more permeable for nature.
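To put the fertilizer figure in perspective, annualizing it is simple multiplication. Only the $2 million per day comes from the study cited above; the yearly total is derived.

```python
# Annualizing the estimated fertilizer value of India's sewage (year 2000),
# using the $2,000,000-per-day figure quoted from the study above.
value_per_day_usd = 2_000_000
days_per_year = 365

value_per_year_usd = value_per_day_usd * days_per_year
print(f"${value_per_year_usd:,} per year")  # $730,000,000 per year
```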
Another reason waste-fed fish ponds continue to be relevant is that they are low-cost and low-tech, and therefore face few barriers to implementation. While high-tech, high-input systems like hydroponics, vertical gardening, and automated agriculture are getting a lot of press these days, the fact is that the majority of the world’s farmers have little to no access to capital and rely on small, but mostly sustainable, interventions to feed a stunning 70% of the global population. Waste-fed fish ponds offer a source of subsistence at little financial risk to these small farmers. Equally, when developed at the municipal level, they offer small towns, villages, and resource-poor communities opportunities to defray the costs of wastewater treatment, while generating local employment and improving sanitation.

Why don't we do this more often?

Despite their many advantages, most sewage-fed aquaculture systems have either been shut down entirely or are in decline. So what happened? The first possible reason, and the one that most people might raise, is the “yuck factor”. Perhaps it's just too gross for most people to eat fish grown from poop. By and large, this wasn't the problem: consumers' surprising acceptance of waste-fed fish is a constant in the research on urban fish ponds. Furthermore, about 10% of the world’s population probably already consumes food irrigated with wastewater, and, even in the European Union, where agricultural regulations are famously strict, many farmers already apply sewage sludge to their fields—but European consumers don’t seem to care too much.

Image: Tilapia.

The second possible reason for their decline is that it's not safe. And, it’s true, this is where the most care needs to be taken in designing effective wastewater treatment. But there is good evidence showing that sewage treatment in fish ponds can be as safe as conventional methods.
Some of the strongest evidence to support this comes from a city-sized experiment conducted in the 1980s in Lima, Peru, sponsored by the World Bank and the United Nations Development Programme. Aid agencies worked closely with the city government to design a large-scale aquaculture-based sewage treatment site. The site was basically a city-sized proof of concept. Endless measurements were taken over its two decades of operation, adjusting different variables throughout the project’s lifespan and controlling for changes in sewage volume and weather. It was found quite conclusively that fish-based sewage treatment was not only a viable and economical alternative for low-income countries, it also met stringent World Health Organization guidelines for water sanitation. The fish were also tested for human consumption: in all three trials, 100% of the fish tested were rated “very good” in safety levels. This study wasn't alone: numerous studies have investigated the safety of fish grown in sewage ponds.

More than just a leaking sink

If it's not the “yuck factor” or safety, then what was it? In Hanoi, the waste-fed fishponds were never fully recognized for their potential, and peri-urban development in the 1990s began to encroach on them. As the communist era came to an end, land near the city became increasingly valuable, and ponds were filled in for housing construction. Sewage became mixed with untreated industrial effluent, making much of it poisonous to fish, in turn leading farmers to switch to pelleted feed, by then increasingly available as Vietnam’s domestic market was opened to foreign trade. Today, Hanoi treats only 22% of its sewage; the rest flows directly into its river systems, and 180,000 cubic meters of wastewater are discharged every day into the To Lich river, the same river that serviced the fish ponds. The disappearance of fish ponds in Germany can also be largely attributed to urban growth.
As cities grew, peri-urban areas—where fish ponds necessarily had to be placed, close to sewage lines and sources of fresh water—became more valuable. Pressured by booming real estate prices, scarcer land, high labour costs, and diminishing returns on investment as domestic fish breeding had to compete with international markets, governments inevitably chose to close the fish ponds, or to convert them into more conventional sewage treatment plants. Even in Munich, the largest system built in Germany, management was costly and became less and less appealing to the municipality. Munich’s fish ponds were eventually converted into a wetland reserve, where migrating birds come to rest. Fish production is no longer its primary goal, and the reserve absorbs only a small percentage of Munich’s wastewater.

Image: The East Kolkata Wetlands in 2005. Source: Google Earth.

Image: The East Kolkata Wetlands in 2019. Source: Google Earth.

The system at Kolkata is still operational, but suffering from similar symptoms. At their peak, the fish ponds in the East Kolkata Wetlands covered as much as 12,000 hectares. This has shrunk to 4,000 hectares due to encroaching urban development. In Kolkata, too, workers struggle to deal with industrial effluent, such as that from the sizeable leather tanning industry, which is poisonous to the fish and indiscriminately dumped into the municipal wastewater system. Thankfully, unlike Hanoi’s government, the city of Kolkata and the Indian government recognized the importance of this system, and put in place a series of regulations to protect it from further development. Still, informal and illegal development—where developers fill up ponds with debris overnight and then build on the land as farmers are forced to abandon it—is slowly chipping away at the wetlands. So the main driver of their disappearance is urban expansion into the peripheries.
This is largely due to global speculation on real estate—which constitutes 60% of all capital investments today. When given a choice between selling peri-urban land to the highest bidder and pairing sewage treatment with some fish production, most officials won’t think twice—the fish ponds have got to go! A second reason is the high prevalence of toxic chemicals in our water systems—too concentrated for ecosystems, and aquaculture systems, to absorb. We should ask ourselves whether it’s really worth permitting these products if they make it harder for us to mend the ecological rift between our settlements and their surroundings. A third reason is the relatively cheap cost of fossil fuels. In most industrialized countries, it is much more rational to opt for sewage treatment plants with a small land footprint but a large carbon footprint. In a world where energy is cheap, environmental costs can be pushed further and further downstream. But they will eventually circle back to us, and already are. Finally, a significant factor, and one which we shouldn’t ignore, is the bias of our leaders and of professional engineers against messier, organic systems like wastewater-fed aquaculture. Such low-tech solutions are often derided in popular culture as backwards and primitive, when in fact they may be far more appropriate and sustainable than the energy-intensive, easily replicable “solutions” valued by planners and engineers. Each reason points to a deeper problem: our economy's inability to value the right things. Like so many sustainable solutions today, and many of those discussed on this website, sewage-fed fish ponds suffer from the “you can’t change this one thing without changing the whole system” problem.
These systems are beset by global real estate speculation, toxic chemicals in our food and household products, contamination by industry, the cheap price of fuel, and the deep-seated idea that humans are separate from the ecosystems they are embedded in. At the root of it all is a system of value that is not in line with our ecological needs as a species, and as a member of Earth’s living community. Fish ponds are a low-tech, low-cost, safe, and sustainable way to fix our society's leaking sink. But when we get down there on our hands and knees, we might find a lot of other things that need fixing.

Aaron Vansintjan

Thank you to Henning Fehr for doing research on the fish pond system in Germany, Michael DiGregorio for telling me about the Vietnamese system, Phuong Anh Nguyen for the extra research into it, and Geert Vansintjan for always keeping me inspired.

* Support Low-tech Magazine via Paypal or Patreon.
* Subscribe to our newsletter.
* Buy the printed website.

References

For example, in many developed countries, sewage treatment often involves constant automated stirring of large ponds of water—a system which is hard to maintain and takes a lot of energy. While sewage treatment accounts for only 4% of national energy use in the US, it can account for up to 50% of municipal energy use—a significant portion of the domestic energy footprint. That means that towns and cities could decrease their energy impacts significantly if they switched to different treatment plants. See https://betterbuildingssolutioncenter.energy.gov/sites/default/files/Primer%20on%20energy%20efficiency%20in%20water%20and%20wastewater%20plants_0.pdf

It also contributes to a little-understood phenomenon called coastal darkening, where our ocean floors become muddier and darker, lowering the albedo, or reflectivity, of the Earth’s surface, in turn contributing to global heating as well as reducing the ability of marine life to receive daylight.
https://www.hakaimagazine.com/news/the-environmental-threat-youve-never-heard-of/

Edwards, P. (2003). Philosophy, principles and concepts of integrated agri-aquaculture systems. In: Gooley, G. J., & Gavine, F. M. (Eds.), Integrated agri-aquaculture systems: a resource handbook for Australian industry development. Rural Industries Research and Development Corporation.

Edwards, P. (2015). Aquaculture environment interactions: past, present and likely future trends. Aquaculture, 447, 2-14.

Edwards, P. (1996). Wastewater reuse in aquaculture: Socially and environmentally appropriate wastewater treatment for Vietnam. The ICLARM Quarterly, January.

Mukherjee, J. (2020). Blue Infrastructures. Springer Singapore.

Ho, L., & Goethals, P. L. (2020). Municipal wastewater treatment with pond technology: Historical review and future outlook. Ecological Engineering, 148, 105791.

Edwards, P. (2009). Traditional Asian aquaculture. In: New Technologies in Aquaculture (pp. 1029-1063). Woodhead Publishing.

A term attributed to Dhrubajyoti Ghosh, a high-profile activist for the East Kolkata Wetlands.

Banerjee, S., & Dey, D. (2017). Eco-system complementarities and urban encroachment: A SWOT analysis of the East Kolkata Wetlands, India. Cities and the Environment (CATE), 10(1), 2.

Kumar, D., Chaturvedi, M. K., Sharma, S. K., & Asolekar, S. R. (2015). Sewage-fed aquaculture: a sustainable approach for wastewater treatment and reuse. Environmental Monitoring and Assessment, 187(10), 1-10.

Lightfoot, C., Bimbao, M. A. P., Dalsgaard, J. P. T., & Pullin, R. S. (1993). Aquaculture and sustainability through integrated resources management. Outlook on Agriculture, 22(3), 143-150.

Datta, S. (2006). Waste Water Management Through Aquaculture. Journal of Environmental Management, 1, 339-350.

Mukherjee, J. (2020), citing Dhrubajyoti Ghosh.

Edwards, P. (2005). Development status of, and prospects for, wastewater-fed aquaculture in urban environments. In: Costa-Pierce, B., Desbonnet, A., Edwards, P., & Baker, D. (Eds.), Urban Aquaculture. Wallingford, Oxfordshire: CABI Publishing, 45-59.

Prein, M. (1988, December). Wastewater-fed fish culture in Germany. In: Edwards, P., & Pullin, R. S. V. (Eds.), Wastewater-Fed Aquaculture. Proceedings of the International Seminar on Wastewater Reclamation and Reuse for Aquaculture, Calcutta, India (pp. 6-9).

One issue with the fish ponds in the German case was the high variability of the weather. Less sun in the fall and spring meant that algal production was much lower, in turn impacting fish growth and the ability of the system to treat wastewater at constant rates. In the winter months, ponds often froze, leading to oxygen deficiencies and fish deaths. As solar radiation can fluctuate throughout the day, the fish ponds required daily management to balance fish growth, algal growth, and nutrient removal against sewage inputs that would otherwise lead to fish deaths.

Calculated using the Indian Rupee to US Dollar exchange rate in 2000, adjusted by the author for inflation of USD in 2021, from data provided by Jana, B. B., Heeb, J., & Das, S. (2018). Ecosystem Resilient Driven Remediation for Safe and Sustainable Reuse of Municipal Wastewater. In Wastewater management through aquaculture (pp. 163-183). Springer, Singapore.

In Israel, for example, mid-century kibbutzim, which were often limited in the groundwater available to them, experimented in the 1960s with reusing sewage for fish production. In Egypt, the government has put its hope in wastewater-fed aquaculture, in an attempt to increase domestic protein production and maximize the use of water. See also Kolkovsky, S., Hulata, G., Simon, Y., Segev, R., & Koren, A. (2003). Integration of agri-aquaculture systems: the Israeli experience. In: Gooley, G. J., & Gavine, F. M. (Eds.), Integrated agri-aquaculture systems: a resource handbook for Australian industry development. Rural Industries Research and Development Corporation.

El-Zohri, M., Hifney, A. F., Ramadan, T., & Abdel-Basset, R. (2014). Use of Sewage in Agriculture and Related Activities. In: Pessarakli, M. (Ed.), Handbook of plant and crop physiology. CRC Press.

In Germany in the 20th century, consumers at first rejected these fish, but municipalities engaged in public communication campaigns to convince people otherwise. In Lima, Peru, researchers conducted a study of whether the fish were accepted by consumers at the market, and were surprised to find that people weren’t so bothered when they found out where the fish came from. In Kolkata, too, sewage-fed fish still constitute 40% of the local fish market, even though consumers have alternatives available.

WHO (2015). Sanitation. Fact sheet no. 392. World Health Organization, Geneva.

Cointreau, S. J. (1990). Aquaculture with treated wastewater: A status report on studies conducted in Lima, Peru. Applied Research and Technology (WUDAT), Technical Note No. 3. The World Bank Water Supply and Urban Development Department: p. 1-56.

In a fourth trial, only 6% were rated as “unacceptable”, but this was because the researchers deliberately increased the ratio of sewage to water above the acceptable level, to mimic an “accident”. Still, these same fish were then rated as “very good” after the sewage level was decreased for a subsequent 30 days. This shows that even in the case of an accident, fish can easily recover to being safe for consumption. See UNEP International Environmental Technology Centre. (2002). Environmentally Sound Technologies for Wastewater and Stormwater Management: an International Source Book (Vol. 15). International Water Assn.

Where there are insufficient resources to build sanitary requirements into the system, researchers recommend that cleaning, butchering, and packaging be done in sanitary conditions, so that fish muscle does not risk being contaminated with pathogens on the skin or in the intestines. Cooking fish thoroughly is also recommended—and in Kolkata, local cuisine fortunately does not include raw fish. Another proposal is to transfer fish to clean water ponds two weeks before harvest; this both reduces the risk of pathogens being present in fish muscle and intestines, and helps to eliminate possible unpleasant odours. Edwards, P. (1990). Reuse of human excreta in aquaculture: A state-of-the-art review. Draft Report. World Bank, Washington DC.

When it comes to the presence of toxic chemicals, there is also good evidence to show that this is not a significant problem. However, this does depend on local conditions. For example, people in industrialized countries use many more detergents and pharmaceuticals that may impact the fish. This includes a broad category of toxins called “emerging contaminants”, found in products like cosmetics and certain pharmaceuticals. There have been few recent studies in industrialized countries on the effects of these products on sewage-fed fish—in large part because these systems had largely been phased out by the time such household commodities became prevalent over the last fifty years.

Edwards, P. (2004). Decline of wastewater-fed aquaculture in Hanoi. Aquaculture Asia, Volume IX (4, October-December): 13-14.

Hoan, V. Q., & Edwards, P. (2005). Wastewater reuse through urban aquaculture in Hanoi, Vietnam: status and prospects. In: Urban Aquaculture. CABI International, Wallingford, 103-117.

Saigoneer (2019). Only 13% of Vietnam's Urban Sewage Is Treated Before Discharge. The Saigoneer. https://www.saigoneer.com/saigon-environment/17571-only-13-of-vietnam-s-urban-sewage-is-treated-before-discharge

Kiet, Anh. (2019). No technology can radically clean Hanoi's polluted river if sewage not treated: Mayor. Hanoi News. http://hanoitimes.vn/no-technology-can-clean-hanois-heavily-polluted-river-if-people-keep-pouring-sewage-into-it-mayor-300420.html

Bunting, S. W. (2007). Confronting the realities of wastewater aquaculture in peri-urban Kolkata with bioeconomic modelling. Water Research, 41(2), 499-505.

Jana, B. B. (1998). Sewage-fed aquaculture: the Calcutta model. Ecological Engineering, 11(1-4), 73-85.

Stein, S. (2019). Capital city: Gentrification and the real estate state. Verso Books.

Mara, D. (2013). Domestic wastewater treatment in developing countries. Routledge.
If the electricity for a vertical farm is supplied by solar panels, the energy production takes up at least as much space as the vertical farm saves.
Urban agriculture in vertical, indoor “farms” is on the rise. Electric lights allow crops to be grown in layers above each other, year-round. Proponents argue that growers can save a lot of agricultural land in this way. Additional advantages are that less energy is needed to transport food (most people live in a city) and that less water and fewer pesticides are required.
Which crops?
The vertical farms that have been commercially active for several years all focus on the same crops: agricultural products with a high water content, such as lettuce, tomatoes, cucumbers, peppers, and herbs. However, these are not crops that can feed a city. They contain hardly any carbohydrates, proteins, or fats. Feeding a city takes grains, legumes, root crops, and oil crops. These are now grown globally on 16 million square kilometers of farmland - almost the size of South America. [1]
Growing wheat vertically
An art installation currently on show in Brussels - The Farm - explores what it would take to grow wheat in a vertical farm. For the experiment, 1 square meter of wheat was sown in a completely artificial environment. By measuring the input of raw materials such as energy and water, the project shows the extent to which natural ecosystems support our food production. When wheat is planted in the ground, side by side rather than stacked in layers, the sun provides free energy and the clouds free water.
A loaf of bread for 345 euros
The experiment shows that growing 1 m2 of wheat in an artificial environment costs 2,577 kilowatt-hours of electricity and 394 liters of water per year. The energy required to produce the hardware (such as the lighting) is not included in these results, so this is an underestimate. The building’s energy cost is not taken into account either - neither its construction nor its use, for example for heating, cooling, and pumping water.
The cost calculation does include the price of the equipment (1,227 euros), with the lifespan of the infrastructure estimated at 8 years. All told, producing wheat in an artificial environment costs 610 euros per square meter per year (including infrastructure, electricity, and water). Of this, 412 euros goes to electricity and only 1 euro to water. This figure may be an overestimate, because the installation is set up in an exhibition space.
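The land and cost per person follow directly from the harvest figures reported in the next paragraph (four loaves per square meter per year, at least 345 euros per loaf, one loaf per person-day). A minimal sketch of that arithmetic; the article’s own totals differ slightly due to rounding elsewhere:

```python
loaves_per_person_year = 365   # one 2,000 kcal loaf covers one person-day
loaves_per_m2_year = 4         # four harvests per year, one loaf each
cost_per_loaf = 345            # euros, the lower bound reported by the project

# Land needed to feed one person on indoor wheat alone:
area_per_person_m2 = loaves_per_person_year / loaves_per_m2_year   # 91.25 m2

# Yearly bread bill for that person:
cost_per_person_year = loaves_per_person_year * cost_per_loaf      # 125,925 euros
# The article rounds these to 91 m2 and 125,680 euros per year.
```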
The “farm” produces four harvests per year. Each harvest yields enough wheat for one loaf of bread (580 grams), which therefore costs at least 345 euros. Each loaf contains 2,000 kilocalories, the amount an average person needs per day. Feeding one person thus requires 91 m2 of artificially grown wheat, at a total cost of 125,680 euros per year.
The paradox of vertical farming
Artificial lighting saves land because plants can be grown above each other, but if the electricity for the lighting comes from solar panels, then the savings are canceled out by the land required to install the solar panels. The vertical farm is a paradox unless fossil fuels provide the energy. In that case, there’s not much sustainable about it.
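The scale of that cancellation is easy to estimate. Using the panel yield quoted in the next paragraph (175 kWh per square meter per year) and the farm’s measured electricity use, a minimal sketch:

```python
wheat_electricity_kwh = 2577   # electricity for 1 m2 of indoor wheat per year (measured)
panel_yield_kwh = 175          # yield per m2 of solar panel per year (annual average)

# Panel area needed per m2 of indoor wheat, as a lower bound:
min_panel_area_m2 = wheat_electricity_kwh / panel_yield_kwh   # ~14.7 m2
# A lower bound because it assumes every kWh the panels generate over the
# year is usable. Winter shortfalls, storage losses, and the panels' own
# embodied energy push the real figure higher - the article cites some 20 m2.
```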
Calculated at a yield of 175 kilowatt-hours per square meter of solar panel per year, the indoor cultivation of 1 m2 of wheat requires 20 m2 of solar panels. This is an underestimate, because the calculation is based on the average yield of a solar panel. There is much less sunlight in winter than in summer, so in reality the vertical farm requires many more solar panels to keep operating all year round. It also needs energy storage infrastructure, which costs money and energy too. Finally, producing the solar panels themselves requires energy, which would demand even more space if that production process were also to run on solar power.
Innovation?
All this criticism also applies to the vertical farms that grow lettuce and tomatoes, although in their case water use is significantly reduced. These companies are profitable, but only because the process relies on a supply of cheap fossil fuels. If solar panels supplied the energy, the extra costs and space for the energy supply would again cancel out the savings in space and costs. The only remaining advantage of a vertical farm would be the shorter transport distances - but we could just as well make transport between town and countryside more sustainable.
The problem with agriculture is not that it happens in the countryside. The problem is that it relies heavily on fossil fuels. The vertical farm is not the solution since it replaces, once again, the free and renewable energy from the sun with expensive technology that is dependent on fossil fuels (LED lamps + computers + concrete buildings + solar panels). Our lifestyle is becoming less and less sustainable, increasingly dependent on raw materials, infrastructure, machines, and fossil energy. Unfortunately, this also applies to almost all technology that we nowadays label sustainable.
Kris De Decker
Proofreading: Eric Wagner
Can we make modern health care carbon-neutral and maintain the levels of care, pain relief, and longevity that we have come to take for granted?
Illustration: The human powered hospital. By Golnar Abbasi & Arvand Pourabbasi. Taken from Human Power Plant: Human Powered Neighbourhood, Melle Smets & Kris De Decker.The environmental footprint of the health care sector
Health care is one of the most important economic sectors in high income countries, but its environmental footprint is underreported and not often considered. Most research into sustainable health care is less than five years old. A 2019 research paper calculated that the sector accounts for 2-10% of national carbon footprints across all OECD countries, China, and India, with an average share of 5.5% overall. [1-2]
The data refer to the year 2014, when the health care sectors of all these 36 countries combined were responsible for 1.6 Gt of greenhouse gas emissions. This corresponds to 4.4% of the global total emissions that year (35.7 Gt) – almost double the share of aviation. The US has the most carbon-intensive health care system, accounting for up to 10% of national carbon emissions. It also produces 9% of national air pollution, 12% of acid rain, and 10% of smog formation.
The environmental footprint of health care keeps increasing. For example, in the US, the health care sector’s greenhouse gas emissions grew by 30% between 2003 and 2013. The rise in emissions tracks the rise in spending – in fact, emissions are often calculated based on spending. US national health expenditures as a percentage of Gross Domestic Product (GDP) increased from 3% in 1930, to 5% in 1960, to 10% in 1983, to 15% in 2002, and to 17.7% in 2019. [4-5] In the EU, health care spending per capita more than doubled between 2000 and 2018, and total spending is now at 9.9% of GDP.
If the whole world were to copy today’s US health care system, the global carbon footprint of the health care sector would amount to almost half of total emissions worldwide in 2014.
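The doubling claim that follows is simple proportion. A sketch using the figures from the surrounding text (1.6 Gt of health care emissions from countries holding 54% of world population, against 35.7 Gt of total global emissions in 2014); the 16 Gt US-level scenario needs US per-capita figures not quoted here, so the sketch stops at the 36-country average:

```python
health_emissions_gt = 1.6   # Gt CO2e: health care in the 36 countries, 2014
global_emissions_gt = 35.7  # Gt CO2e: all global emissions, 2014
population_share = 0.54     # share of world population in those 36 countries

current_share = health_emissions_gt / global_emissions_gt   # ~4.4%

# If everyone on Earth caused the same per-capita health care emissions:
extended_gt = health_emissions_gt / population_share        # ~3.0 Gt
extended_share = extended_gt / global_emissions_gt          # ~8.3%, roughly double
```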
The 36 countries whose health care systems together cause 4.4% of global emissions only have 54% of the worldwide population. The remaining 46% of the population produces little or no health care related emissions because they don’t have access to health care. If we were to extend the OECD-China-India health care system globally, emissions would double to about 8% of the worldwide total. Furthermore, there are very large differences between these 36 countries. If the whole world were to copy the US health care system, the global carbon footprint of the health care sector would amount to around 16 Gt – almost half of total emissions worldwide in 2014.
Intense spotlights, high power medical equipment
What makes modern health care so resource-intensive? To start with, modern hospitals are high energy users, primarily because of large plug loads from medical devices, lighting, ventilation, and air-conditioning. [3, 8-12] In operating rooms, the high power use is mainly due to intense spotlights and ultra-clean ventilation canopies. In intensive care units and diagnostic imaging departments, medical equipment dominates the power load.
Technologically Advanced Operating Room. iStock.
An MRI-scanner in Taipei, Taiwan (2006). Image: Kasuga Huang (CC BY-SA 3.0).
Like so many other sectors in modern society, health care has come to rely on all types of machines and devices.  Some of this medical equipment has very high power use. For example, an MRI-scanner, one of the most powerful diagnostic imaging technologies, can use as much electricity as more than 70 average European households. A 2020 study calculated that high-tech medical diagnostic technology (both MRI- and CT-scanners) was responsible for a whopping 0.77% of global carbon emissions in 2016. 
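To put the household comparison in absolute terms: assuming an average European household uses roughly 3,500 kWh of electricity per year (my assumption, not a figure from the article), the implied annual draw of a single scanner is considerable:

```python
# Assumed average European household electricity use (not from the article):
household_kwh_year = 3500
households_equivalent = 70   # "more than 70 average European households"

# Implied annual electricity use of one MRI-scanner:
mri_kwh_year = households_equivalent * household_kwh_year   # 245,000 kWh
```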
The power use of smaller medical equipment is poorly researched, but an inventory of two US hospitals showed that they had 14,648 and 7,372 energy-using devices, of which the infusion pumps alone consumed more electricity in aggregate than an MRI-scanner. The high density of medical equipment also increases the electricity use of air-conditioning in hospitals.
Resource use along the supply chain
Even more energy – around 60% of the total – is used indirectly along the supply chain. [1,3,10,15] This concerns the procurement of medical equipment, pharmaceuticals, and other medical products.
To start with, the growing number of medical devices used in hospitals also needs to be manufactured and brought to market. This requires activities such as the mining of resources and the construction and operation of research laboratories, factories and transport vehicles. This "embodied energy" of the medical equipment supply chain is very poorly researched. A study calculated that the production of an MRI-scanner requires more than half the fossil fuels used in the production of a passenger jet, and that the embodied energy is one third of the total energy use of the machine. 
Modern health care is also highly dependent on pharmaceuticals, which account for between 10 and 25% of total health care emissions, depending on the country. [15,17] A 2019 study revealed that the global pharmaceutical industry produces more greenhouse gases than the global automotive industry: 52 MtCO2 versus 46 MtCO2.  However, there is almost no data about the environmental footprint of specific pharmaceuticals, because corporate secrecy prevents scientists from making life cycle analyses.
Pharmaceutical Manufacturing Laboratory. Source: iStock.
Rubber gloves production line. Source: iStock.
Face mask production line. Source: iStock.
Single-use disposable products are another source of health care energy use and pollution. [19-24] These products are worn by medical personnel and patients (face masks, gloves, overshoes, hats, drapes, gowns). Towels, basins, sterile plastic packaging, and utensils such as syringes, laryngoscope handles and blades, anaesthetic breathing circuits, and even surgical instruments are also provided for single use. These disposable products are supplied to hospitals in so-called custom packs, which are sets of prepackaged sterile products for any specific medical procedure you can imagine. As a rule, once a pack is opened, all items are discarded, even those that were not used.
When these practices are questioned, it is often for the hospital waste they create -- the average patient in a hospital produces at least 10 kg of waste per day. However, the environmental footprint increases significantly if the embodied energy and waste in the supply chain for making these disposable products are considered too. A study of cataract surgery in the UK – cataracts are the main cause of blindness worldwide – shows that the manufacturing of disposable materials accounts for more than half of the total carbon footprint of the procedure.
Anesthetics & Vaccines
Finally, some specialist medical drugs produce emissions too. Inhalation anesthetics, which suppress the central nervous system and are a cornerstone of surgery, are potent greenhouse gases, which evaporate into the atmosphere after they have been inhaled by the patient (vented to the outside through the high energy ventilation systems of modern operating rooms). Keeping a 70 kg adult anesthetized for an hour produces from 25 kg (using isoflurane) to 60 kg (desflurane) of CO2-equivalents, which corresponds to the emissions of driving an average European car (121 g CO2/km) for 200-500 km, or several hours of driving.
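The car comparison is a straightforward conversion; a sketch using the figures just quoted:

```python
car_kg_co2_per_km = 0.121   # average European car (121 g CO2/km)

# Reported CO2-equivalents for one hour of anaesthesia (70 kg adult):
isoflurane_kg_co2e = 25
desflurane_kg_co2e = 60

km_isoflurane = isoflurane_kg_co2e / car_kg_co2_per_km   # ~207 km
km_desflurane = desflurane_kg_co2e / car_kg_co2_per_km   # ~496 km
# Hence the article's range of roughly 200-500 km of driving.
```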
Pressurized dose inhalers, which are used to treat asthma and chronic obstructive pulmonary disease, also release potent greenhouse gases. Globally, around 800 million pressurized dose inhalers are manufactured annually, with a total carbon footprint that corresponds to the yearly emissions of more than 12 million passenger cars. [17,27] Vaccines are another key element of modern health care. They release carbon emissions not only through their development and production, but also through their resource-intensive distribution, which involves a dedicated cold chain. I could not find any reference to its environmental footprint.
Carbon footprint of medical procedures
Health care services often involve all of the above mentioned sources of emissions: medical devices, pharmaceuticals, and disposable materials. When the emissions in hospitals and along the supply chain are combined, it becomes possible to calculate the environmental footprint of medical procedures.
Operating room in cardiac surgery, 2020. Source: iStock.
For example, studies of cataract surgery and reflux control surgery in the UK estimated the carbon footprint to be 182 kg and 1 ton of emissions, respectively, which corresponds to between 1,517 km and 8,333 km of driving. [28,29] Renal dialysis, a treatment to replace kidney function, produces 1.8 to 7.2 tonnes of emissions per patient per year, equal to the emissions of 15,000 to 60,000 km of driving. [28,30]
The limitations of carbon and energy efficiency
Although data on its environmental footprint is still incomplete, it seems quite clear that modern health care is not compatible with a transition to a low carbon society. The big question is whether or not this can be fixed without lowering the levels of care, pain relief, and longevity that people in high income societies have grown accustomed to.
Many efforts and studies into health care sustainability aim to reduce energy use and emissions without affecting the quality of medical treatments, often explicitly so. For example, the authors of a 2020 study into the Austrian health care system write that it’s “crucial to understand how the health care sector can reduce its emissions without undermining its service quality”.  Elsewhere, researchers write that “any solution that would reduce environmental impacts while reducing performance at the same time cannot be deployed”. 
As a consequence, many researchers tend to focus on improving carbon and energy efficiency. These strategies aim to deliver the same "performance" or "service quality" but with less energy (thanks to more energy efficient equipment), or with less emissions (owing to more renewable energy sources). 
The quality of medical treatments continues to improve, resulting in extra energy use that erases the carbon or energy savings that result from efficiency.
The problem is that the quality of medical treatments continues to improve, resulting in extra energy use that erases the savings that result from carbon and energy efficiency. For example, in 2012 researchers calculated that MRI-scanners could be made 10-20% more energy efficient with relatively simple changes in design and operation. Some of their proposed changes are now in use, but the energy use of MRI-scanners has not decreased; on the contrary, it has increased.
Medical Scientist working on brain tumor cure in a Research Center. Source: iStock.
A first reason is that MRI-scanners now come with higher field strengths (which offer diagnostic images of higher accuracy) and with wider bores (which improve patient comfort and allow obese or very muscular patients to be scanned). These innovations have improved the quality of care, but they have done so at the expense of extra energy use. In the 2012 study, the average power consumption per scan before the energy efficiency improvements was 15 kWh. A 2020 study measured an energy use of 17 kWh and 23.6 kWh per scan for an MRI-scanner with a field strength of 1.5 and 3 Tesla, respectively.
Second, MRI-scanners with better diagnostic capabilities also increase energy use in unexpected ways, because medical equipment, pharmaceuticals, and treatments shape and change each other.  For example, doctors used to diagnose a patient through physical examination and communication, and only used diagnostic services to confirm the diagnosis, if necessary. Now, diagnostic tests happen upfront and drive the decision process, resulting in more tests and higher energy use. The introduction of new pharmaceuticals can foster increasingly energy-intensive diagnostic practices, too. For example, certain cancer treatment drugs are now being designed to treat a very specific tumor subtype, which requires more and more accurate medical imaging to identify the tumor subtype. 
Adding more renewable energy sources could potentially lower the emissions of health care both on-site and throughout the supply chain, but as the energy use of medical treatments continues to increase, this outcome is unlikely. Besides, a quick calculation shows that, even without further growth in energy use, a carbon neutral US health care system would gobble up the entire US renewable energy production – sun, wind, hydroelectric, wood, geothermal, biofuels, and waste. The challenge is only slightly smaller in other high-income countries. Finally, renewable energy would not solve all of the health care sector’s environmental damage, and would not even eliminate all of its carbon emissions.
Sufficient health care?
To reduce the environmental footprint of modern health care, we need to question the trend towards ever greater reliance on energy-intensive technologies and services. The same holds true in other domains of life. 
However, while some people see the charm and real advantages of frugal and past ways of living when it comes to comfort or convenience, few would be tempted to apply the same principles to health and longevity. After all, the health care equivalent of travelling more slowly or wearing an extra sweater at home may be living a shorter life, suffering more pain, or being less mobile in old age. For example, if we were to stop using MRI-scanners, or only used those with a field strength of up to 1.5 Tesla, the lower diagnostic accuracy would lead to some cancers going undetected, resulting in lower cancer survival rates and a lower average life expectancy. Or at least, so it seems.
The surgeon, a painting by David Teniers, 1670s.
Barber-surgeon extracting a tooth, a painting by Adriaen van Ostade, 1630.
If health care is viewed in a historical context, it seems clear that there is a powerful connection between the use of energy-intensive medical technologies on the one hand, and the health and longevity of a population on the other. Even looking back less than a century shows far worse health outcomes and survival rates for all kinds of diseases, and today’s global average life expectancy (72.6 years) is higher than in any high-income country back in 1950.
Hospitals date back to antiquity, but they merely housed the mad and the dying. In the Middle Ages, surgery happened at the barbershop, where “barber-surgeons” offered blood-letting, tooth extractions, and amputations alongside the more usual haircuts and shaves. They brewed their own anesthetics from herbs and alcohol, which could be just as deadly as the treatment itself. A look at the “developing” world today also seems to suggest a clear connection between health care emissions, which are very modest, and life expectancy, which can be 20 to 30 years below that in high income countries. [37-41]
However, if one digs deeper, the connection between energy use and longevity is not as strong as it seems. The USA proves the point: it has the most expensive and unsustainable health care system in the world, but ranks behind most European countries on the Health Care Access and Quality Index (which measures death rates from 32 causes of death that could be avoided by effective medical care). US citizens also have a lower life expectancy than European citizens. Clearly, there are other factors at play, too.
Resistance to disease
To start with, the quality of a health care system is not the only determinant of health and longevity. Here’s where history does have an important lesson to teach us. Medical knowledge dating back to antiquity viewed health in a more holistic way and placed great emphasis on building up the body’s inherent resistance to disease. For example, Hippocrates, often referred to as the father of Western medicine, prescribed diet, gymnastics, exercise, massage, hydrotherapy, and swimming in the sea. 
One could argue that our forebears had no other choice than to focus on preventing disease, because they had few treatments available. However, the wisdom of their approach is more obvious than ever. Nowadays in high income societies, many patients need medical treatment because of so-called lifestyle diseases – those caused by poor or excessive nutrition, lack of physical activity, stress, or substance abuse. Typical health risks are cardiovascular disease, type 2 diabetes, depression, obesity, some types of cancer, and higher susceptibility to infectious diseases. Industrial society has given us effective medical treatments, but it's also making us sick.
This means that health and longevity can be promoted in other ways than through an increasingly resource-intensive health care system. By addressing the broader determinants of health and longevity, we could make a switch from curative to preventive medicine. [15,43] Preventive medicine is not about the government telling us not to smoke (and then cashing in tax money on the sales of cigarettes). Rather, it concerns systemic changes that go beyond behavioural change.
Rush hour in São Paulo, Brazil, 2005. Public domain.
For example, significantly reducing the use of cars in our societies would bring a surprisingly large number of health benefits that would lower the need for energy-intensive medical treatments. It would decrease the health damage done through traffic accidents and through air and noise pollution. It would make people more physically active (preventing many lifestyle diseases), and it would free a lot of public space for people to come together, for kids to play, and for trees to grow (all important factors for the mental health of a population). Finally, reducing the use of cars may easily save more greenhouse gas emissions than the health care system produces.
Switching to a healthier food production system, addressing the environmental damage done by the plastic industry, reducing poverty and social inequality, introducing shorter work hours, and creating more meaningful jobs are other examples of preventive medicine. We did not achieve today's higher life expectancy only because of better health care systems. We also owe it to better education, sanitation, safety and traffic regulations, welfare systems, crime control, and a more reliable food supply. The low average life expectancy in poor countries is partly due to shortcomings in these same areas.
Preventive medicine would also reduce the health damage done by the medical treatments themselves. This concerns health damage resulting from medical errors or side effects of pharmaceuticals, and, more indirectly, from the pollution that the health care sector generates. For example, air pollution from health care services contributes to the prevalence of asthma, which in turn increases the demand for health care. Climate change and other environmental damage threaten younger and future generations with even larger health impacts, for example through crop failures, spread of diseases, extreme weather events, and natural disasters.
The law of diminishing returns
Second, within a health care system, medical practices with higher energy use do not necessarily improve health outcomes proportionally. Like so many other sectors in industrial society, curative health care is vulnerable to the law of diminishing returns: it takes ever more energy to gain ever smaller increases in health outcomes. Conversely, this means that a relatively small decline in the quality or specifications of medical treatments could yield comparably large reductions in resource use and emissions.
Infection control is a good example. The development of general anesthesia in the 1840s made surgery possible, but at the time over 90% of surgical wounds became infected, often leading to death.  The first major decrease in infection rates followed antiseptic practices (1880-1900), and the second followed the introduction of antibiotics (1945-1970). By 1985, the overall infection rate had decreased to about 5%. Since then, a lot of resources have been invested to achieve incremental gains towards 100% sterility, mainly by replacing reusable supplies by single-use, disposable products. 
Operating room nurse preparing instruments for surgery at the 3rd Station Hospital, Korea. 1951. Source: US National Library of Medicine.
If properly decontaminated, reusable supplies carry no increased infection risks, but cross-contamination between patients sometimes happens by mistake. Nevertheless, some scientists have advocated for a return to reusable products, which in most cases have a much lower environmental footprint. For example, the use of reusable laryngoscope handles produces 16-25 times less greenhouse gas than single-use, disposable ones. The researchers admit that their approach may increase deaths from surgical infections. Still, they argue that the health damage caused by the production of single-use disposable supplies is even more considerable.
When it comes to maximizing returns, less affluent societies can teach us some lessons. Comparisons of cataract surgery in the UK and in India have shown that the same treatment (phacoemulsification) in India's Aravind Eye Clinics is much cheaper and produces only 5% of the emissions and 6% of the solid waste of the UK procedure. This is mainly because the Indian surgeons reuse as many supplies, devices, and drugs on as many patients as possible. [26,46-49] In addition, they use locally manufactured supplies, implants, and drugs, and they apply a dual-bed system in which one patient is operated on while another is being positioned and prepared in the bed next to it.
Although these practices flout regulations for infection control in high income countries, cataract surgery in India achieves similar or better outcomes and does not cause any more infections than it does in the UK or the US. [26,46-49] Consequently, it may well be that the law of diminishing returns has reached its ultimate limit, in the sense that an expensive and unsustainable medical practice does not seem to bring any health benefits at all. The Indian eye clinics demonstrate that an effective model of care is possible without expensive and unsustainable supplies and resources. Medical innovation can happen without new technology.
Driven by profit
The law of diminishing returns and the focus on curative medicine are both rooted in the fact that medical innovation is primarily driven by profit. [50,51] Private companies that develop and sell medical equipment, pharmaceuticals, and other health care products have nothing to gain if the demand for new curative health care technologies and products declines, or if medical technologies were to be judged by their resource use. The medical industry -- logically -- wants to increase the sales of its products, and has enormous marketing budgets and lobbying power at its disposal to achieve that goal.
King George Military Hospital, electrical treatment and x-ray room. 1915. Source: US National Library of Medicine.
The WHO estimates that 20-40% of health care spending is wasted, and argues that “the cost-effectiveness, real need, and likely usefulness of many innovative technologies are questionable”. [44, 37] An increasing body of academic literature shows the extent to which patients in high income countries are “overdosed, overtreated, and overdiagnosed”. [44, 14]
None of this is inevitable. A modern health care system could also work in another economic context. For example, some have suggested the open source development of medical equipment and pharmaceuticals, in which health care technology would become a commons. Shifting the tax burden from labour to resources could be another part of the solution. In high income countries, medical equipment, pharmaceuticals, and disposable products partly serve to reduce the expensive human labour force in health care.
Age and Sustainability
Based on the fragmented data available, it seems likely that the resource use of modern health care systems could be reduced significantly, without bringing us back to the barber-surgeons of the middle ages. A health care system that is more focused on preventive medicine, and which operates outside the logic of the market, could reduce emissions without negatively impacting health, maybe even improving it.
The law of diminishing returns highlights additional opportunities to lower the environmental footprint of health care services. For example, if the environmental footprint of health care were halved, it’s very unlikely that life expectancy would decrease proportionally. Nearly half of lifetime health care expenditures – and thus energy use and emissions – is incurred during the senior years (65 and older). For those who live to age 85, more than one-third of their lifetime expenditures will accrue in the remaining years.
Advocating for a shorter average life expectancy, even if it may concern a very modest decrease, sounds problematic. However, avoiding the topic is just as problematic. Because of modern health care’s enormous (and still growing) environmental footprint, today’s health and longevity come at least partly at the expense of the health and longevity of younger and future generations, who have no voice in this debate. If we cure one person today at the expense of making other people sick tomorrow, health care becomes counter-productive. Health is not only a private good but also a public one, and as medical treatments become increasingly resource-intensive, the chances increase that the public health damage of a treatment outweighs the individual gain of a patient, especially at old age.
Kris De Decker
Thanks to Elizabeth Shove
Proofread by Alice Essam & Eric Wagner
 Pichler, Peter-Paul, et al. "International comparison of health care carbon footprints." Environmental Research Letters 14.6 (2019): 064004. https://iopscience.iop.org/article/10.1088/1748-9326/ab19e1/pdf
 National estimates of health care sector greenhouse gas emissions have been performed for the UK (2009), the USA (2009 & 2016), Sweden (2017), Australia (2018), Canada (2018), China (2019), Japan (2020) and Austria (2020). For an overview, see . However, because each study has its own methodology, the results are not perfectly comparable. That’s why I quote this source, as it gives comparable estimates.
 Eckelman, Matthew J., and Jodi Sherman. "Environmental impacts of the US health care system and effects on public health." PloS one 11.6 (2016): e0157014. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0157014
 US National Health Expenditure Data. Centers for Medicare & Medicaid Services. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsHistorical
 Tainter, Joseph. The collapse of complex societies. Cambridge University Press, 1988. Pages 102-103.
 Current healthcare expenditure, 2012-2017, Eurostat. Current health expenditure per capita (current US$) - European Union, World Bank. Current health expenditure per capita, PPP (current international $) - European Union, World Bank. Health spending, OECD.
 In what follows, I ignore the resource use and emissions caused by transportation to and from health care facilities, as well as the resource use and emissions caused by the building of the health care facilities themselves.
 Research in different countries has shown an electricity use of 130 to 280 kilowatt-hours per square metre per year, representing around 50% of total on-site building energy consumption. [11-12] For comparison, residential electricity use in European households is on average 70 kWh/m2/year, and total energy demand is dominated by heating, not electricity. According to a 2016 study, for which scientists collected power data over a period of 18 months in a German hospital, operating rooms have the highest electricity use (438 kWh/m2/year), followed by intensive care units (135 kWh/m2/year).
 Christiansen, Nils, Martin Kaltschmitt, and Frank Dzukowski. "Electrical energy consumption and utilization time analysis of hospital departments and large scale medical equipment." Energy and Buildings 131 (2016): 172-183.
 Wu, Rui. "The carbon footprint of the Chinese health-care system: an environmentally extended input–output and structural path analysis study." The Lancet Planetary Health 3.10 (2019): e413-e419. https://www.sciencedirect.com/science/article/pii/S2542519619301925
 Bawaneh, Khaled, et al. "Energy consumption analysis and characterization of healthcare facilities in the United States." Energies 12.19 (2019): 3775. https://www.mdpi.com/1996-1073/12/19/3775/pdf
 Rohde, Tarald, and Robert Martinez. "Equipment and energy usage in a large teaching hospital in Norway." Journal of healthcare engineering 6 (2015). http://downloads.hindawi.com/journals/jhe/2015/231507.pdf
 Black, Douglas R., et al. "Evaluation of miscellaneous and electronic device energy use in hospitals." World Review of Science, Technology and Sustainable Development 10.1-2-3 (2013): 113-128. https://www.osti.gov/servlets/purl/1172701
 Picano, Eugenio. "Environmental sustainability of medical imaging." Acta Cardiologica (2020): 1-5. https://www.tandfonline.com/doi/abs/10.1080/00015385.2020.1815985
 Sherman, Jodi D., et al. "The Green Print: Advancement of Environmental Sustainability in Healthcare." Resources, Conservation and Recycling 161 (2020): 104882. https://www.researchgate.net/profile/Brett_Duane/publication/343137350_The_Green_Print_Advancement_of_Environmental_Sustainability_in_Healthcare/links/5f216962299bf134048f8960/The-Green-Print-Advancement-of-Environmental-Sustainability-in-Healthcare.pdf
 Martin, Marisa, et al. "Environmental impacts of abdominal imaging: a pilot investigation." Journal of the American College of Radiology 15.10 (2018): 1385-1393. https://www.sciencedirect.com/science/article/abs/pii/S1546144018308639. The researchers write that “when production and use phases are combined, the total energy consumption of MRI (>309 MJ/examination, abdominal scan, 1.5 Tesla) is comparable with cooling a three-bedroom house with central air-conditioning for a day”.
 Weisz, Ulli, et al. "Carbon emission trends and sustainability options in Austrian health care." Resources, Conservation and Recycling 160 (2020): 104862.
 Belkhir, Lotfi, and Ahmed Elmeligi. "Carbon footprint of the global pharmaceutical industry and relative impact of its major players." Journal of Cleaner Production 214 (2019): 185-194. https://www.sciencedirect.com/science/article/abs/pii/S0959652618336084
 Laufman, Harold, Luther Riley, and Barry Badner. "Use of disposable products in surgical practice." Archives of Surgery 111.1 (1976): 20-26. https://jamanetwork.com/journals/jamasurgery/article-abstract/581229
 Gilden, Daniel J., K. N. Scissors, and J. B. Reuler. "Disposable products in the hospital waste stream." Western journal of medicine 156.3 (1992): 269. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1003232/pdf/westjmed00091-0045.pdf
 Sherman, Jodi D., and Harriet W. Hopf. "Balancing infection control and environmental protection as a matter of patient safety: the case of laryngoscope handles." Anesthesia & Analgesia 127.2 (2018): 576-579. https://www.researchgate.net/profile/Jodi_Sherman/publication/322407715_Balancing_Infection_Control_and_Environmental_Protection_as_a_Matter_of_Patient_Safety_The_Case_of_Laryngoscope_Handles/links/5a82ba12a6fdcc6f3eadcfab/Balancing-Infection-Control-and-Environmental-Protection-as-a-Matter-of-Patient-Safety-The-Case-of-Laryngoscope-Handles.pdf
 Thiel, Cassandra Lee, et al. "Life cycle assessment of medical procedures: Vaginal and cesarean section births." 2012 IEEE International Symposium on Sustainable Systems and Technology (ISSST). IEEE, 2012.
 Campion, Nicole, et al. "Sustainable healthcare and environmental life-cycle impacts of disposable supplies: a focus on disposable custom packs." Journal of Cleaner Production 94 (2015): 46-55.
 “Reusables, Disposables each play a role in preventing cross-contamination”, Elizabeth Srejic, Infection Control Today, April 2016
 Sustainability roadmap for hospitals, American Association of Hospitals. http://www.sustainabilityroadmap.org/topics/waste.shtml#.YCsEOXyYXWc.
 Thiel, Cassandra L., et al. "Cataract surgery and environmental sustainability: waste and lifecycle assessment of phacoemulsification at a private healthcare facility." Journal of Cataract & Refractive Surgery 43.11 (2017): 1391-1398. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5728421/
 Vollmer, Martin K., et al. "Modern inhalation anesthetics: potent greenhouse gases in the global atmosphere." Geophysical Research Letters 42.5 (2015): 1606-1611. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2014GL062785
 Salas, Renee N., et al. "A pathway to net zero emissions for healthcare." bmj 371 (2020).
 Brown, Lawrence H., et al. "Estimating the life cycle greenhouse gas emissions of Australian ambulance services." Journal of Cleaner Production 37 (2012): 135-141.
 Connor, A., R. Lillywhite, and M. W. Cooke. "The carbon footprint of a renal service in the United Kingdom." QJM: An International Journal of Medicine 103.12 (2010): 965-975. https://academic.oup.com/qjmed/article/103/12/965/1584174
 Herrmann, C., and A. Rock. "Magnetic resonance equipment (MRI)–Study on the potential for environmental improvement by the aspect of energy efficiency." PE INTERNATIONAL AG, Report (2012).
 Shove, Elizabeth. "What is wrong with energy efficiency?." Building Research & Information 46.7 (2018): 779-789. https://www.tandfonline.com/doi/pdf/10.1080/09613218.2017.1361746
 Heye, Tobias, et al. "The energy consumption of radiology: energy-and cost-saving opportunities for CT and MRI operation." Radiology 295.3 (2020): 593-605. https://pubmed.ncbi.nlm.nih.gov/32208096/
 Blue, Stanley. "Reducing demand for energy in hospitals: opportunities for and limits to temporal coordination." Demanding Energy. Palgrave Macmillan, Cham, 2018. 313-337.
 Duffin, Jacalyn. History of medicine: a scandalously short introduction. University of Toronto Press, 2010.
 WHO compendium of innovative health technologies for low-resource settings, WHO; 2016-17. WHO, 2018. https://www.who.int/medical_devices/publications/compendium_2016_2017/en/
 Medical devices: managing the mismatch: an outcome of the priority medical devices project: methodology briefing paper, WHO, 2010. https://apps.who.int/iris/handle/10665/70491
 Global Atlas of Medical Devices, WHO, 2017. https://www.who.int/medical_devices/publications/global_atlas_meddev2017/en/
 Page, Brandi R., et al. "Cobalt, linac, or other: what is the best solution for radiation therapy in developing countries?." International Journal of Radiation Oncology* Biology* Physics 89.3 (2014): 476-480.
 In a survey of surgeons across 30 African nations, 48% reported at least weekly power failures, 29% had operated using only mobile phone lights, and 19% had experienced poor surgical outcomes as a result of it. 
 Parker, Steve. Medicine: The Definitive Illustrated History. DK Publishing, 2016.
 Hall, Peter A., and Michèle Lamont, eds. Successful societies: How institutions and culture affect health. Cambridge University Press, 2009.
 Borowy, Iris, and Jean-Louis Aillon. "Sustainable health and degrowth: Health, health care and society beyond the growth paradigm." Social Theory & Health 15.3 (2017): 346-368.
 Sherman, Jodi D., and Harriet W. Hopf. "Balancing infection control and environmental protection as a matter of patient safety: the case of laryngoscope handles." Anesthesia & Analgesia 127.2 (2018): 576-579.
 Steyn, A., et al. "Frugal innovation for global surgery: leveraging lessons from low-and middle-income countries to optimise resource use and promote value-based care." The Bulletin of the Royal College of Surgeons of England 102.5 (2020): 198-200. https://publishing.rcseng.ac.uk/doi/pdf/10.1308/rcsbull.2020.150
 Haripriya, Aravind, David F. Chang, and Ravilla D. Ravindran. "Endophthalmitis reduction with intracameral moxifloxacin in eyes with and without surgical complications: Results from 2 million consecutive cataract surgeries." Journal of Cataract & Refractive Surgery 45.9 (2019): 1226-1233. https://www.aurolab.com/images/JCRS%202%20million.pdf
 Venkatesh, Rengaraj, et al. "Carbon footprint and cost–effectiveness of cataract surgery." Current opinion in ophthalmology 27.1 (2016): 82-88.
 Thiel, Cassandra L., et al. "Utilizing off-the-shelf LCA methods to develop a ‘triple bottom line’auditing tool for global cataract surgical services." Resources, Conservation and Recycling 158 (2020): 104805.
 Relman, Arnold S. "The new medical-industrial complex." New England Journal of Medicine 303.17 (1980): 963-970. https://www.nejm.org/doi/full/10.1056/NEJM198010233031703
 Smith, Richard. "Limits to medicine. Medical nemesis: the expropriation of health." Journal of Epidemiology & Community Health 57.12 (2003): 928-928. https://jech.bmj.com/content/57/12/928
 In health care, there is a thin line between marketing and corruption, especially when the target audience is medical personnel who may gain benefits from using or prescribing a medical device or drug, or when regulators are influenced to facilitate practices that increase profits. Transparency International ranks the procurement of drugs and medical equipment fourth on a list of seven processes that carry high risk of corruption, and calls the problem "widespread in all countries".
 Alemayehu, Berhanu, and Kenneth E. Warner. "The lifetime distribution of health care costs." Health services research 39.3 (2004): 627-642. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1361028/
Being an independent journalist – or an office worker if you wish – I always reasoned that I needed a decent computer and that I needed to pay for that quality. Between 2000 and 2017, I went through three laptops, all bought new, which cost me around 5,000 euros in total – roughly 300 euros per year over the entire period. The average useful life of my three laptops was 5.7 years.
Low-tech Magazine is now written and published on a 2006 ThinkPad X60s.
In 2017, somewhere between taking my office and my website off the grid, I decided not to buy any more new laptops. Instead, I switched to a 2006 second-hand machine that I purchased online for 50 euros and which does everything I want and need. Including a new battery and a simple hardware upgrade, I invested less than 150 euros.
If my 2006 laptop lasts as long as my other machines – if it runs for another 1.7 years – it will have cost me only 26 euros per year. That’s less than one tenth of what my previous laptops cost me. In this article, I explain my motivation for not buying any more new laptops, and how you could do the same.

Energy and material use of a laptop
Not buying new laptops saves a lot of money, but also a lot of resources and environmental destruction. According to the most recent life cycle analysis, it takes 3,010 to 4,340 megajoules of primary energy to make a laptop – this includes mining the materials, manufacturing the machine, and bringing it to market. 
Each year, we purchase between 160 and 200 million laptops. Using the data above, this means that the production of laptops requires a yearly energy consumption of 480 to 868 petajoules, which corresponds to between one quarter and almost half of all solar PV energy produced worldwide in 2018 (2,023 petajoules).  The making of a laptop also involves a high material consumption, which includes a wide variety of minerals that may be considered scarce due to different types of constraints: economic, social, geochemical, and geopolitical. 
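These figures can be verified with a back-of-the-envelope calculation; the sketch below simply reruns the arithmetic with the numbers quoted above (the low estimate comes out at 481 PJ, which the text rounds to 480 PJ):

```shell
# Embodied energy per laptop (from the life cycle analysis): 3,010-4,340 MJ.
# Annual laptop sales: 160-200 million units.
low_mj_per_laptop=3010
high_mj_per_laptop=4340
low_units=160000000
high_units=200000000

# Total embodied energy of a year's laptop production, in megajoules.
low_total_mj=$((low_mj_per_laptop * low_units))
high_total_mj=$((high_mj_per_laptop * high_units))

# Convert to petajoules (1 PJ = 10^9 MJ).
low_pj=$((low_total_mj / 1000000000))
high_pj=$((high_total_mj / 1000000000))

# Global solar PV output in 2018 was 2,023 PJ, so laptop production
# corresponds to roughly a quarter to almost half of it.
echo "Laptop production: ${low_pj} to ${high_pj} PJ per year"
```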
The production of microchips is a very energy- and material-intensive process, but that is not the only problem. The high resource use of laptops is also due to their very short lifespan. Most of the 160-200 million laptops sold each year are replacement purchases. The average laptop is replaced every three years (in business) to every five years (elsewhere). My average of 5.7 years per laptop is not exceptional.

Laptops don’t change
The study cited dates from 2011, and it refers to a machine made in 2001: a Dell Inspiron 2500. You are forgiven for thinking that this “most recent life cycle analysis of a laptop” is outdated, but it’s not. A 2015 research paper discovered that the embodied energy of laptops is static over time. 
The scientists disassembled 11 laptops of similar size, made between 1999 and 2008, and weighed the different components. They also measured the silicon die area of all motherboards and of 30 DRAM cards produced over roughly the same period (until 2011). They found that the mass and material composition of all key components – battery, motherboard, hard drive, memory – did not change significantly, even though manufacturing processes became more efficient in energy and material use.

The reason is simple: improvements in functionality balance the efficiency gains obtained in the manufacturing process. Battery mass, memory, and hard disk drive mass decreased per unit of functionality but showed roughly constant totals per year. The same dynamic explains why newer laptops don’t show lower operational electricity consumption than older laptops. New laptops may be more energy-efficient per unit of computational power, but these gains are offset by more computational power. Jevons’ paradox is nowhere as evident as it is in computing.

The challenge
All this means that there’s no environmental or financial benefit whatsoever to replacing an old laptop with a new one. On the contrary, the only thing a consumer can do to improve a laptop's ecological and economic sustainability is to use it for as long as possible. This is facilitated by the fact that laptops are now a mature technology with more than sufficient computational power. There is one problem, though: consumers who try to keep working on their old laptops are likely to end up frustrated. I briefly describe my frustrations below, and I’m pretty confident that they are not exceptional.
In 2000, when I was working as a freelance science and tech journalist in Belgium, I bought my first laptop, an Apple iBook. Two or three years later, the charger started malfunctioning. When informed of the price for a new charger, I was so disgusted with Apple’s sales practices – chargers are very cheap to produce, but Apple sold them for a lot of money – that I refused to buy one. Instead, I managed to keep the old charger working for a few more years, first by putting it under the weight of books and furniture, and when that didn’t work anymore, by putting it in a firmly tightened clamp.

My second laptop: IBM ThinkPad R52 (2005-2013)
When the charger eventually died entirely in 2005, I decided to look for a new laptop. I had only one demand: it should have a charger that lasts, or that is at least cheap to replace. I found more than I was looking for. I bought an IBM ThinkPad R52, and it was love at first use. My IBM laptop was the antithesis of the Apple iBook, and not just in terms of design (a rectangular box available in all colours, as long as it’s black). More importantly, the entire machine was built to last, built to be reliable, and built to be repairable.
Circular and modular products are all the hype these days, but my IBM Thinkpad was precisely that. Every component in the laptop could be screwed off and replaced, the sturdy case (with steel hinges) was spacious enough to make serious upgrades possible, and it had every connector you can imagine. My 2005 machine still works today, and I am convinced that it could keep working for another 500 years if given proper care. Like a pre-industrial windmill, its lifetime could be extended endlessly by gradually repairing and replacing every part that it consists of. The question is not how we can evolve towards a circular economy, but instead why we continue to evolve away from it.
My ThinkPad was more expensive to buy than my iBook, but at least I wasn’t spending all that money on a cute design, but on a decent computer. The charger gave no problems, and when I lost it during a trip and had to buy a new one, I could do so for a fair price. Little did I know that my happy purchase was going to be a once-in-a-lifetime experience.
The IBM ThinkPad R52 from 2005.

My third laptop: Lenovo ThinkPad T430 (2013-2017)
Fast forward to 2013. I am now living in Spain and running Low-tech Magazine. I am still working on my IBM ThinkPad R52, but there are some problems on the horizon. First of all, Microsoft will soon force me to upgrade my operating system, because support for Windows XP ends in 2014. I don’t feel like spending a couple of hundred euros on a new operating system that would be too demanding for my old laptop anyway. Furthermore, the laptop has become a bit slow, even after being restored to its factory settings. In short, I fell into the trap that the hardware and software industries have set up for us, and made the mistake of thinking that I needed a new laptop.
Having been so fond of my Thinkpad, it was only logical to get a new one. Here’s the problem: in 2005, shortly after I had bought my first Thinkpad, Lenovo, a Chinese manufacturer that is now the largest computer maker in the world, bought IBM's PC business. The Chinese don’t have a reputation for building quality products, but since Lenovo was still selling Thinkpads that looked almost identical to those built by IBM, I decided to try my luck and bought a Lenovo Thinkpad T430 in April 2013. At a steep price, but I assumed that quality had to be paid for.
My mistake was clear from the beginning. I had to send the new laptop back twice because its case was deformed. When I finally got one that didn’t wobble on my desk, I quickly ran into another problem: the keys started breaking off. I can still remember my disbelief when it happened for the first time. The IBM Thinkpad is known for its robust keyboard. If you want to break it, you need a hammer. Lenovo obviously didn’t find that so important and had quietly replaced the keyboard with an inferior one. Mind you, I can be an aggressive typist, but I have never broken any other keyboard.
I grumpily ordered a replacement key for 15 euros. In the months after that, replacement keys became a recurring cost. After spending more than 100 euros on plastic keys, which would soon break again, I calculated that my keyboard had 90 keys and that replacing them all just once would cost me 1,350 euros. I stopped using the keyboard altogether, temporarily finding a solution in an external keyboard. However, this was impractical, especially for working away from home – and why else would I want a laptop?
There was no getting around it anymore: I needed a new laptop. Again. But which one? For sure it would not be one made by Lenovo or Apple.
Replacing all keys on my Lenovo T430 would have cost me 1,350 euros.

My fourth laptop: IBM ThinkPad X60s (2017-now)
Not finding what I was looking for, I decided to go back in time. By now, it had dawned on me that new laptops are of inferior quality compared to older laptops, even if they carry a much higher price tag. I found out that Lenovo switched keyboards around 2011 and started searching auction sites for Thinkpads built before that year. I could have changed back to my ThinkPad R52 from 2005, but by now, I had become accustomed to a Spanish keyboard, and the R52 had a Belgian one.
In April 2017, I settled on a used ThinkPad X60s from 2006. As of December 2020, the machine has been in operation for almost four years and is 14 years old – three to five times the age at which the average laptop is replaced. If I loved my ThinkPad R52 from 2005, I adore my ThinkPad X60s from 2006. It’s just as sturdily built – it has already survived a drop from a table onto a concrete floor – but it’s much smaller and lighter: 1.43 kg versus 3.2 kg.
My 2006 Thinkpad X60s does everything I want it to do. I use it to write articles, do research, and maintain the websites. I have also used it on-stage to give lectures, projecting images on a large screen. There’s only one thing missing on my laptop, especially nowadays, and that’s a webcam. I solve this by firing up the cursed 2013 laptop with the broken keys whenever I need to, happy to give it some use that doesn’t involve its keyboard. It could also be solved by a switch to the Thinkpad X200 from 2008, which is a newer version of the same model and has a webcam.
How to make an old laptop run like it’s new
Not buying any more new laptops is not as simple as buying a used laptop. It’s advisable to upgrade the hardware, and it’s essential to downgrade the software. There are two things you need to do:

1. Use low energy software
My laptop runs on Linux Lite, one of several open-source operating systems specially designed to work on old computers. The use of a Linux operating system is not a mere suggestion: there’s no way you’re going to revive an old laptop if you stick to Microsoft Windows or Apple’s operating system, because the machine would freeze instantly. Linux Lite does not have the flashy visuals of the newest Apple and Windows interfaces, but it has a familiar graphical interface and looks anything but obsolete. It takes very little space on the hard disk and demands even less computing power. The result is that an old laptop, despite its limited specifications, runs smoothly. I also use light browsers: Vivaldi and Midori.
Having used Microsoft Windows for a long time, I find Linux operating systems to be remarkably better, even more so because they are free to download and install. Furthermore, Linux operating systems do not steal your personal data and do not try to lock you in, as the newest operating systems from both Microsoft and Apple do. That said, even with Linux, obsolescence cannot be ruled out. For example, Linux Lite will stop supporting 32-bit computers in 2021, which means that I will soon have to look for an alternative operating system, or buy a slightly younger 64-bit laptop.

2. Replace the hard disk drive with a solid-state drive
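For anyone checking whether an old laptop can make the jump to a 64-bit operating system, the processor’s capabilities can be inspected from any running Linux system. A minimal sketch (the `lm` flag in /proc/cpuinfo signals 64-bit capability; `uname -m` only reports what the currently installed kernel is):

```shell
# Report whether the CPU can run a 64-bit operating system.
# The installed kernel may be 32-bit even on 64-bit-capable hardware,
# so inspect the CPU flags rather than relying on `uname -m` alone.
if grep -q -w lm /proc/cpuinfo; then
    echo "CPU is 64-bit capable (x86_64)"
else
    echo "CPU is 32-bit only"
fi

# Architecture of the running kernel, e.g. i686 (32-bit) or x86_64 (64-bit):
uname -m
```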
In recent years, solid-state drives (SSD) have become available and affordable, and they are much faster than hard disk drives (HDD). Although you can revive an old laptop merely by switching to a lightweight operating system, if you also replace the hard disk drive with a solid-state drive, you’ll have a machine that is just as fast as a brand new laptop. Depending on the storage capacity you want, an SSD will cost you between 20 euros (120 GB) and 100 euros (960 GB).
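Before ordering a replacement drive, it is easy to check whether a machine still has a spinning disk. A one-liner for Linux systems, using the standard `lsblk` tool:

```shell
# List all physical drives. In the ROTA column, 1 means a rotational
# (spinning) hard disk and 0 means a solid-state drive.
lsblk --nodeps --output NAME,ROTA,SIZE,MODEL
```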
Installation is pretty straightforward and well documented online. Solid-state drives run silently and are more resistant to physical shock, but they have a shorter life expectancy than hard disk drives. Mine has now been working for almost four years. Both from an environmental and a financial viewpoint, an old laptop with an SSD seems a much better choice than a new laptop, even if the solid-state drive needs replacing now and then.

Spare laptops
Meanwhile, my strategy has evolved. In 2018 and early 2020, I bought two identical models for a similar price, to use as spare laptops. I now plan to keep working on these machines for as long as possible, having more than sufficient spare parts available. Since I bought the laptop, it has had two technical issues. After roughly a year of use, the fan died. I had it repaired overnight in a tiny and messy IT shop run by a Chinese man in Antwerp, Belgium. The Chinese may not have a reputation for building quality products, but they sure know how to fix things. The man said that my patched fan would run for another six months, but it’s still working more than two years later.
Then, last year, my X60s suddenly refused to charge its battery, an issue that had also appeared with my cursed 2013 laptop. It seems to be a common problem with ThinkPads, but I have not been able to solve it yet. Neither did I really have to, because I had a spare laptop ready and started using it whenever I needed or wanted to work outside.
Inside the ThinkPad X60s. Source: Hardware Maintenance Manual.

The magical SD-card
This is the moment to introduce you to my magical SD-card, another hardware upgrade that facilitates the use of old (but also new) laptops. Many people store their personal documents on their laptop's hard drive and then, if all goes well, make backups to external storage media. I do it the other way around.
I have all my data on a 128 GB SD-card, which I can plug into any of the Thinkpads that I own. I then make monthly backups of the SD-card, which I store on an external storage medium, as well as regular backups of the documents that I am working on, which I temporarily store on the drive of the laptop that I am working on. This has proven to be very reliable, at least for me: I have stopped losing work due to computer problems and insufficient backups.
The other advantage is that I can work on any laptop that I want and that I’m not dependent on a particular machine to access my work. You can get similar advantages when you keep all your data in the cloud, but the SD-card is the more sustainable option, and it works without internet access.
Hypothetically, I could have up to two hard drive failures in one day and keep working as if nothing happened. Since I am now using both laptops alternately – one with battery, the other one without – I can also leave them at different locations and cycle between these places while carrying only the SD-card in my wallet. Try that with your brand new, expensive laptop. I can also use my laptops together if I need an extra screen.
A 128 GB SD-card will set you back between 20 and 40 euros, depending on the brand. In combination with a hard disk drive, the SD-card also increases the performance of an old laptop and can be an alternative to installing a solid-state drive. My spare laptop does not have an SSD and can be slow when browsing heavy-weight websites. However, thanks to the SD-card, opening a folder or document happens almost instantly, as does scrolling through a document or saving it. The SD-card also keeps the hard disk running smoothly, because the disk remains mostly empty. I don’t know how practical using an SD-card is with other laptops, but all my ThinkPads have a slot for one.

The costs
Let’s make a complete cost calculation, including the investment in spare laptops and SD-card, and using today’s prices for both solid-state drives and SD-cards, which have become much cheaper since I bought them:
- ThinkPad X60s: 50 euro
- ThinkPad X60s spare laptop: 60 euro
- ThinkPad X60 spare laptop: 75 euro
- Two replacement batteries: 50 euro
- 240 GB solid-state drive: 30 euro
- 128 GB SD-card: 20 euro
- Total: 285 euros
Even if you buy all of this, you will only have spent 285 euros. For that price, you may be able to buy the crappiest new laptop on the market, but it surely won’t come with two spare laptops. If you manage to keep working with this lot for ten years, your laptop costs come to 28.5 euros per year. You may have to replace a few solid-state drives and SD-cards along the way, but that won’t make much difference. Furthermore, you avoid the ecological damage caused by the production of a new laptop every 5.7 years.
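For those who want to redo the sums, the arithmetic is trivial (figures taken from the list and the text above):

```shell
# Total investment: the six items from the list above.
total=$((50 + 60 + 75 + 50 + 30 + 20))
echo "Total investment: ${total} euros"

# Cost per year if the whole setup lasts ten years.
awk -v t="$total" 'BEGIN { printf "%.1f euros per year\n", t / 10 }'

# For comparison: three new laptops costing 5,000 euros over the
# 17 years between 2000 and 2017.
awk 'BEGIN { printf "Previous laptops: %.0f euros per year\n", 5000 / 17 }'
```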
Although I have used my Thinkpad X60s as an example, the same strategy works with other Thinkpad models – here’s an overview of all historical models – and laptops from other brands (which I know nothing about). If you prefer not to buy on auction sites, you can walk to the nearest pawnshop and get a used laptop with a guarantee. The chances are that you don’t even need to buy anything, as many people have old laptops lying around.
There’s no need to go back to a 2006 machine. I hope it’s clear that I am trying to make a statement here, and I probably went as far back as one can while keeping things practical. My first try was a used ThinkPad X30 from 2002, but that was one step too far: it uses a different type of charger, it has no SD-card slot, and I could not get the wireless internet connection to work. For many people, it may make sense to choose a somewhat younger laptop. That will give you a webcam and a 64-bit architecture, which makes things easier. Of course, you can also try to beat me and go back to the 1990s, but then you’ll have to do without USB and a wireless internet connection.
Your choice of laptop also depends on what you want to do with it. If you use it mainly for writing, surfing the web, communication, and entertainment, you can do it as cheaply as I did. If you do graphical or audiovisual work, it’s more complicated, because in that case you’re probably an Apple user. The same strategy could be applied to a somewhat younger and more expensive laptop, but it would mean switching from a Mac to a Linux operating system. When it comes to office applications, Linux is clearly better than its commercial alternatives. For lack of experience, I cannot tell you if that holds for other software as well.

This is a hack, not a new economic model
Although capitalism could provide us with used laptops for decades to come, the strategy outlined above should be considered a hack, not an economic model. It’s a way to deal with, or escape from, an economic system that tries to force you and me to consume as much as possible. It’s an attempt to break that system, but it’s not a solution in itself. We need another economic model, in which we build all laptops like pre-2011 ThinkPads. As a consequence, laptop sales would go down, but that’s precisely what we need. Furthermore, with today's computing efficiency, we could significantly reduce the operational and embodied energy use of a laptop if we reversed the trend towards ever higher functionality.
Both hardware and software changes drive the fast obsolescence of computers, but software has now become the most crucial factor. A 15-year-old computer has all the hardware you need, but it’s not compatible with the newest (commercial) software. This is true for operating systems and for every type of software, from games to office applications to websites. Consequently, to make laptop use more sustainable, the software industry would need to start making every new version of its products lighter instead of heavier. The lighter the software, the longer our laptops will last, and the less energy we will need to use and produce them.
Kris De Decker
Images: Jordi Manrique Corominas, Adriana Parra. Proofreading: Eric Wagner.
 Deng, Liqiu, Callie W. Babbitt, and Eric D. Williams. "Economic-balance hybrid LCA extended with uncertainty analysis: case study of a laptop computer." Journal of Cleaner Production 19.11 (2011): 1198-1206. https://www.sciencedirect.com/science/article/abs/pii/S0959652611000801
 International Renewable Energy Agency (IRENA). https://www.irena.org/solar
 André, Hampus, Maria Ljunggren Söderman, and Anders Nordelöf. "Resource and environmental impacts of using second-hand laptop computers: A case study of commercial reuse." Waste Management 88 (2019): 268-279. https://www.sciencedirect.com/science/article/pii/S0956053X19301825
 Bihouix, Philippe. The Age of Low Tech: Towards a Technologically Sustainable Civilization. Policy Press, 2020. https://bristoluniversitypress.co.uk/the-age-of-low-tech
 Kasulaitis, Barbara V., et al. "Evolving materials, attributes, and functionality in consumer electronics: Case study of laptop computers." Resources, conservation and recycling 100 (2015): 1-10. https://www.sciencedirect.com/science/article/abs/pii/S0921344915000683
 Lenovo took over IBM’s personal computer division in 2005, so strictly speaking I bought a Lenovo Thinkpad X60s. However, the hardware had not yet changed, and the laptop carries the new brand name alongside that of IBM.
From the Neolithic to the beginning of the twentieth century, coppiced woodlands, pollarded trees, and hedgerows provided people with a sustainable supply of energy, materials, and food.
Advocating for the use of biomass as a renewable source of energy – replacing fossil fuels – has become controversial among environmentalists. The comments on the previous article, which discussed thermoelectric stoves, illustrate this:
- “As the recent film Planet of the Humans points out, biomass a.k.a. dead trees is not a renewable resource by any means, even though the EU classifies it as such.”
- “How is cutting down trees sustainable?”
- “Article fails to mention that a wood stove produces more CO2 than a coal power plant for every ton of wood/coal that is burned.”
- “This is pure insanity. Burning trees to reduce our carbon footprint is oxymoronic.”
- “The carbon footprint alone is just horrifying.”
- “The biggest problem with burning anything is once it's burned, it's gone forever.”
- “The only silly question I can add to to the silliness of this piece, is where is all the wood coming from?”
In contrast to what the comments suggest, the article does not advocate the expansion of biomass as an energy source. Instead, it argues that biomass fires that are already burning – used by roughly 40% of today’s global population – could also produce electricity as a by-product if they were outfitted with thermoelectric modules. Nevertheless, several commenters maintained their criticism after reading the article more carefully. One of them wrote: “We should aim to eliminate the burning of biomass globally, not make it more attractive.”
Apparently, high-tech thinking has permeated the minds of (urban) environmentalists to such an extent that they view biomass as an inherently troublesome energy source – similar to fossil fuels. To be clear, critics are right to call out unsustainable practices in biomass production. However, these are the consequences of a relatively recent, “industrial” approach to forestry. When we look at historical forest management practices, it becomes clear that biomass is potentially one of the most sustainable energy sources on this planet.
Coppicing: Harvesting Wood Without Killing Trees
Nowadays, most wood is harvested by killing trees. Before the Industrial Revolution, a lot of wood was harvested from living trees, which were coppiced. The principle of coppicing is based on the natural ability of many broad-leaved species to regrow from damaged stems or roots – damage caused by fire, wind, snow, animals, pathogens, or (on slopes) falling rocks. Coppice management involves the cutting down of trees close to ground level, after which the base – called the “stool” – develops several new shoots, resulting in a multi-stemmed tree.
A coppice stool. Image: Geert Van der Linden.
A recently coppiced patch of oak forest. Image: Henk vD. (CC BY-SA 3.0)
Coppice stools in Surrey, England. Image: Martinvl (CC BY-SA 4.0)
When we think of a forest or a tree plantation, we imagine it as a landscape stacked with tall trees. However, until the beginning of the twentieth century, at least half of the forests in Europe were coppiced, giving them a more bush-like appearance.  The coppicing of trees can be dated back to the Stone Age, when people built pile dwellings and trackways crossing prehistoric fenlands using thousands of branches of equal size – a feat that can only be accomplished by coppicing.
The approximate historical range of coppice forests in the Czech Republic (above, in red) and in Spain (below, in blue). Source: "Coppice forests in Europe".
Ever since then, the technique formed the standard approach to wood production – not just in Europe but almost all over the world. Coppicing expanded greatly during the eighteenth and nineteenth centuries, when population growth and the rise of industrial activity (glass, iron, tile and lime manufacturing) put increasing pressure on wood reserves.
Short Rotation Cycles
Because the young shoots of a coppiced tree can exploit an already well-developed root system, a coppiced tree produces wood faster than a tall tree. Or, to be more precise: although its photosynthetic efficiency is the same, a tall tree provides more biomass below ground (in the roots) while a coppiced tree produces more biomass above ground (in the shoots) – which is clearly more practical for harvesting.  Partly because of this, coppicing was based on short rotation cycles, often of around two to four years, although both yearly rotations and rotations up to 12 years or longer also occurred.
Coppice stools with different rotation cycles. Images: Geert Van der Linden.
Because of the short rotation cycles, a coppice forest was a very quick, regular and reliable supplier of firewood. Often, it was cut up into a number of equal compartments that corresponded to the number of years in the planned rotation. For example, if the shoots were harvested every three years, the forest was divided into three parts, and one of these was coppiced each year. Short rotation cycles also meant that it took only a few years before the carbon released by the burning of the wood was compensated by the carbon that was absorbed by new growth, making a coppice forest truly carbon neutral. In very short rotation cycles, new growth could even be ready for harvest by the time the old growth wood had dried enough to be burned.
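The compartment scheme described above is, in effect, a simple modular rotation. As an illustrative sketch (not from the original article), a few lines of Python show how the cutting schedule cycles so that each compartment always gets one full rotation of regrowth between harvests:

```python
# Illustrative sketch, not a historical source: a coppice divided into equal
# compartments, one of which is cut each year. With a 3-year rotation the
# forest is split into 3 parts, so every stool regrows for exactly 3 years
# between harvests.

def compartment_to_cut(year: int, rotation_years: int) -> int:
    """Return the index (0-based) of the compartment coppiced in a given year."""
    return year % rotation_years

# Example: a 3-year rotation followed over six consecutive years.
schedule = [compartment_to_cut(y, rotation_years=3) for y in range(6)]
print(schedule)  # → [0, 1, 2, 0, 1, 2]: each compartment is cut every third year
```

The same modular logic extends to any rotation length: a 12-year coppice simply becomes 12 compartments cut in turn.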
In some tree species, the stump sprouting ability decreases with age. After several rotations, these trees were either harvested in their entirety and replaced by new trees, or converted into a coppice with a longer rotation. Other tree species resprout well from stumps of all ages, and can provide shoots for centuries, especially on rich soils with a good water supply. Surviving coppice stools can be more than 1,000 years old.
Biodiversity
A coppice can be called a “coppice forest” or a “coppice plantation”, but in reality it was neither a forest nor a plantation – perhaps something in between. Although managed by humans, coppice forests were not environmentally destructive – quite the contrary. Harvesting wood from living trees instead of killing them is beneficial for the life forms that depend on them. Coppice forests can have a richer biodiversity than unmanaged forests, because they always contain areas with different stages of light and growth. None of this is true in industrial wood plantations, which support little or no plant and animal life, and which have longer rotation cycles (of at least twenty years).
Coppice stools in the Netherlands. Image: K. Vliet (CC BY-SA 4.0)
Sweet chestnut coppice at Flexham Park, Sussex, England. Image: Charlesdrakew, public domain.
Our forebears also cut down tall, standing trees with large-diameter stems – just not for firewood. Large trees were only “killed” when large timber was required, for example for the construction of ships, buildings, bridges, and windmills.  Coppice forests could contain tall trees (a “coppice-with-standards”), which were left to grow for decades while the surrounding trees were regularly pruned. However, even these standing trees could be partly coppiced, for example by harvesting their side branches while they were alive (shredding).
Multipurpose Trees
The archetypical wood plantation promoted by the industrial world involves regularly spaced rows of trees in even-aged, monocultural stands, providing a single output – timber for construction, pulpwood for paper production, or fuelwood for power plants. In contrast, trees in pre-industrial coppice forests had multiple purposes. They provided firewood, but also construction materials and animal fodder.
The targeted wood dimensions, determined by the use of the shoots, set the rotation period of the coppice. Because not every type of wood was suited for every type of use, coppiced forests often consisted of a variety of tree species at different ages. Several age classes of stems could even be rotated on the same coppice stool (“selection coppice”), and the rotations could evolve over time according to the needs and priorities of the economic activities.
A small woodland with a diverse mix of coppiced, pollarded and standard trees. Image: Geert Van der Linden.
Coppiced wood was used to build almost anything that was needed in a community.  For example, young willow shoots, which are very flexible, were braided into baskets and crates, while sweet chestnut prunings, which do not expand or shrink after drying, were used to make all kinds of barrels. Ash and goat willow, which yield straight and sturdy wood, provided the material for making the handles of brooms, axes, shovels, rakes and other tools.
Young hazel shoots were split along the entire length, braided between the wooden beams of buildings, and then sealed with loam and cow manure – the so-called wattle-and-daub construction. Hazel shoots also kept thatched roofs together. Alder and willow, which have almost limitless life expectancy under water, were used as foundation piles and river bank reinforcements. The construction wood that was taken out of a coppice forest did not diminish its energy supply: because the artefacts were often used locally, at the end of their lives they could still be burned as firewood.
Harvesting leaf fodder in Leikanger kommune, Norway. Image: Leif Hauge. Source: 
Coppice forests also supplied food. On the one hand, they provided people with fruits, berries, truffles, nuts, mushrooms, herbs, honey, and game. On the other hand, they were an important source of winter fodder for farm animals. Before the Industrial Revolution, many sheep and goats were fed with so-called “leaf fodder” or “leaf hay” – leaves with or without twigs. 
Elm and ash were among the most nutritious species, but sheep also got birch, hazel, linden, bird cherry and even oak, while goats were also fed with alder. In mountainous regions, horses, cattle, pigs and silk worms could be given leaf hay too. Leaf fodder was grown in rotations of three to six years, when the branches provided the highest ratio of leaves to wood. When the leaves were eaten by the animals, the wood could still be burned.
Pollards & Hedgerows
Coppice stools are vulnerable to grazing animals, especially when the shoots are young. Therefore, coppice forests were usually protected against animals by building a ditch, fence or hedge around them. In contrast, pollarding allowed animals and trees to be mixed on the same land. Pollarded trees were pruned like coppices, but to a height of at least two metres to keep the young shoots out of reach of grazing animals.
Pollarded trees in Segovia, Spain. Image: Ecologistas en Acción.
Wooded meadows and wood pastures – mosaics of pasture and forest – combined the grazing of animals with the production of fodder, firewood and/or construction wood from pollarded trees. “Pannage” or “mast feeding” was the method of sending pigs into pollarded oak forests during autumn, where they could feed on fallen acorns. The system formed the mainstay of pork production in Europe for centuries.  The “meadow orchard” or “grazed orchard” combined fruit cultivation and grazing – pollarded fruit trees offered shade to the animals, while the animals could not reach the fruit but fertilised the trees.
Forest or pasture? Something in between. A "dehesa" (pig forest farm) in Spain. Image by Basotxerri (CC BY-SA 4.0).
A meadow orchard surrounded by a living hedge in Rijkhoven, Belgium. Image: Geert Van der Linden.
While agriculture and forestry are now strictly separated activities, in earlier times the farm was the forest and vice versa. It would make a lot of sense to bring them back together, because agriculture and livestock production – not wood production – are the main drivers of deforestation. If trees provide animal fodder, meat and dairy production need not lead to deforestation. If crops can be grown in fields with trees, agriculture need not lead to deforestation. Forest farms would also improve animal welfare, soil fertility and erosion control.
Line Plantings
Extensive plantations could consist of coppiced or pollarded trees, and were often managed as a commons. However, coppicing and pollarding were not techniques seen only in large-scale forest management. Small woodlands in between fields or next to a rural house and managed by an individual household would be coppiced or pollarded. A lot of wood was also grown as line plantings around farmyards, fields and meadows, near buildings, and along paths, roads and waterways. Here, lopped trees and shrubs could also appear in the form of hedgerows, thickly planted hedges. A lot of wood was thus grown outside "forests" or "plantations". 
Hedge landscape in Normandy, France, around 1940. Image: W Wolny, public domain.
Line plantings in Flanders, Belgium. Detail from the Ferraris map, 1771-78.
Although line plantings are usually associated with the use of hedgerows in England, they were common in large parts of Europe. In 1804, English historian Abbé Mann expressed his surprise when he wrote about his trip to Flanders (today part of Belgium): “All fields are enclosed with hedges, and thick set with trees, insomuch that the whole face of the country, seen from a little height, seems one continued wood”. Typical for the region was the large number of pollarded trees. 
Like coppice forests, line plantings were diverse and provided people with firewood, construction materials and leaf fodder. However, unlike coppice forests, they had extra functions because of their specific location.  One of these was plot separation: keeping farm animals in, and keeping wild animals or cattle grazing on common lands out. Various techniques existed to make hedgerows impenetrable, even for small animals such as rabbits. Around meadows, hedgerows or rows of very closely planted pollarded trees (“pollarded tree hedges”) could stop large animals such as cows. If willow wicker was braided between them, such a line planting could also keep small animals out. 
Detail of a yew hedge. Image: Geert Van der Linden.
Hedgerow. Image: Geert Van der Linden.
Pollarded tree hedge in Nieuwekerken, Belgium. Image: Geert Van der Linden.
Coppice stools in a pasture. Image: Jan Bastiaens.
Trees and line plantings also offered protection against the weather. Line plantings protected fields, orchards and vegetable gardens against the wind, which could erode the soil and damage the crops. In warmer climates, trees could shield crops from the sun and fertilize the soil. Pollarded lime trees, which have very dense foliage, were often planted right next to wattle-and-daub buildings in order to protect them from wind, rain and sun. 
Dunghills were protected by one or more trees, preventing the valuable resource from evaporating due to sun or wind. In the yard of a watermill, the wooden water wheel was shielded by a tree to prevent the wood from shrinking or expanding in times of drought or inactivity. 
A pollarded yew tree protects a water wheel. Image: Geert Van der Linden.
Pollarded lime trees protect a farm building in Nederbrakel, Belgium. Image: Geert Van der Linden.
Location Matters
Along paths, roads and waterways, line plantings had many of the same location-specific functions as on farms. Cattle and pigs were herded along dedicated droveways lined with hedgerows, coppices and/or pollards. When the railroads appeared, line plantings prevented collisions with animals. They protected road travellers from the weather, and marked the route so that people and animals would not stray from the road in a snowy landscape. They prevented soil erosion at riverbanks and on hollow roads.
All these functions of line plantings could also be fulfilled by dead wood fences, which can be moved more easily than hedgerows, take up less space, don’t compete for light and food with crops, and can be ready in a short time.  However, in times and places where wood was scarce, a living hedge was often preferred (and sometimes obligatory) because it was a continuous wood producer, while a dead wood fence was a continuous wood consumer. A dead wood fence may save space and time on the spot, but it implies that the wood for its construction and maintenance is grown and harvested elsewhere in the surroundings.
Pollarded tree hedge in Belgium. Image: Geert Van der Linden.
Local use of wood resources was maximised. For example, the tree that was planted next to the waterwheel was not just any tree. It was red dogwood or elm, the wood that was best suited for constructing the interior gearwork of the mill. When a new part was needed for repairs, the wood could be harvested right next to the mill. Likewise, line plantings along dirt roads were used for the maintenance of those roads. The shoots were tied together in bundles and used as a foundation or to fill up holes. Because the trees were coppiced or pollarded and not cut down, no function was ever at the expense of another.
Nowadays, when people advocate for the planting of trees, targets are set in terms of forested area or the number of trees, and little attention is given to their location – which could even be on the other side of the world. However, as these examples show, planting trees nearby and in the right location can significantly optimise their potential.
Shaped by Limits
Coppicing has largely disappeared in industrial societies, although pollarded trees can still be found along streets and in parks. Their prunings, which once sustained entire communities, are now considered waste products. If it worked so well, why was coppicing abandoned as a source of energy, materials and food? The answer is short: fossil fuels. Our forebears relied on coppice because they had no access to fossil fuels, and we don’t rely on coppice because we have.
Our forebears relied on coppice because they had no access to fossil fuels, and we don’t rely on coppice because we have
Most obviously, fossil fuels have replaced wood as a source of energy and materials. Coal, gas and oil took the place of firewood for cooking, space heating, water heating and industrial processes based on thermal energy. Metal, concrete and brick – materials that had been around for many centuries – only became widespread alternatives to wood after they could be made with fossil fuels, which also brought us plastics. Artificial fertilizers – products of fossil fuels – boosted the supply and the global trade of animal fodder, making leaf fodder obsolete. The mechanisation of agriculture – driven by fossil fuels – led to farming on much larger plots along with the elimination of trees and line plantings on farms.
Less obvious, but at least as important, is that fossil fuels have transformed forestry itself. Nowadays, the harvesting, processing and transport of wood rely heavily on fossil fuels, while in earlier times these tasks were entirely based on human and animal power – which themselves get their fuel from biomass. It was the limitations of these power sources that created and shaped coppice management all over the world.
Harvesting wood from pollarded trees in Belgium, 1947. Credit: Zeylemaker, Co., Nationaal Archief (CCO)
Transporting firewood in the Basque Country. Source: Notes on pollards: best practices' guide for pollarding. Gipuzkoako Foru Aldundia-Diputación Foral de Gipuzkoa, 2014.
Wood was harvested and processed by hand, using simple tools such as knives, machetes, billhooks, axes and (later) saws. Because the labour requirements of harvesting trees by hand increase with stem diameter, it was cheaper and more convenient to harvest many small branches instead of cutting down a few large trees. Furthermore, there was no need to split coppiced wood after it was harvested. Shoots were cut to a length of around one metre, and tied together in “faggots”, which were an easy size to handle manually.
It was the limitations of human and animal power that created and shaped coppice management all over the world
To transport firewood, our forebears relied on animal drawn carts over often very bad roads. This meant that, unless it could be transported over water, firewood had to be harvested within a radius of at most 15-30 km from the place where it was used.  Beyond those distances, the animal power required for transporting the firewood was larger than its energy content, and it would have made more sense to grow firewood on the pasture that fed the draft animal.  There were some exceptions to this rule. Some industrial activities, like iron and potash production, could be moved to more distant forests – transporting iron or potash was more economical than transporting the firewood required for their production. However, in general, coppice forests (and of course also line plantings) were located in the immediate vicinity of the settlement where the wood was used.
In short, coppicing appeared in a context of limits. Because of its faster growth and versatile use of space, it maximised the local wood supply of a given area. Because of its use of small branches, it made manual harvesting and transporting as economical and convenient as possible.
Can Coppicing be Mechanised?
From the twentieth century onwards, harvesting was done by motor saw, and since the 1980s wood has increasingly been harvested by powerful vehicles that can fell entire trees and cut them up on the spot in a matter of minutes. Fossil fuels have also brought better transportation infrastructures, which have unlocked wood reserves that were inaccessible in earlier times. Consequently, firewood can now be grown on one side of the planet and consumed on the other.
The use of fossil fuels adds carbon emissions to what used to be a completely carbon neutral activity, but much more important is that it has pushed wood production to a larger – unsustainable – scale.  Fossil fueled transportation has destroyed the connection between supply and demand that governed local forestry. If the wood supply is limited, a community has no other choice than to make sure that the wood harvest rate and the wood renewal rate are in balance. Otherwise, it risks running out of fuelwood, craft wood and animal fodder, and would have to be abandoned.
Mechanically harvested willow coppice plantation. Shortly after coppicing (right), 3-year-old growth (left). Image: Lignovis GmbH (CC BY-SA 4.0).
Likewise, fully mechanised harvesting has pushed forestry to a scale that is incompatible with sustainable forest management. Our forebears did not cut down large trees for firewood, because it was not economical. Today, the forest industry does exactly that because mechanisation makes it the most profitable thing to do. Compared to industrial forestry, where one worker can harvest up to 60 m³ of wood per hour, coppicing is extremely labour-intensive. Consequently, it cannot compete in an economic system that fosters the replacement of human labour with machines powered by fossil fuels.
Coppicing cannot compete in an economic system that fosters the replacement of human labour with machines powered by fossil fuels
Some scientists and engineers have tried to solve this by demonstrating coppice harvesting machines.  However, mechanisation is a slippery slope. The machines are only practical and economical on somewhat larger tracts of woodland (>1 ha) which contain coppiced trees of the same species and the same age, with only one purpose (often fuelwood for power generation). As we have seen, this excludes many older forms of coppice management, such as the use of multipurpose trees and line plantings. Add fossil fueled transportation to the mix, and the result is a type of industrial coppice management that brings few improvements.
Coppiced trees along a brook in 's Gravenvoeren, Belgium. Image: Geert Van der Linden.
Sustainable forest management is essentially local and manual. This doesn’t mean that we need to copy the past to make biomass energy sustainable again. For example, the radius of the wood supply could be increased by low energy transport options, such as cargo bikes and aerial ropeways, which are much more efficient than horse or ox drawn carts over bad roads, and which could be operated without fossil fuels. Hand tools have also improved in terms of efficiency and ergonomics. We could even use motor saws that run on biofuels – a much more realistic application than their use in car engines.
The Past Lives On
This article has compared industrial biomass production with historical forms of forest management in Europe, but in fact there was no need to look to the past for inspiration. The 40% of the global population in poorer societies who still burn wood for cooking, water heating and/or space heating are not clients of industrial forestry. Instead, they obtain firewood in much the same way we did in earlier times, although the tree species and the environmental conditions can be very different.
A 2017 study calculated that wood consumption by people in “developing” societies – accounting for 55% of the global wood harvest and 9-15% of total global energy consumption – causes only 2-8% of anthropogenic climate impacts.  Why so little? Because around two-thirds of the wood harvested in developing societies is harvested sustainably, write the scientists. People collect mainly dead wood, they grow a lot of wood outside the forest, they coppice and pollard trees, and they prefer multipurpose trees, which are too valuable to cut down. The motives are the same as those of our ancestors: people have no access to fossil fuels and are thus tied to a local wood supply, which needs to be harvested and transported manually.
African women carrying firewood. (CC BY-SA 4.0)
These numbers confirm that it is not biomass energy that’s unsustainable. If the whole of humanity lived like the 40% that still burns biomass regularly, climate change would not be an issue. What is really unsustainable is a high energy lifestyle. We obviously cannot sustain a high-tech industrial society on coppice forests and line plantings alone. But the same is true for any other energy source, including uranium and fossil fuels.
Written by Kris De Decker. Proofread by Alice Essam.
 Multiple references:
Unrau, Alicia, et al. Coppice forests in Europe. University of Freiburg, 2018.
Notes on pollards: best practices' guide for pollarding. Gipuzkoako Foru Aldundia-Diputación Foral de Gipuzkoa, 2014.
Aarden wallen in Europa, in "Tot hier en niet verder: historische wallen in het Nederlandse landschap", Henk Baas, Bert Groenewoudt, Pim Jungerius and Hans Renes, Rijksdienst voor het Cultureel Erfgoed, 2012.
 Logan, William Bryant. Sprout lands: tending the endless gift of trees. WW Norton & Company, 2019.
 Holišová, Petra, et al. "Comparison of assimilation parameters of coppiced and non-coppiced sessile oaks". Forest-Biogeosciences and Forestry 9.4 (2016): 553.
 Perlin, John. A forest journey: the story of wood and civilization. The Countryman Press, 2005.
 Most of this information comes from a Belgian publication (in Dutch): Handleiding voor het inventariseren van houten beplantingen met erfgoedwaarde. Geert Van der Linden, Nele Vanmaele, Koen Smets en Annelies Schepens, Agentschap Onroerend Erfgoed, 2020. For a good (but concise) reference in English, see Rotherham, Ian. Ancient Woodland: history, industry and crafts. Bloomsbury Publishing, 2013.
 While leaf fodder was used all over Europe, it was especially widespread in mountainous regions, such as Scandinavia, the Alps and the Pyrenees. For example, in Sweden in 1850, 1.3 million sheep and goats consumed a total of 190 million sheaves annually, for which at least 1 million hectares deciduous woodland was exploited, often in the form of pollards. The harvest of leaf fodder predates the use of hay as winter fodder. Branches could be cut with stone tools, while cutting grass requires bronze or iron tools. While most coppicing and pollarding was done in winter, harvesting leaf fodder logically happened in summer. Bundles of leaf fodder were often put in the pollarded trees to dry. References:
Logan, William Bryant. Sprout lands: tending the endless gift of trees. WW Norton & Company, 2019.
Slotte H., "Harvesting of leaf hay shaped the Swedish landscape", Landscape Ecology 16.8 (2001): 691-702.
 Wealleans, Alexandra L. "Such as pigs eat: the rise and fall of the pannage pig in the UK". Journal of the Science of Food and Agriculture 93.9 (2013): 2076-2083.
 This information is based on several Dutch language publications:
Handleiding voor het inventariseren van houten beplantingen met erfgoedwaarde. Geert Van der Linden, Nele Vanmaele, Koen Smets en Annelies Schepens, Agentschap Onroerend Erfgoed, 2020.
Handleiding voor het beheer van hagen en houtkanten met erfgoedwaarde. Thomas Van Driessche, Agentschap Onroerend Erfgoed, 2019
Knotbomen, knoestige knapen: een praktische gids. Geert Van der Linden, Jos Schenk, Bert Geeraerts, Provincie Vlaams-Brabant, 2017.
Handleiding: Het beheer van historische dreven en wegbeplantingen. Thomas Van Driessche, Paul Van den Bremt and Koen Smets. Agentschap Onroerend Erfgoed, 2017.
Dirkmaat, Jaap. Nederland weer mooi: op weg naar een natuurlijk en idyllisch landschap. ANWB Media-Boeken & Gidsen, 2006.
For a good source in English, see: Müller, Georg. Europe's Field Boundaries: Hedged banks, hedgerows, field walls (stone walls, dry stone walls), dead brushwood hedges, bent hedges, woven hedges, wattle fences and traditional wooden fences. Neuer Kunstverlag, 2013.
If line plantings were mainly used for wood production, they were planted at some distance from each other, allowing more light and thus a higher wood production. If they were mainly used as plot boundaries, they were planted more closely together. This diminished the wood harvest but allowed for a thicker growth.
 In fact, coppice forests could also have a location-specific function: they could be placed around a city or settlement to form an impenetrable obstacle for attackers, either by foot or by horse. They could not easily be destroyed by shooting, in contrast to a wall. Source: 
 Lime trees were even used for fire prevention. They were planted right next to the baking house in order to stop the spread of sparks to wood piles, haystacks and thatched roofs. Source: 
 The fact that living hedges and trees are easier to move than dead wood fences and posts also has practical advantages. In Europe until the French era, there was no land register and boundaries were physically indicated in the landscape. The surveyor's work was sealed with the planting of a tree, which is much harder to move on the sly than a pole or a fence. Source: 
 And, if it could be brought in over water from longer distances, the wood had to be harvested within 15-30 km of the river or coast.
 Sieferle, Rolf Pieter. The Subterranean Forest: energy systems and the industrial revolution. White Horse Press, 2001.
 On different scales of wood production, see also:
Jalas, Mikko, and Jenny, Rinkinen. "Stacking wood and staying warm: time, temporality and housework around domestic heating systems", Journal of Consumer Culture 16.1 (2016): 43-60.
Rinkinen, Jenny. "Demanding energy in everyday life: insights from wood heating into theories of social practice." (2015).
 Vanbeveren, S.P.P., et al. "Operational short rotation woody crop plantations: manual or mechanised harvesting?" Biomass and Bioenergy 72 (2015): 8-18.
 However, chainsaws can have adverse effects on some tree species, such as reduced growth or greater ability to transfer disease.
 Multiple sources that refer to traditional forestry practices in Africa:
Leach, Gerald, and Robin Mearns. Beyond the woodfuel crisis: people, land and trees in Africa. Earthscan, 1988.
Leach, Melissa, and Robin Mearns. "The lie of the land: challenging received wisdom on the African environment." (1998)
Cline-Cole, Reginald A. "Political economy, fuelwood relations, and vegetation conservation: Kasar Kano, Northerm Nigeria, 1850-1915." Forest & Conservation History 38.2 (1994): 67-78.
 Multiple references:
Bailis, Rob, et al. "Getting the number right: revisiting woodfuel sustainability in the developing world." Environmental Research Letters 12.11 (2017): 115002
Masera, Omar R., et al. "Environmental burden of traditional bioenergy use." Annual Review of Environment and Resources 40 (2015): 121-150.
Study downgrades climate impact of wood burning, John Upton, Climate Central, 2015.
 Haustingsskog. [revidert] Rettleiar for restaurering og skjøtsel, Garnås, Ingvill; Hauge, Leif ; Svalheim, Ellen, NIBIO RAPPORT | VOL. 4 | NR. 150 | 2018.
Wood stoves can provide a household with thermal energy for cooking and for space and water heating. Wood stoves equipped with thermoelectric generators also produce electricity, which can be more sustainable, more reliable and less costly than power from solar panels.
Illustration: Diego Marmolejo.
If the 2,000 year old windmill is the predecessor of today’s wind turbines, the fireplace and the wood stove are the even older predecessors of today’s solar panels. Like solar panels, trees and other plants convert sunlight into a useful source of energy for humans. Throughout history, the burning of wood and other biomass provided households with thermal energy, which was used for cooking, heating, washing, and lighting.
Photosynthesis also underpinned all historical sources of mechanical power: it provided fuel for both human and animal power, as well as the building materials for water mills and windmills. Neither the old-fashioned windmill nor the old-fashioned wood stove produced electricity, but both can easily be adapted to do so. It suffices to connect an electric generator to a windmill, and to connect a thermoelectric generator to a wood stove.
Thermoelectric Generator
Thermoelectric generators (or "TEGs") are very similar to “photoelectric” generators – which we now call “photovoltaic” generators or solar PV cells. A photovoltaic generator converts light directly into electricity, and a thermoelectric generator converts heat directly into electricity. Light and heat are both part of the electromagnetic spectrum, and so the main difference between these devices is that they operate on different wavelengths.
A thermoelectric generator consists of a number of ingot-shaped semiconductor elements which are connected in series with metal strips and sandwiched between two electrically insulating but thermally conducting ceramic plates to form a very compact module.  They are commercially available from manufacturers such as Hi-Z, Tellurex, Thermalforce and Thermomanic.
A thermoelectric module. Image: Gerardtv (CC BY-SA 3.0)
Stick a thermoelectric module to the surface of a wood stove, and it will produce electricity whenever the stove is used for cooking, space heating, or water heating. In the experiments and prototypes that are described in more detail below, the power output per module varies between 3 and 19 watts.
As with solar panels, modules can be connected in parallel and in series to obtain any voltage and power output one needs – at least as long as there is stove surface left. As with solar panels, the electric current produced by the thermoelectric module(s) is regulated by a charge controller and stored in a battery, so that power is also available when the stove is not in use. A thermoelectric stove is usually combined with low voltage, direct current appliances, which avoids the conversion losses of an inverter.
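The wiring arithmetic is the same as for solar panels: voltage adds along a series string, current adds across parallel strings. The sketch below illustrates this with assumed round per-module figures (3 V, 1.5 A), not specifications of any product mentioned in this article.

```python
# Hypothetical per-module figures at the operating point: 3 V and 1.5 A.
def array_output(v_module, i_module, n_series, n_parallel):
    """Voltage adds along a series string; current adds across
    parallel strings. Returns (volts, amps, watts)."""
    voltage = v_module * n_series
    current = i_module * n_parallel
    return voltage, current, voltage * current

# Four modules per string, two strings in parallel:
v, i, p = array_output(3.0, 1.5, n_series=4, n_parallel=2)
print(v, i, p)  # 12.0 3.0 36.0
```

A 12 V array like this one could charge a common lead-acid battery directly through a charge controller, which is why low-voltage DC systems pair naturally with such module arrays.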
Thermoelectric stoves could be applied in many parts of the world. Most research is aimed at the global South, where close to 3,000 million people (40% of the global population) rely on burning biomass for cooking and domestic water heating. Some of these households also use the stove or fireplace for lighting (1,300 million people have no access to electricity) and for space heating during part of the year. However, there's also research aimed at households in industrial societies, where biomass stoves and burners have increased in popularity, especially outside of cities.
100% Efficient
Ever since the thermoelectric effect was first described by Thomas Seebeck in 1821, thermoelectric generators have been infamous for their low efficiency in converting heat into electricity. [1, 3-6] Today, the electrical efficiency of thermoelectric modules is only around 5-6%, roughly three times lower than that of the most commonly used solar PV panels. 
Illustration: Diego Marmolejo.
However, in combination with a stove, the electrical efficiency of a thermoelectric module doesn’t matter that much. If a module is only 5% efficient in converting heat into electricity, the other 95% comes out as heat again. If the stove is used for space heating, this heat cannot be considered an energy loss, because it still contributes to its original purpose. Total system efficiency (heat + electricity) is close to 100% – no energy is lost. With appropriate stove design, the heat from electricity conversion can also be re-used for cooking or domestic water heating.
More Reliable than Solar Panels
Thermoelectric modules share many of the benefits of solar panels: they are modular, they require little maintenance, they don’t have moving parts, they operate silently, and they have a long life expectancy.  However, thermoelectric modules also offer interesting advantages compared to solar PV panels, provided that there’s a regularly used (non-electric) heat source in the household.
Although thermoelectric modules are roughly three times less efficient than solar PV panels, thermoelectric stoves provide a more reliable electricity supply because their power production is less dependent on the weather, the seasons, and the time of the day. In jargon, thermoelectric stoves have a higher “net capacity factor” than solar PV panels.
Even if a stove is only used for cooking and hot water production, these daily household activities still guarantee a reliable power output, no matter the climate. Furthermore, the power production of a thermoelectric stove matches the power demand of householders very well: the times when the stove is used are commonly also the times when most electricity is used. Solar panels, on the other hand, produce little or no electricity when household demand peaks.
A Soviet thermo-electric generator based on a kerosene lamp, powering a radio, 1959. Image: The Museum of Retrotechnology.
Note that these advantages disappear when thermoelectric generators are powered by direct solar energy. Solar thermoelectric generators (or “STEGs”), in which thermoelectric modules are heated by concentrated sunlight, cannot compensate for the low efficiency of their modules with higher reliability, because they are just as dependent on the weather as solar PV panels are. [8-10]
Less Energy Storage
Because of its higher reliability, there’s no need to oversize the power generation and storage capacity of a thermoelectric system to compensate for nights, dark seasons or bad weather days, as is the case with a solar PV installation. Battery capacity only needs to be large enough to store electricity for use in between two firings of the stove, and there’s no need to add extra modules to compensate for periods of low power production.
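The difference in required battery capacity can be sketched with a simple sizing rule. The load, the gap between two firings, the days of solar autonomy and the usable depth of discharge below are all assumed rule-of-thumb values, used only to show the order of magnitude.

```python
def battery_wh(load_w, bridging_hours, usable_fraction=0.5):
    """Battery capacity needed to carry a continuous load across a gap
    in production, assuming only half the capacity is usable to
    protect battery life."""
    return load_w * bridging_hours / usable_fraction

# Thermoelectric stove: bridge the longest gap between firings (~16 h).
teg_battery = battery_wh(5, 16)        # 160.0 Wh
# Off-grid solar: a common rule of thumb is ~3 days of autonomy.
solar_battery = battery_wh(5, 24 * 3)  # 720.0 Wh
print(teg_battery, solar_battery)
```

Under these assumptions, the thermoelectric system needs less than a quarter of the storage capacity for the same 5-watt load.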
Solar panels and thermoelectric stoves can also be combined, resulting in a reliable off-grid system with little need for energy storage. Such a hybrid system combines well with a stove that is only used for space heating. The thermoelectric modules produce most of the power in winter, while the solar panels take over in summer.
Cheaper to Install, Easier to Recycle
A second advantage is that thermoelectric modules are easier to install than solar panels. There’s no need to build a structure on the roof and an electric link to the outside world, because the whole power plant is indoors. This also prevents theft of the power source, a significant problem with solar panels in some regions.
All these factors mean that power from a thermoelectric stove can be cheaper and more sustainable than power from solar PV panels. Less energy, material and money are needed to manufacture batteries, modules, and support structures.
Illustration: Diego Marmolejo.
In terms of sustainability, there’s another advantage: unlike solar PV panels, thermoelectric modules are relatively easy to recycle. Although silicon solar cells themselves are perfectly recyclable, they are encapsulated in a plastic layer (usually “EVA” or ethylene/vinyl acetate polymer), which is critical to the long-term performance of the modules. Removing this layer without destroying the silicon cells is technically possible, but so complex that it makes recycling unattractive from both a financial and energetic viewpoint. [12-13] On the other hand, thermoelectric modules do not contain any plastic at all. [14-15]
Cooling the Modules
The electrical efficiency of a thermoelectric generator doesn't only depend on the module itself. It’s also, in large part, determined by the temperature difference between the cold and the hot side of the module. A thermoelectric module operating at half the temperature difference will only generate one quarter of the power. Consequently, improving the thermal management of a thermoelectric generator is a major focus in the design of thermoelectric stoves, as it makes it possible to produce more power with fewer modules.
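The quadratic rule of thumb behind that claim can be written out. This is a simplification of real module behaviour, used here only to show the scaling:

```python
def relative_power(delta_t, delta_t_ref):
    """Rough rule of thumb: TEG output scales with the square of the
    temperature difference across the module."""
    return (delta_t / delta_t_ref) ** 2

# Half the temperature difference -> a quarter of the power:
print(relative_power(100, 200))  # 0.25
```

The same rule explains why both finding the hottest spot on the stove and cooling the cold side aggressively pay off so strongly.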
On the one hand, this involves locating the hottest spot(s) on a stove and fixing the modules there – provided that they can take the heat. Most stoves have surface temperatures from 100 to 300 degrees Celsius, while the hot side of bismuth telluride modules (the most affordable and efficient ones) withstands continuous temperatures of 150 to 350 degrees, depending on the model.
On the other hand, thermal management comes down to lowering the temperature of the cold side as much as possible, which can be done in four ways: forced convection with air or water, which involves electric fans or pumps, and natural convection with air or water, which involves passive heat sinks that place no parasitic load on the system.
Active cooling usually has higher efficiency, even when the extra use of a fan or a pump is taken into account. However, passive systems are cheaper, operate silently, and are more reliable than active systems. In particular, the breakdown of a fan can be problematic, as it can lead to module failure due to overheating.
Thermoelectric Stoves with Heat Sinks
The first thermoelectric biomass stoves were built in the early 2000s, although the Soviets had pioneered a similar concept in the 1950s, mostly in the form of radios powered by thermoelectric generators mounted on kerosene lamps. In 2004, a team of Lebanese researchers retrofitted a typical cast-iron wood stove from local rural areas with a single 56 x 56 mm thermoelectric module they had made themselves. The stove, which is used for cooking and baking as well as for space and water heating, is rather small (52 x 44 x 29 cm) and weighs 40 kg.
Image: The cast-iron stove used in the experiments. 
The researchers screwed a 1 cm thick smooth aluminium plate to the hottest spot of the stove surface, fixed the module there, and attached a very large (180 x 136 x 125 mm) aluminium finned heat sink to its cold side. At a burning rate of 2.5 kg soft pine wood per hour, their experiments showed an average power output of 4.2 watts. Operating the wood stove for 10 hours per day (excluding the warm-up phase) thus supplies a rural Lebanese household with 42 watt-hours of electricity, enough to cover basic needs.
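The daily figure follows directly from average power and hours of use, ignoring charge controller and battery losses:

```python
def daily_energy_wh(avg_power_w, hours_per_day):
    """Electricity harvested per day, ignoring charge controller
    and battery losses."""
    return avg_power_w * hours_per_day

# 4.2 W average over 10 hours of stove use:
print(daily_energy_wh(4.2, 10))  # 42.0 Wh per day
```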
Image: TEG installation details and location on stove. 
More modules and heat sinks can be added to increase power output, but of course the stove surface is limited, and as more modules are added they will be located in areas with a lower surface temperature, decreasing their efficiency. Another way to increase power production is to use an even larger heat sink, and/or a more expensive heat sink made from materials with higher thermal conductivity.
Thermoelectric Stoves with Fans
Most thermoelectric stoves that have been built to date use electric fans to cool the module, in combination with a much smaller heat sink. Although the fan can break and is a parasitic load on the system, it can simultaneously increase the efficiency of the stove by blowing hot air into the combustion chamber – slashing firewood consumption and air pollution roughly by half. Furthermore, fan-powered stoves avoid the need to build a chimney and can rely on a horizontal exhaust pipe instead. Consequently, self-powered, fan-cooled stoves make it possible to reduce firewood consumption and indoor air pollution in rural regions of the global South where people have neither access to electricity nor the means to make a chimney through the roof.
A study of a forced-draft thermoelectric cookstove with one module showed a 4.5 watt power output, of which 1 watt is required to operate the fan.  The net power production (3.5 watts) is lower compared to that of the stove with only a heat sink (4.2 watts), but the fan-cooled stove uses only half as much firewood: it generates 3.5 watts net electricity at a burning rate of 1 kg of firewood per hour, while the passively cooled stove requires 2.5 kg of firewood to produce 4.2 watts.
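Using the figures from these two studies, the electricity harvested per kilogram of firewood works out as follows:

```python
def wh_per_kg_wood(net_power_w, burn_rate_kg_per_hour):
    """Net electricity harvested per kg of firewood burned.
    (Watts sustained for one hour / kg burned in that hour.)"""
    return net_power_w / burn_rate_kg_per_hour

passive = wh_per_kg_wood(4.2, 2.5)  # heat-sink stove: ~1.7 Wh/kg
fan = wh_per_kg_wood(3.5, 1.0)      # fan-cooled stove: 3.5 Wh/kg
print(round(passive, 2), fan)
```

Per kilogram of wood, the fan-cooled stove thus yields roughly twice the electricity, on top of halving fuel use and air pollution.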
Image: TEG-powered forced draft cooking stove. 
An 80-day field test of a similar portable thermoelectric cookstove design in Malawi showed that the technology was highly valued by the users, with the stoves producing more electricity than was needed. Over the entire period, power production amounted to between 250 and 700 watt-hours of electricity, while electricity use was between 100 and 250 watt-hours. 
Some fan-cooled thermoelectric cooking stoves are commercially available, often designed with backpackers in mind. Examples are the stoves from BioLite, Termomanic and Termefor, which advertise power outputs between 3 and 10 watts, depending on the design and the number of modules.
Thermoelectric Stoves with Water Tanks
The most efficient thermoelectric stoves are those in which the cold side of the module(s) is cooled by direct contact with a water reservoir. Water has lower thermal resistance than air, and thus cools more effectively. Furthermore, its temperature cannot surpass 100 degrees Celsius, which makes module failure due to overheating less likely.
When thermoelectric modules are water-cooled, the waste heat from their electricity conversion does not contribute to space heating, but to domestic water heating. Water-cooled thermoelectric stoves can be active (using a pump) or passive (no moving parts). 
Most thermoelectric stoves with passive water cooling are small and only used for heating relatively small amounts of water. In fact, rather than the stove, it is most often a cooking pot that is equipped with thermoelectric modules. For example, the PowerPot is a commercially available backpacking type cooking pot with a thermoelectric module attached to the base, which can be directly placed on the top of a stove and advertises a power generation of 5-10 watts.
Image: multifunctional wood stove with passive water cooling. [22-25]
A much larger and more versatile thermoelectric stove with passive water cooling was designed by French researchers, based on a large, multifunctional mud wood stove design from Morocco. [22-25] They installed eight thermoelectric modules at the bottom of a built-in 30L water storage tank, which not only serves as the heat sink for the cold side of the generator, but also as the domestic hot water supply for the household. Furthermore, the stove is equipped with a self-powered electric fan and has a double combustion chamber to increase combustion efficiency.
Tests of a prototype generated 28 watts of power using two modules, while burning 1.5 kg of wood for cooking and/or heating. The fan used 15W, leaving 13W of power for other uses. The stove also provided 60 litres of hot water per hour. Depending on the duration of two cooking sessions, between 35 and 55 watt-hours of electricity were stored in a battery per day. Note that here the researchers take into account the losses of the charge controller, the 6V battery, and the fan.
Thermoelectric Stoves with Pumps
Passive water cooling has a downside. As the temperature of the water in the tank increases, the difference between the cold and the hot side of the module will decrease, and so will the electrical efficiency. There either needs to be sufficient time between two firings of a stove to let the water cool down again, or the warm water should regularly be used and replaced by cold water. A pump makes this task more convenient.
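Assuming the rough rule that output scales with the square of the temperature difference across the module, the effect of a warming tank can be sketched. The 250 °C hot-side temperature below is an assumed example, not a figure from the studies cited here:

```python
def relative_power(t_hot_c, t_cold_c, t_cold_ref_c=20):
    """Output relative to a module whose cold side stays at the
    reference temperature, using the square-of-delta-T rule of thumb."""
    return ((t_hot_c - t_cold_c) / (t_hot_c - t_cold_ref_c)) ** 2

# Hot side at 250 degrees C; the tank warms from 20 to 90 degrees C:
for t_cold in (20, 50, 90):
    print(t_cold, round(relative_power(250, t_cold), 2))
# Output falls to roughly half before the water even boils.
```

This is why the warm water has to be drawn off and replaced with cold water to keep power production up, a task a pump automates.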
A 2015 prototype, in which a wood stove used for cooking and space and water heating was equipped with 21 thermoelectric modules cooled by a pumped water system, showed a power production ranging from 25W (burning 1 kg of pine wood per hour) through 70W (4 kg of wood per hour) to 166W (9 kg of wood per hour). The power output per module is as high as 7.9 watts, almost double that of the stove with natural air cooling. The pump uses 5W, and the stove also has a fan to increase combustion efficiency, which consumes 1W.
Thermoelectric Gas Boilers?
Thermoelectric generators with forced water cooling better fit the energy infrastructure in industrial societies, especially in households with central heating systems. More modules could be added, resulting in a power production that matches a relatively high energy lifestyle. However, there are some caveats. First, central heating systems are only used for space and water heating, not for cooking, which makes their power production less reliable throughout the year. Second, only some central heating systems operate on biomass or wood pellet burners, while many more run on gas, oil or electricity.
Prototype of a thermoelectric wood-pellet burner. 
Obviously, when the heat source is electric, it makes no sense to stick a thermoelectric module to it. A thermoelectric system is incompatible with the vision of a high-tech sustainable building where heating is done with an electric heat pump, cooking happens on an electric cooking stove, and hot water is produced by an electric boiler.
However, when the energy source is gas or oil, a thermoelectric boiler is as much of a low carbon solution as a grid-connected solar PV system on the roof. A thermoelectric heating system doesn’t make a household independent of fossil fuels, but neither does a grid-connected solar PV installation, which relies on the (largely fossil fuel powered) power grid to absorb its shortages and excesses, and usually counts on a fossil fuel powered central heating system for space and water heating.
A 1 kW thermoelectric generator with forced-water cooling for low temperature geothermal resources. 
A thermoelectric heating system that runs on fossil fuels also compares favourably to a large cogeneration power plant, which captures the waste heat of its electricity production and distributes it to individual households for space and water heating. In a thermoelectric heating system, heat and power are produced and consumed on-site. Unlike a central cogeneration power plant, there's no need for an infrastructure to distribute heat and electricity. This saves resources and avoids energy losses during transportation, which amount to between 10 and 20% for heat distribution and between 3 and 10% (or much more in some regions) for power distribution.
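The difference can be put in numbers; the loss fractions below are mid-range values from the percentages cited above.

```python
def delivered(generated, loss_fraction):
    """Energy that reaches the household after distribution losses."""
    return generated * (1 - loss_fraction)

# District heating: ~15% lost in transport; the power grid: ~6% lost.
heat = delivered(1000, 0.15)    # roughly 850 of 1000 units arrive
power = delivered(100, 0.06)    # roughly 94 of 100 units arrive
# On-site thermoelectric heat and power: no distribution, no losses.
onsite = delivered(1000, 0.0)
print(heat, power, onsite)
```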
A cogeneration power plant is more energy efficient (25-40%) in turning heat into electricity, meaning that in comparison a thermoelectric heating system supplies a larger share of heat and a smaller share of electricity. This is far from problematic, though, because even in Europe 80% of average household energy use goes to space and water heating.
Kris De Decker
 In both cases, the workings can be reversed. If one runs an electric current through a thermoelectric module, it can act as a heater or a cooler. If one runs an electric current through a photovoltaic device, it will produce light – that’s the principle of an LED.
 Rowe, David Michael, ed. CRC handbook of thermoelectrics. CRC press, 2018.
 Thermoelectric generators, The Museum of Retrotechnology, accessed May 2020. http://www.douglas-self.com/MUSEUM/POWER/thermoelectric/thermoelectric.htm
 Polozine, Alexandre, Susanna Sirotinskaya, and Lírio Schaeffer. "History of development of thermoelectric materials for electric power generation and criteria of their quality." Materials Research 17.5 (2014): 1260-1267.
 Goupil, Christophe, ed. Continuum theory and modeling of thermoelectric elements. John Wiley & Sons, 2015.
 Joffe, Abram F. "The revival of thermoelectricity." Scientific American 199.5 (1958): 31-37.
 The Stirling engine, another predecessor of the solar PV panel that converts heat into electricity, lacks many of these advantages.
 Kraemer, Daniel, et al. "Concentrating solar thermoelectric generators with a peak efficiency of 7.4%." Nature Energy 1.11 (2016): 1-8.
 Amatya, R., and R. J. Ram. "Solar thermoelectric generator for micropower applications." Journal of electronic materials 39.9 (2010): 1735-1740.
 Gayathri, Ms D. Binu Ms R., Mr Vijay Anand Ms R. Lavanya, and Ms R. Kanmani. "Thermoelectric Power Generation Using Solar Energy." International Journal for Scientific Research & Development, Vol. 5, Issue 03, 2017.
 Jiang, Shan, et al. "Encapsulation of PV modules using ethylene vinyl acetate copolymer as the encapsulant." Macromolecular Reaction Engineering 9.5 (2015): 522-529.
 Xu, Yan, et al. "Global status of recycling waste solar panels: A review." Waste Management 75 (2018): 450-458.
 Sica, Daniela, et al. "Management of end-of-life photovoltaic panels as a step towards a circular economy." Renewable and Sustainable Energy Reviews 82 (2018): 2934-2945.
 Bahrami, Amin, Gabi Schierning, and Kornelius Nielsch. "Waste Recycling in Thermoelectric Materials." Advanced Energy Materials (2020).
 Balva, Maxime, et al. "Dismantling and chemical characterization of spent Peltier thermoelectric devices for antimony, bismuth and tellurium recovery." Environmental technology 38.7 (2017): 791-797.
 In terms of weight, a thermoelectric module of 5 grams consists of alumina for the ceramic plates (44%); copper for the electric contacts (28%); tellurium (10%), bismuth (6%) and antimony (2%) for the thermoelectric legs; and small amounts of tin (for soldering), selenium (for “doping” the bismuth telluride) and silicone paste (the only polymer in the module, used for gluing everything together). In thermoelectric modules, the concentration of the scarce elements antimony, tellurium and bismuth is much higher compared to their traditional resources, which makes recycling attractive. 
 Gao, H. B., et al. "Development of stove-powered thermoelectric generators: A review." Applied Thermal Engineering 96 (2016): 297-310.
 Nuwayhid, Rida Y., Alan Shihadeh, and Nesreen Ghaddar. "Development and testing of a domestic woodstove thermoelectric generator with natural convection cooling." Energy conversion and management 46.9-10 (2005): 1631-1643.
 Champier, Daniel, et al. "Study of a TE (thermoelectric) generator incorporated in a multifunction wood stove." Energy 36.3 (2011): 1518-1526.
 Raman, Perumal, Narasimhan K. Ram, and Ruchi Gupta. "Development, design and performance analysis of a forced draft clean combustion cookstove powered by a thermo electric generator with multi-utility options." Energy 69 (2014): 813-825.
 O'Shaughnessy, S. M., et al. "Field trial testing of an electricity-producing portable biomass cooking stove in rural Malawi." Energy for Sustainable development 20 (2014): 1-10.
 Champier, Daniel, et al. "Thermoelectric power generation from biomass cook stoves." Energy 35.2 (2010): 935-942.
 Champier, Daniel, et al. "Prototype combined heater/thermoelectric power generator for remote applications." Journal of electronic materials 42.7 (2013): 1888-1899. https://hal.archives-ouvertes.fr/hal-02014177/document
 Champier, Daniel. "Thermoelectric generators: A review of applications." Energy Conversion and Management 140 (2017): 167-181. http://www.soliftec.com/ThermGen20170.pdf
 Favarel, Camille, et al. "Thermoelectricity-A Promising Complementarity with Efficient Stoves in Off-grid-areas." Journal of Sustainable Development of Energy, Water and Environment Systems 3.3 (2015): 256-268.
 Goudarzi, A. M., et al. "Integration of thermoelectric generators and wood stove to produce heat, hot water, and electrical power." Journal of electronic materials 42.7 (2013): 2127-2133.
 The researchers also suggest a way to eliminate the pump: a water tank can be placed at a height of 1 m to provide the water, gravity will work as a pump to flow water into the cooling system, and the hot water produced by the cooling system can be stored in an insulated tank.
 Another prototype generated an average output of 27W with just two modules, more than enough to power the pump (8W). Net power production is 9.5 watts per module. Montecucco, Andrea, Jonathan Siviter, and Andrew R. Knox. "A combined heat and power system for solid-fuel stoves using thermoelectric generators." Energy Procedia 75 (2015): 597-602.
 In fact, the earliest experiments with thermoelectric heating systems date from the late 1990s and were aimed at the development of self-powered gas boilers. Central heating systems typically consume 250-400W of power for operating their electrical components: fans, blowers, pumps and control panels. By adding thermoelectric modules, the system maintains its ability to heat a home in the event of a prolonged electric outage. In combination with grid-connected solar PV panels, this only works while the sun shines. Allen, D. T., and W. Ch. Mallon. "Further development of 'self-powered boilers'." Eighteenth International Conference on Thermoelectrics. Proceedings, ICT'99 (Cat. No. 99TH8407). IEEE, 1999. Allen, Daniel T., and Jerzy Wonsowski. "Thermoelectric self-powered hydronic heating demonstration." XVI ICT'97. Proceedings ICT'97. 16th International Conference on Thermoelectrics (Cat. No. 97TH8291). IEEE, 1997.
 Moser, Wilhelm, et al. "A biomass-fuel based micro-scale CHP system with thermoelectric generators." Proceedings of the Central European Biomass Conference 2008. 2008.
 Liu, Changwei, Pingyun Chen, and Kewen Li. "A 1 KW thermoelectric generator for low-temperature geothermal resources." Thirty-ninth workshop on geothermal reservoir engineering, Stanford University, Stanford, California. 2014.
Citrus fruits (oranges, lemons, mandarins, tangerines, grapefruits, limes, pomeloes) are the highest-value fruit crop in terms of international trade. Citrus plants are not frost-hardy and can only be grown in tropical and subtropical climates – unless they are cultivated in fossil fuel heated glasshouses.
However, during the first half of the twentieth century, citrus fruits came to be grown a good distance from the (sub)tropical regions they usually thrive in. The Russians managed to grow citrus outdoors, where temperatures drop as low as minus 30 degrees Celsius, and without the use of glass or fossil fuels.
By 1950, the Soviet Union boasted 30,000 hectares of citrus plantations, producing 200,000 tonnes of fruits per year.
The Expansion of Citrus Production in the Soviet Union
Before the First World War, the total area occupied by citrus plantations in the Russian Empire was estimated at a mere 160 hectares, located almost entirely in the coastal area of Western Georgia. This region enjoys a relatively mild winter climate because of its proximity to the Black Sea and the Caucasus Mountain range – which protects it against cold winter winds coming from the Russian plains and Western Siberia.
Nevertheless, such a climate is far from ideal for citrus production: although the average winter temperature is above zero, thermal minima may drop to between -8 and -12 degrees Celsius. Frost is deadly to citrus plants, even a short spell of it. For example, at the end of the 19th century, the extensive citrus industry in Florida (US) was almost completely destroyed when temperatures dropped briefly to between -3 and -8 degrees Celsius.
From the 1920s onwards, the Russians extended the area of citrus cultivation to regions considered even less suited. Initially, citrus production extended westward along the Black Sea coast, a region unprotected by mountains, and where temperatures can drop to -15 degrees Celsius. This includes Sochi – which hosted the 2014 Winter Olympics – and the southern coast of Crimea. At the same time, citrus cultivation extended eastward to the west coast of the Caspian Sea, in Azerbaijan.
Citrus production was then spread to regions where winter temperatures can drop to -20 degrees Celsius, and where the ground can freeze to a depth of 20-30 cm: larger parts of the zones mentioned earlier, as well as Dagestan, Turkmenistan, Tajikistan, Uzbekistan, and the southern districts of Ukraine and Moldova. Finally, citrus cultivation was pushed further north in these regions, where winter temperatures can plummet to -30 degrees Celsius, and where the ground can freeze to a depth of 50 cm.
Frost was not the only obstacle to citrus cultivation in these parts of the world. The region is also characterised by excessive summer heat and strong, dry winds.
From Import Dependency to Self-Sufficiency
Before the First World War, almost all citrus fruits in Russia came from abroad. The main suppliers were Sicily (for lemons) and Palestine (for oranges). Some 20,000 to 30,000 tonnes of citrus fruits were imported annually. Because lemon was consumed with tea, the national drink in Russia, it made up almost three-quarters of these imports.
In 1925, following the Russian Revolution and the civil war, citrus growing became the subject of planned development. The Communist Party was determined to become self-sufficient in citrus production, and no efforts were spared. They set up several research establishments and nurseries, as well as test fields in more than 50 locations.
By 1940, the acreage had grown to 17,000 hectares and production reached 40,000 tonnes, double the annual imports under the old regime. By 1950, the area planted with citrus fruits reached 30,000 hectares (56% mandarin trees, 28% lemon trees, 16% orange trees), and production grew to 200,000 tonnes of fruits per year.
The large share of mandarin trees can be explained by the fact that they are the most cold-resistant of all citrus fruits, tolerating frosts to about -2 degrees Celsius. Lemon trees, on the other hand, are the least cold-resistant citrus variety.
There are three reasons why the Russians managed to grow citrus fruits in regions that were (and are) considered totally unfit for it. First, they bred citrus varieties that were more resistant to cold. Second, they pruned citrus plants in radical ways that made them more resistant to cold, heat and wind. This eventually resulted in creeping citrus plants, which were only 25 cm tall. Third, they planted citrus plants in unlikely locations, most notably in trenches of up to two metres deep.
“Progressive cold-hardening”
Imported citrus varieties only survived in a few isolated points along the Black Sea coast, which enjoyed a particularly favourable microclimate. To better prepare citrus fruits for cold, Soviet citrologists followed a method called “progressive cold-hardening”. It allowed them to create new varieties which were adapted to local ecological conditions, a cultivation strategy which had originally been developed for apricot trees and grapes.
The method consists of planting a seed of a highly valued tree a bit further north of its original location, and then waiting for it to give seeds. Those seeds are then planted a bit further north, and the process is repeated, slowly but steadily pushing the citrus variety towards less hospitable climates. Using this method, apricot trees from Rostov could eventually be grown in Mitchurinsk, 650 km further north, where they developed apricot seeds that were adapted to the local climate. On the other hand, directly planting the seed of the Rostov apricot tree in Mitchurinsk proved unsuccessful.
The method, developed following the observation that young plants started from seed adapt to the conditions of the new environment, also proved successful for citrus fruits – which retained high yields and high quality fruits. As well as “progressive cold-hardening”, from 1929 onwards the Russian citrologists performed a methodological selection of cold-resistant varieties, which were hybridised with the best local varieties. This was facilitated by an extensive collection of citrus fruits, which included almost all representatives of the genus Citrus.
Dwarf and Semi-Dwarf Citrus Trees
In the main citrus growing centres worldwide, pruning citrus plants was very rare. Harold Hume, a renowned Canadian-American botanist, even advised to “keep pruning shears as far as possible from the citrus plantation”.
However, pruning was crucial to the cultivation of citrus plants in Russia. First and foremost, pruning reduced the height of the citrus plants. Conventional lemon trees grow up to 5 metres tall, while orange trees can be 12 metres tall. On the other hand, even prior to the 1920s, Russians worked with dwarf and semi-dwarf citrus trees, which were only 1 to 2 metres tall. These trees were further pruned to have compact crowns.
More compact trees have two advantages. First, closer to the ground, temperature variations are smaller and wind speed is lower. Second, smaller trees are easier to protect against the elements. In the region with the mildest climate, where an initial 160 ha of citrus fruits were grown, plantations were often located on terraces or on steep slopes, taking up the smallest piece of land with a favourable microclimate.
"Collecting tangerines at the Chakva state farm", a painting by Mikhail Beringov, 1930s.
During winter, individual citrus plants in these plantations were protected by a shelter made of cheesecloth or straw mats, supported by a light frame of poles. Plantations were also surrounded by windbreak curtains, arranged in such a way that they mitigated both the cold winter winds and hot, dry summer winds. These curtains also channeled the cold air masses descending from the tops of the hills outside of the plantations.
Further protection against cold and wind was generated by planting trees very close together – up to 3,000 plants per hectare. Excessive summer heat was counteracted by spraying whitewash on the upper part of the leaves, which lowered their temperature by about 4 degrees Celsius. All these methods work for large-stemmed citrus trees, but of course they are much cheaper and easier to perform on trees with a height of only 1 to 2 metres.
Creeping Citrus Trees
Training small citrus plants was key to extending their cultivation across all regions of the Black Sea coast, where until then it had been impossible. This was achieved by pruning and guiding citrus plants into a creeping form, which reduced their height to a mere 25 cm.
The crown of creeping citrus plants was formed in two ways. In the first method, the trunk of the tree took an inclined position as soon as it left the soil. The main branches of the crown, formed in a unilateral fan, touched the ground, and so did the fruits. In the second method, the 10-15 cm tall stem was kept straight while the main branches developed radially at an angle of 90 degrees to the trunk, thus (seen from above) forming a spider-like crown. In this case, the branches and the fruits did not touch the ground, and this proved to be the most successful method.
Creeping culture, here applied to an apple tree. Source.
Creeping citrus trees offered even better protection against cold and wind compared to dwarf and semi-dwarf trees, because the creeping crown created a microclimate that softened both the summer maxima and the winter minima. During a 10-year long test, it was found that in winter, the air layer at the level of the creeping crown was on average 2.5 to 3 degrees Celsius warmer than the air layer at 2 metres above the ground. In summer, during hot weather, the difference in temperature could exceed 20 degrees Celsius.
Wind protection was just as effective. The wind speed at 2 metres above the ground reached an average of 10.4 metres per second, while it was only 1.8 metres per second at the level of creeping lemon trees. This limited dehydration of the crown so that less water was needed.
Logically, the very small size of creeping plants made it even easier and cheaper to protect them against the elements. Moreover, as a protection strategy it proved to be more effective: during the winter of 1942-43, when temperatures along the Black Sea coast went down to -15 degrees, creeping lemon trees protected by a double layer of cheesecloth and by windbreaks did not suffer in any way, while similarly protected taller-stemmed lemon trees froze to the roots.
Perhaps surprisingly, creeping citrus plants had higher yields than semi-dwarf citrus plants. The fruits ripened earlier, and the plants produced more fruit, especially during the first years.
Cultivating Citrus Trees in Trenches
None of the above mentioned cultivation methods were sufficient to grow citrus fruits in regions where the ground froze and where winter temperatures dropped below -15 degrees. Here, citrus plants were cultivated in trenches. Obviously, growing citrus fruits in trenches was only practical with dwarf and – most often – creeping plants. In this method, soil heat protects citrus fruits from frost.
The depth of the trenches varied from 0.8 to 2 metres depending on the winter temperature, the depth to which the ground froze, and the water table. The trees could be planted in single or double rows. Trenches were generally trapezoidal in section to improve light conditions. They were roughly 2.5 metres wide at the bottom and 3 metres wide at the top for single rows of plants, and 3.5 metres wide at the bottom and 4 metres wide at the top for double rows of plants.
If necessary, the walls were coated with clay or reinforced with brick or shell rock. Inside the trench, the plants were positioned 1.5 metres apart from each other, and when two rows were planted, each plant in the first row was located between two plants in the second row. The length of the trenches depended on the nature of the terrain, but did not surpass 50 metres.
Trenches were located on level ground or light slopes, oriented from east to west for optimal sunlight during the winter months. They were spaced 3-5 metres apart when they contained a single row of plants, and 4-6 metres apart when they had a double row of plants. Trenches could be connected to each other, which made it more convenient to care for the plants.
The space in between the trenches allowed for the placing of shade screens or planting of natural shading plants. These increased the humidity in the trenches and protected the citrus plants from overheating in summer.
Covering the Trenches
During the summer, plants received the same care as those planted in the ground under “ordinary” conditions. When winter came, the trenches were covered with 2 cm thick wooden boards and single or double straw mats, depending on the climate. This kept the soil heat in the trench, while keeping precipitation out. If a layer of snow covered the boards, it was left in place for extra insulation. The boards were sloped at an angle of 30-35 degrees. When in winter the temperature rose above zero degrees Celsius, the cover was raised on the south side or completely removed during the day.
This method cannot be applied to just any plant. Citrus plants tolerate very low light levels for 3-4 months per year, provided that the temperature of the air in contact with the crown is maintained between 1 and 4 degrees Celsius. At this temperature, the metabolism of the plants weakens, which improves their resistance to cold.
Glass was only used sparingly. Wooden boards gave much better frost protection, were much cheaper, and could be made from local materials. The plants needed some stray light, so that up to a quarter of the trench shelter area was made of glass frames, which were covered with straw mats as well as a top cover of earth and clay. Only a few openings here and there provided light and ventilation.
The cultivation of creeping citrus plants in trenches, although labour-intensive, was a simple method that did not require large investment, and provided high yields (80 to 200 fruits per stem per year) as well as high quality tropical fruits. All types of citrus fruits were grown in trenches.
Other Cultivation Methods
Apart from trenches, Soviet citrologists used other types of shelters to grow citrus plants – all of which were more effective with smaller sized trees (usually dwarf cultures). Some included (usually sparse) use of fossil fuels.
A first example is the cultivation of citrus fruits with annual transplantation. Citrus plants spent the summer outdoors, but as winter approached, they were dug up with the clod of earth surrounding their roots, and transported to wintering sheds, where they were crammed together for as long as it was freezing outside. In spring, they were moved back to their original location. Where the winters were relatively mild, these winter sheds were light wooden buildings that were generally unheated. In colder regions, they were made of masonry, half buried into the ground, and fitted with heating devices.
Citrus plants were also grown in unheated glasshouses. These “limonaria”, located on the Black Sea coast, were semi-circular glasshouses, built around particularly well-exposed hills with terraces. The trees were grown as espaliers – a method reminiscent of the fruit walls in northern European countries, which facilitated cultivation of peaches and other Mediterranean fruits at high latitudes.
Heated glasshouses, which used electric heating and artificially controlled carbon dioxide and humidity throughout the year, were only used in industrial centers located beyond the Arctic Circle. Finally, citrus fruits were grown throughout the Soviet Union in pots or boxes in apartments, schools, public buildings, and even in the glass halls of factories and workshops – making use of the waste heat from space heating or industrial processes (steam or hot water).
Few of these methods would have been profitable under a free trade regime. Considerable research investment went into kickstarting domestic citrus production. Although most methods did not require fossil fuels and were possible using cheap and locally available materials, they were very labour intensive. Domestic citrus production was only possible because it was sheltered – not only from frost, but from foreign competition too.
Kris De Decker.
Edited by Alice Essam. Thanks to Alexandrine Maes.
- Les Agrumes en U.R.S.S., Boris Tkatchenko, in Fruits, vol.6, nr.3, pp.89-98, 1951. http://www.fruitiers-rares.info/articles21a26/article24-agrumes-en-URSS-1-Citrus.html & http://www.fruitiers-rares.info/articles51a56/article53-agrumes-en-URSS-2-Citrus.html
- М. А. КАПЦИНЕЛЬ, ВЫРАЩИВАНИЕ ЦИТРУСОВЫХ КУЛЬТУР В РОСТОВСКОЙ ОБЛАСТИ РОСТОВСКОЕ КНИЖНОЕ ИЗДАТЕЛЬСТВО Ростов-на-Дону —1953. ("Growing citrus cultures in the Rostov region", M.A. Kaptsinel). http://homecitrus.ru/books.html
- Мандарин – туапсинский господин?, СВЕТЛАНА СВЕТЛОВА, 16 ДЕКАБРЯ 2018 https://tuapsevesti.ru/archives/40995
- Fruit walls: Urban Farming in the 1600s
- Reinventing the Greenhouse
- A "Dacha" for Everyone? Community Gardens and Food Security in Russia
Our self-hosted, solar-powered, off-grid website has been running for 15 months now. In this article, we present its energy and uptime data, and calculate the embodied energy of our configuration. Based on these results, we consider the optimal balance between sustainability and server uptime, and outline possible improvements.
Illustration: Diego Marmolejo.
Introduction
In September 2018, Low-tech Magazine launched a new website that aimed to radically reduce the energy use and carbon emissions associated with accessing its content. Internet energy use is growing quickly on account of both increasing bit rates (online content gets “heavier”) and increased time spent online (especially since the arrival of mobile computing and wireless internet).
The solar powered website bucks against these trends. To drop energy use far below that of the average website, we opted for a back-to-basics web design, using a static website instead of a database driven content management system. To reduce the energy use associated with the production of the solar panel and the battery, we chose a minimal set-up and accepted that the website goes off-line when the weather is bad.
We have been monitoring the solar powered server for 15 months now, and we have collected data on uptime, energy use, power use, system efficiency, and visitor traffic. We also calculated how much energy was required to make the solar panel, the battery, the charge controller and the server.
Uptime, Electricity Use & System Efficiency
The solar powered website goes off-line when the weather is bad – but how often does that happen? For a period of about one year (351 days, from 12 December 2018 to 28 November 2019), we achieved an uptime of 95.26%. This means that we were off-line due to bad weather for 399 hours.
If we ignore the last two months, our uptime was 98.2%, with a downtime of only 152 hours. Uptime plummeted to 80% during the last two months, when a software upgrade increased the energy use of the server. This knocked the website off-line for at least a few hours every night.
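The downtime in hours follows directly from the measured uptime percentage; a minimal sketch of the arithmetic:

```python
# Convert the measured uptime percentage back into hours of downtime.
days_measured = 351      # 12 December 2018 to 28 November 2019
uptime = 0.9526          # 95.26% uptime over that period

hours_offline = days_measured * 24 * (1 - uptime)  # roughly 399 hours
```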
One kilowatt-hour of solar generated electricity can serve almost 50,000 unique visitors
Let’s have a look at the electricity used by our web server (the “operational” energy use). We have measurements from the server and from the solar charge controller. Comparing both values reveals the inefficiencies in the system. Over a period of roughly one year (from 3 December 2018 to 24 November 2019), the electricity use of our server was 9.53 kilowatt-hours (kWh).
We measured significant losses in the solar PV system due to voltage conversions and charge/discharge losses in the battery. The solar charge controller showed a yearly electricity use of 18.10 kWh, meaning that system efficiency was roughly 50%.
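Combining these two measurements with the year's visitor count gives the system efficiency and the per-visitor figures; a quick sketch of the arithmetic:

```python
server_kwh = 9.53        # yearly electricity use measured at the server
controller_kwh = 18.10   # yearly use measured at the solar charge controller
visitors = 865_000       # unique visitors over roughly the same period

system_efficiency = server_kwh / controller_kwh     # ~0.53, roughly 50%
wh_per_visitor = controller_kwh * 1000 / visitors   # ~0.021 Wh, losses included
visitors_per_kwh = 1000 / wh_per_visitor            # ~48,000, "almost 50,000"
```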
During the period under study, the solar powered website received 865,000 unique visitors. Including all energy losses in the solar set-up, electricity use per unique visitor is then 0.021 watt-hour. One kilowatt-hour of solar generated electricity can thus serve almost 50,000 unique visitors, and one watt-hour of electricity can serve roughly 50 unique visitors. This is all renewable energy and as such there are no direct associated carbon emissions.
Embodied Energy Use & Uptime
The story often ends here when renewable energy is presented as a solution for the growing energy use of the internet. When researchers examine the energy use of data centers, which host the content that is accessible on the internet, they never take into account the energy that is required to build and maintain the infrastructure that powers those data centers.
There is no such omission with a self-hosted website powered by an off-the-grid solar PV installation. The solar panel, the battery, and the solar charge controller are equally essential parts of the installation as the server itself. Consequently, energy use for the mining of the resources and the manufacture of these components – the “embodied energy” – must also be taken into account.
A simple representation of our system. The voltage conversion (between the 12V charge controller and the 5V server) and the battery meter (between the server and the battery) are missing. Illustration: Diego Marmolejo.
Unfortunately, most of this energy comes from fossil fuels, either in the form of diesel (mining the raw materials and transporting the components) or in the form of electricity generated mainly by fossil fuel power plants (most manufacturing processes).
The sizing of battery and solar panel is a compromise between uptime and sustainability
The embodied energy of our configuration is mainly determined by the size of the battery and the solar panel. At the same time, the size of battery and solar panel determine how often the website will be online (the “uptime”). Consequently, the sizing of battery and solar panel is a compromise between uptime and sustainability.
To find the optimal balance, we have run (and keep running) our system with different combinations of solar panels and batteries. Uptime and embodied energy are also determined by the local weather conditions, so the results we present here are only valid for our location (the balcony of the author’s home near Barcelona, Spain).
Different sizes of solar panels and batteries. Illustration: Diego Marmolejo.
Uptime and Battery Size
Battery storage capacity determines how long the website can run without a supply of solar power. A minimum of energy storage is required to get through the night, while additional storage can compensate for a certain period of low (or no) solar power production during the day. Batteries deteriorate with age, so it’s best to start with more capacity than is actually needed, otherwise the battery needs to be replaced rather quickly.
> 90% Uptime
First, let’s calculate the minimum energy storage needed to keep the website online during the night, provided that the weather is good, the battery is new, and the solar panel is large enough to charge the battery completely. The average power use of our web server during the first year, including all energy losses in the solar installation, was 1.97 watts. During the shortest night of the year (8h50, June 21), we need 17.40 watt-hour of storage capacity, and during the longest night of the year (14h49, December 21), we need 29.19 Wh.
Because lead-acid batteries should not be discharged below half of their capacity, the solar powered server requires a 60 Wh lead-acid battery to get through the longest nights when solar conditions are optimal (2 x 29.19 Wh). For most of the year we ran the system with a slightly larger energy storage (up to 86.4 Wh) and a 50W solar panel, and achieved the above mentioned uptime of 95-98%.
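The storage requirement is simply the average power draw multiplied by the hours of darkness, doubled because of the depth-of-discharge limit of lead-acid batteries:

```python
avg_power = 1.97                 # watts, server plus all system losses

shortest_night = 8 + 50 / 60     # 8h50 on June 21
longest_night = 14 + 49 / 60     # 14h49 on December 21

min_storage = avg_power * shortest_night   # ~17.4 Wh
max_storage = avg_power * longest_night    # ~29.2 Wh

# Lead-acid batteries should not be discharged below half their capacity,
# so the nominal battery size is double the required storage:
battery_size = 2 * max_storage             # ~58.4 Wh, hence a 60 Wh battery
```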
A larger battery would keep the website running even during longer periods of bad weather, again provided that the solar panel is large enough to charge the battery completely. To compensate for each day of very bad weather (no significant power production), we need 47.28 watt-hour (24h x 1.97 watts) of storage capacity.
From 1 December 2019 to 12 January 2020, we combined the 50 W solar panel with a 168 watt-hour battery, which has a practical storage capacity of 84 watt-hour. This is enough storage to keep the website running for two nights and a day. Even though we tested this configuration during the darkest period of the year, we had relatively nice weather and achieved an uptime of 100%.
However, to assure an uptime of 100% over a period of years would require more energy storage. To keep the website online during four days of low or no power production, we would need a 440 watt-hour lead-acid battery – the size of a car battery. We include this configuration to represent the conventional approach to off-grid solar power.
< 90% Uptime
We also made calculations for batteries that aren’t large enough to get the website through the shortest night of the year: 48 Wh, 24 Wh, and 15.6 Wh (with practical storage capacities of 24 Wh, 12 Wh, and 7.8 Wh, respectively). The latter is the smallest lead-acid battery commercially available.
A website that goes off-line in the evening could be an interesting option for a local online publication with low anticipated traffic after midnight.
If the weather is good, the 48 Wh lead-acid battery will keep the server running during the night from March to September. The 24 Wh lead-acid battery can keep the website online for a maximum of 6 hours, meaning that the server will go off-line each night of the year, although at different hours depending on the season.
Finally, the 15.6 Wh battery keeps the website online for only four hours when there’s no solar power. Even if the weather is good, the server will stop working around 1 am in summer and around 9 pm in winter. The maximum uptime for the smallest battery would be around 50%, and in practice it will be lower due to clouds and rain.
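The runtimes for these smaller batteries follow from the same average power draw, again using only half of the nominal capacity:

```python
avg_power = 1.97   # watts, including all system losses

def hours_without_sun(nominal_wh):
    """Runtime on battery alone, using only half the nominal capacity
    (the depth-of-discharge limit of lead-acid batteries)."""
    return (nominal_wh / 2) / avg_power

# 48 Wh -> ~12 h, 24 Wh -> ~6 h, 15.6 Wh -> ~4 h
for wh in (48, 24, 15.6):
    print(f"{wh} Wh battery: {hours_without_sun(wh):.1f} h without sun")
```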
A website that goes off-line in the evening could be an interesting option for a local online publication with low anticipated traffic after midnight. However, since Low-tech Magazine’s readership is almost equally divided between Europe and the USA, this is not an attractive option. If the website goes down every night, our American readers could only access it during the morning.
Uptime and Solar Panel Size
The uptime of the solar powered website is not only determined by the battery, but also by the solar panel, especially in relation to bad weather. The larger the solar panel, the quicker it will charge the battery and fewer hours of sun will be needed to get the website through the night. For example, with the 50 W solar panel, one to two hours of full sun are sufficient to completely charge any of the batteries (except for the car battery).
Different sizes of solar panels. Illustration: Diego Marmolejo.
Replace the 50 W solar panel by a 10 W solar panel, however, and the system needs at least 5.5 hours to charge the 86.4 Wh battery in optimal conditions (2 W to operate the server, 8 W to charge the battery). If the 10W solar panel is combined with a larger, 168 Wh lead-acid battery, it needs 10.5 hours of full sun to charge the battery completely, which is only possible from February to November.
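The charging times can be estimated by dividing the usable battery capacity by whatever panel power is left over after the server takes its share:

```python
server_power = 2.0   # watts drawn by the server while the sun shines

def hours_to_recharge(panel_watts, battery_wh):
    """Hours of full sun needed to refill the usable half of a
    lead-acid battery while the server keeps running."""
    charging_power = panel_watts - server_power
    return (battery_wh / 2) / charging_power

# With the 10 W panel: ~5.4 h for the 86.4 Wh battery,
# ~10.5 h for the 168 Wh battery.
```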
A larger solar panel increases the chances that the website remains online even when weather conditions are not optimal.
A larger solar panel is equally advantageous during cloudy weather. Clouds can lower solar energy production to anywhere between 0 and 90% of maximum capacity, depending on the thickness of cloud cover. If a 50 watt solar panel produces just 10% of its maximum capacity (5W), that’s still enough to run the server (2W) and charge the battery (3W).
However, if a 10 W solar panel only produces 10% of its capacity, that’s just enough to power the server, and the battery won’t be charged. We ran the website on a 10 W panel from 12 to 21 January 2020, and it quickly went down when the weather was not optimal. We are now powering the website with a 30W solar panel (and a 168 Wh battery).
A 5 W solar panel – the smallest 12V solar panel commercially available – is the absolute minimum required to run a solar powered website. However, only under optimal conditions will it be able to power the server (2W) and charge the battery (3W), and it could only keep the website running through the night if the day is long enough. Because solar panels rarely generate their maximum power capacity, this would result in a website that is online only while the sun shines.
Even though the combination of a small solar panel and large battery can have the same embodied energy as the combination of a large solar panel and a small battery, the system each creates will have very different characteristics. In general, it’s best to opt for a larger solar panel and a smaller battery, because this combination increases the life expectancy of the battery – lead-acid batteries need to be fully charged from time to time or they lose storage capacity.
Embodied Energy for Different Sizes of Batteries and Solar Panels
It takes 1.03 megajoules (MJ) to produce 1 watt-hour of lead-acid battery capacity, and 3,514 MJ of energy to produce one m2 of solar panel. In the table below, we present the embodied energy for different sizes of batteries and solar panels and then calculate the embodied energy per year, based on a life expectancy of 5 years for batteries and 25 years for solar panels. The values are converted to kilowatt-hours per year and refer to primary energy, not electricity.
A solar powered website also needs a charge controller and of course a web server. The embodied energy for these components remains the same no matter the size of solar panel or battery. The embodied energy per year is based on a life expectancy of 10 years. 
We now have all data to calculate the total embodied energy for each combination of solar panels and batteries. The results are presented in the table below. The embodied energy varies by a factor of five depending on the configuration: from 10.92 kWh primary energy per year for the combination of the smallest solar panel (5W) with the smallest battery (15.6 Wh) to 50.46 kWh primary energy per year for the combination of the largest solar panel (50 W) with the largest battery (440Wh).
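The panel and battery part of these figures can be approximated as follows. Note that the panel's power density (`WATTS_PER_M2`) is an assumption of ours, and the charge controller and server are left out, so the results are indicative rather than a reproduction of the figures above:

```python
MJ_PER_WH_BATTERY = 1.03   # embodied energy per Wh of lead-acid capacity
MJ_PER_M2_PANEL = 3514     # embodied energy per m2 of solar panel
WATTS_PER_M2 = 150         # assumed rated panel power per m2 (rough guess)

def embodied_kwh_per_year(panel_w, battery_wh):
    """Yearly embodied primary energy of panel plus battery, amortised
    over 25 and 5 years respectively (1 kWh = 3.6 MJ). The charge
    controller and server are NOT included here."""
    panel_mj_per_year = (panel_w / WATTS_PER_M2) * MJ_PER_M2_PANEL / 25
    battery_mj_per_year = battery_wh * MJ_PER_WH_BATTERY / 5
    return (panel_mj_per_year + battery_mj_per_year) / 3.6

print(embodied_kwh_per_year(5, 15.6))    # smallest configuration
print(embodied_kwh_per_year(50, 440))    # largest configuration
```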
If we divide these results by the number of unique visitors per year (865,000), we obtain the embodied energy use per unique visitor to our website. For our original configuration with 95-98% uptime (50W solar panel, 86.4Wh battery), primary energy use per unique visitor is 0.03 Wh. This result would be pretty similar for the other configurations with a lower uptime, because although the embodied energy is lower, so is the number of unique visitors.
Now that we have calculated the embodied energy of different configurations, we can calculate the carbon emissions. We can’t compare the environmental footprint of the solar powered website with that of the old website, because it is hosted elsewhere and we can’t measure its energy use. What we can compare is the solar powered website with a similar self-hosted configuration that is run on grid power. This allows us to assess the (un)sustainability of running the website on solar power.
Life cycle analyses of solar panels are not very useful for working out the CO2-emissions of our components because they work on the assumption that all energy produced by the panels is used. This is not necessarily true in our case: the larger solar panels waste a lot of solar power in optimal weather conditions.
Hosting the solar powered Low-tech Magazine for a year has produced as much emissions as an average car driving a distance of 50 km.
We therefore take another approach: we convert the embodied energy of our components to litres of oil (1 litre of oil is 10 kWh of primary energy) and calculate the result based on the CO2-emissions of oil (1 litre of oil produces 3 kg of greenhouse gases, including mining and refining it). This takes into account that most solar panels and batteries are now produced in China – where the power grid is three times as carbon-intensive and 50% less energy efficient than in Europe.
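That conversion, sketched in code. The 30 kWh/year input for the first-year configuration is inferred from the 3 litres / 9 kg result reported in the text:

```python
def embodied_to_co2(kwh_per_year):
    litres_oil = kwh_per_year / 10   # 1 litre of oil = 10 kWh primary energy
    kg_co2 = litres_oil * 3          # 1 litre of oil = 3 kg greenhouse gases
    return litres_oil, kg_co2

# First-year configuration (50 W panel, 86.4 Wh battery): the reported
# 3 litres / 9 kg implies roughly 30 kWh of embodied energy per year.
litres, kg = embodied_to_co2(30)     # (3.0, 9.0)
```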
This means that fossil fuel use associated with running the solar powered Low-tech Magazine during the first year (50W panel, 86.4 Wh battery) corresponds to 3 litres of oil and 9 kg of carbon emissions – as much as an average European car driving a distance of 50 km. Below are the results for the other configurations:
Comparison with Carbon Intensity of Spanish Power Grid
Now let’s calculate the hypothetical CO2-emissions from running our self-hosted web server on grid power instead of solar power. CO2-emissions in this case depend on the Spanish power grid, which happens to be one of the least carbon intensive in Europe due to its high share of renewable and nuclear energy (respectively 36.8% and 22% in 2019).
Last year, the carbon intensity of the Spanish power grid decreased to 162 g of CO2 per kWh of electricity. For comparison, the average carbon intensity in Europe is around 300g per kWh of electricity, while the carbon intensity of the US and Chinese power grid are respectively above 400g and 900g of CO2 per kWh of electricity.
If we just look at the operational energy use of our server, which was 9.53 kWh of electricity during the first year, running it on the Spanish power grid would have produced 1.54 kg of CO2-emissions, compared to 3 - 9 kg in our tested configurations. This seems to indicate that our solar powered server is a bad idea, because even the smallest solar panel with the smallest battery generates more carbon emissions than grid power.
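The grid-power figure is a straightforward multiplication, shown here for comparison:

```python
server_kwh = 9.53      # operational electricity use during the first year
grid_g_per_kwh = 162   # carbon intensity of the Spanish grid, g CO2 per kWh

kg_co2_on_grid = server_kwh * grid_g_per_kwh / 1000   # ~1.54 kg of CO2
```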
When the carbon intensity of the power grid is measured, the embodied energy of the renewable power infrastructure is taken to be zero.
However, we’re comparing apples to oranges. We have calculated our emissions based on the embodied energy of our installation. When the carbon intensity of the Spanish power grid is measured, the embodied energy of the renewable power infrastructure is taken to be zero. If we calculated our carbon intensity in the same way, of course it would be zero, too.
Ignoring the embodied carbon emissions of the power infrastructure is reasonable when the grid is powered by fossil fuel power plants, because the carbon emissions to build that infrastructure are very small compared to the carbon emissions of the fuel that is burned. However, the reverse is true of renewable power sources, where operational carbon emissions are almost zero but carbon is emitted during the production of the power plants themselves.
To make a fair comparison with our solar powered server, the calculation of the carbon intensity of the Spanish power grid should take into account the emissions from building and maintaining the power plants, the transmission lines, and – should fossil fuel power plants eventually disappear – the energy storage. Of course, ultimately, the embodied energy of all these components would depend on the chosen uptime.
Possible Improvements
There are many ways in which the sustainability of our solar powered website could be improved while maintaining our present uptime. Producing solar panels and batteries using electricity from the Spanish grid would have the largest impact in terms of carbon emissions, because the carbon footprint of our configuration would be roughly 5 times lower than it is now.
What we can do ourselves is lower the operational energy use of the server and improve the system efficiency of the solar PV installation. Both would allow us to run the server with a smaller battery and solar panel, thereby reducing embodied energy. We could also switch to another type of energy storage or even another type of energy source.
We already made some changes that have resulted in a lower operational energy use of the server. For example, we discovered that more than half of total data traffic on our server (6.63 of 11.16 TB) was caused by a single broken RSS implementation that pulled our feed every couple of minutes.
A difference in power use of 0.19 watts adds up to 4.56 watt-hours over the course of 24 hours, which means that the website can stay online for more than 2.5 hours longer.
Fixing this, along with some other changes, lowered the power use of the server (excluding energy losses) from 1.14 watts to about 0.95 watts. The gain may seem small, but a difference in power use of 0.19 watts adds up to 4.56 watt-hours over the course of 24 hours, which means that the website can stay online for more than 2.5 hours longer.
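As a rough sanity check, the arithmetic can be sketched in a few lines of Python. This is a simplification, not a measurement: it assumes the 24 Wh LiPo battery mentioned in the notes, and takes the ~72%-efficient 12V to 5V conversion as the only loss between battery and server.

```python
# Back-of-the-envelope check of the power savings (a sketch, not a
# measurement). Assumptions: a 24 Wh battery, and a ~72%-efficient
# 12V -> 5V conversion as the only loss between battery and server.

old_power_w = 1.14   # server power use before the fixes
new_power_w = 0.95   # server power use after the fixes

# Energy saved over a full day:
saving_wh = (old_power_w - new_power_w) * 24
print(f"daily saving: {saving_wh:.2f} Wh")

# Extra runtime on a 24 Wh battery, with 28% conversion losses:
usable_wh = 24 * 0.72
extra_hours = usable_wh / new_power_w - usable_wh / old_power_w
print(f"extra runtime: {extra_hours:.1f} h")
```

With these assumptions the extra runtime comes out at roughly three hours, consistent with the "more than 2.5 hours" figure.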
System efficiency was only 50% during the first year. Energy was lost in charging and discharging the battery (22% of the energy generated), as well as in the voltage conversion from 12V (solar PV system) to 5V (USB connection), which accounted for another 28%. The initial voltage converter we built was rather inefficient (our solar charge controller doesn't have a built-in USB connection), so we could build a better one, or switch to a 5V solar PV set-up.
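Note that both loss figures are expressed as shares of the total energy generated by the panel, so they can simply be added up – a minimal sketch:

```python
# System efficiency from the measured losses. Both figures are
# fractions of the total energy generated, so they sum directly.

battery_loss = 0.22      # charging and discharging the lead-acid battery
conversion_loss = 0.28   # 12V -> 5V voltage conversion

system_efficiency = 1 - battery_loss - conversion_loss
print(f"system efficiency: {system_efficiency:.0%}")
```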
To increase the efficiency of the energy storage, we could replace the lead-acid batteries with more expensive lithium-ion batteries, which have lower charge/discharge losses (<10%) and lower embodied energy. More likely, we will eventually switch to a more poetic small-scale compressed air energy storage (CAES) system. Although low pressure CAES systems have an efficiency similar to lead-acid batteries, they have much lower embodied energy thanks to their long life expectancy (decades instead of years).
Another way to lower the embodied energy is to switch to another renewable energy source. Solar PV power has high embodied energy compared to alternatives such as wind, water, or human power. These power sources could be harvested with little more than a generator and a voltage regulator – the rest of the power plant could be built out of wood. Furthermore, a water-powered website wouldn’t require high-tech energy storage. If you’re in a cold climate, you could even run a website on the heat of a wood stove, using a thermo-electric generator.
People who have a good supply of wind or water power could build a system with lower embodied energy than ours. However, unless the author starts powering his website by hand or foot, we’re pretty much stuck with solar power. The biggest improvement we could make is to add a solar tracker that makes the panel follow the sun, which could increase electricity generation by as much as 30%, and allow us to obtain a better uptime with a smaller panel.
Let’s Scale Things Up!
A final way to improve the sustainability of our system would be to scale it up: run more websites on a server, and run more (and larger) servers on a solar PV system. This set-up would have much lower embodied energy than an oversized system for each website alone.
Illustration: Diego Marmolejo.
Solar Webhosting Company
If we were to fill the author’s balcony with solar panels and start a solar powered webhosting company, the embodied energy per unique visitor would decrease significantly. We would need only one server for multiple websites, and only one solar charge controller for multiple solar panels. Voltage conversion would be more energy efficient, and both solar and battery power could be shared by all websites, which brings economies of scale.
Of course, this is the very concept of the data center, and although we have no ambition to start such a business, others could take this idea forward: towards a data center that is run just as efficiently as any other data center today, but which is powered by renewables and goes off-line when the weather is bad.
Add More Websites
We found that the capacity of our server is large enough to host more websites, so we already took a small step towards economies of scale by moving the Spanish and French versions of Low-tech Magazine to the solar powered server (as well as some other translations).
Although this move will increase our operational energy use and potentially also our embodied energy use, we also eliminate other websites that are or were hosted elsewhere. We also have to keep in mind that the number of unique visitors to Low-tech Magazine may grow in the future, so we need to become more energy efficient just to maintain our environmental footprint.
Combine Server and Lighting
Another way to achieve economies of scale would give a whole new twist to the idea. The solar powered server is part of the author’s household, which is also partly powered by off-grid solar energy. We could test different sizes of batteries and solar panels – simply swapping components between solar installations.
When we were running the server on the 50 W panel, the author was running the lights in the living room on a 10 W panel – and was often left sitting in the dark. When we were running the server on the 10 W panel, it was the other way around: there was more light in the household, at the expense of a lower server uptime.
If the weather gets bad, the author could decide not to use the lights and keep the server online – or the other way around
Let’s say we run both the lights and the server on one solar PV system. It would lower the embodied energy if both systems are considered, because only one solar charge controller would be needed. Furthermore, it could result in a much smaller battery and solar panel (compared to two separate systems), because if the weather gets bad, the author could decide not to use the lights and keep the server online – or the other way around. This flexibility is not available now, because the server is the only load and its power use cannot be easily manipulated.
Energy Use in the Network
As far as we know, ours is the first life cycle analysis of a website that runs entirely on renewable energy and includes the embodied energy of its power and energy storage infrastructure. However, this is not, of course, the total energy use associated with this website.
There’s also the operational and embodied energy of the network infrastructure (which includes our router, the internet backbone, and the mobile phone network), and the operational and embodied energy of the devices that our visitors use to access our website: smartphones, tablets, laptops, desktops. Some of these have low operational energy use, but they all have very limited lifespans and thus high embodied energy.
Energy use in the network is directly related to the bit rate of the data traffic that runs through it, so our lightweight website is just as efficient in the communication network as it is on our server. However, we have very little influence over which devices people use to access our website, and the direct advantage of our design is much smaller here than in the network. For example, our website has the potential to increase the life expectancy of computers, because it’s light enough to be accessed with very old machines. Unfortunately, our website alone will not make people use their computers for longer.
Both the network infrastructure and the end-use devices could be re-imagined along the lines of the solar powered website.
That said, both the network infrastructure and the end-use devices could be re-imagined along the lines of the solar powered website – downscaled and powered by renewable energy sources with limited energy storage. Parts of the network infrastructure could go off-line if the local weather is bad, and your e-mail might be held up by a rainstorm 3,000 km away. This type of network infrastructure actually exists in some countries, and those networks partly inspired this solar powered website. The end-use devices could have low energy use and long life expectancy.
Because the total energy use of the internet is usually measured to be roughly equally distributed over servers, network, and end-use devices (all including the manufacturing of the devices), we can make a rough estimate of the total energy use of this website throughout a re-imagined internet. For our original set-up with 95.2% uptime, this would be 87.6 kWh of primary energy, which corresponds to 9 litres of oil and 27 kg of CO2. The improvements we outlined earlier could bring these numbers further down, because in this calculation the whole internet is powered by oversized solar PV systems on balconies.
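The oil and CO2 equivalents can be reproduced with generic conversion factors. The values used below – about 9.7 kWh of primary energy per litre of oil, and about 0.31 kg of CO2 per kWh – are our own approximations, not figures taken from a specific source:

```python
# Rough reproduction of the oil and CO2 equivalents of 87.6 kWh of
# primary energy. The conversion factors are assumed approximations.

primary_energy_kwh = 87.6
kwh_per_litre_oil = 9.7   # assumed energy content of a litre of oil
co2_kg_per_kwh = 0.31     # assumed emissions per kWh of primary energy

litres_oil = primary_energy_kwh / kwh_per_litre_oil
co2_kg = primary_energy_kwh * co2_kg_per_kwh

print(f"about {litres_oil:.0f} litres of oil, {co2_kg:.0f} kg of CO2")
```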
Kris De Decker, Roel Roscam Abbing, Marie Otsuka
Thanks to Kathy Vanhout, Adriana Parra and Gauthier Roussilhe.
Proofread by Alice Essam.
The storage capacity for our original set-up is an estimation. In reality, during this period we ran the solar powered server on a 24 Wh (3.7V, 6.6Ah) LiPo battery, and placed a very old 84.4 watt-hour lead-acid battery in between the LiPo and the solar charge controller to make both systems compatible. The cut-off voltage of the lead-acid battery was set very high in summer (meaning that the system was running only on the LiPo) but lower in winter (so that the lead-acid battery provided part of the energy storage). This complicated set-up was entirely due to the fact that we could only measure the storage capacity of the LiPo battery, which we needed to display our online battery meter. In November 2019 we developed our own lead-acid battery meter, which made it possible to eliminate the LiPo from our configuration.
 “Energy Analysis of Batteries in Photovoltaic systems. Part one (Performance and energy requirements)" and “Part two (Energy Return Factors and Overall Battery Efficiencies)". Energy Conversion and Management 46, 2005
 Zhong, Shan, Pratiksha Rakhe, and Joshua M. Pearce. "Energy payback time of a solar photovoltaic powered waste plastic recyclebot system." Recycling 2.2 (2017): 10.
 There is little useful research into the embodied energy of solar charge controllers. Most studies focus on large solar PV systems, in which the charge controller’s embodied energy is negligible. The most useful result we found was a value of 1 MJ/W, estimated over the size of the controller: Kim, Bunthern, et al. "Life cycle assessment for a solar energy system based on reuse components for developing countries." Journal of cleaner production 208 (2019): 1459-1468. For a capacity of 120W, this comes down to 120 MJ or 33.33 kWh. For the life expectancy, we found values of 7 years and 12.5 years: same reference, and Kim, Bunthern, et al. "Second life of power supply unit as charge controller in PV system and environmental benefit assessment." IECON 2016-42nd Annual Conference of the IEEE Industrial Electronics Society. IEEE, 2016. We decided to make the calculation based on a life expectancy of 10 years.
There is no research about the embodied energy of our server. We calculated the embodied energy on the basis of a life cycle analysis of a smartphone: Ercan, Mine & Malmodin, Jens & Bergmark, Pernilla & Kimfalk, Emma & Nilsson, Ellinor. (2016). Life Cycle Assessment of a Smartphone (https://www.ericsson.com/en/reports-and-papers/research-papers/life-cycle-assessment-of-a-smartphone). 10.2991/ict4s-16.2016.15. We have no idea of the expected lifetime of the server, but since our Olimex is aimed at industrial use (unlike the Raspberry Pi), we assume a life expectancy of 10 years, just like the charge controller.
 De Decker, Kris. "How sustainable is solar PV power?", Low-tech Magazine, May 2015.
During the last months we have been working on transforming Low-tech Magazine into a multilingual publication. Many articles had been translated over the years, but they were not easy to find. Now, each language has its own solar powered main page.
The Spanish and French versions are the most complete for now, with respectively 36 and 11 articles online. Dutch, German and Polish main pages are also available, and we just received the first Russian and Italian translations. Some languages also have articles that are not translated into English.
Every language can be accessed by clicking on the large dot on the right side of the menu. There are also links to translations in the articles themselves.
Translators
Most articles have been translated by volunteers, whose names are mentioned just below the introduction. If you are interested in doing translations, please get in touch. If it concerns a new language, we’ll also ask you to translate some site elements.
Newsletters
Several languages have their own e-mail newsletter: English, Spanish, French, and Dutch. Subscribers will receive a maximum of twelve e-mails per year. There’s a lot more content to add, so stay tuned.
Colophon
- Website design and development: Marie Otsuka
- Content production: Kathy Vanhout
- Translators (published): Aliana Bertolo, Benoît Bride, Albert Cuesta, Sévérine D., Guillaume Dutilleux, Framalang, Zeltia González Blanco, Michal Kolbusz, Alexander López, Nekane López Azurmendi, Bertrand Louart, Camille Martin, Jordi Parra, Martin Randelhoff, Arnaud Robert, Angela Schult, José Vera.
The fire – which we have used in our homes for over 400,000 years – remains the most versatile and sustainable household technology that humanity has ever known. The fire alone provided what we now get through a combination of modern appliances such as the oven and cooking hob, heating system, lights, refrigerator, freezer, hot water boiler, tumble dryer, and television. Unlike these newer technologies, the fire had no need for a central infrastructure to make it work, and it could be built locally from readily available materials.
Illustration: Diego Marmolejo.
From Open Hearth to Power Plant
The habitual use of fire dates back at least 300,000 to 400,000 years. [1-2] Until the twentieth century, the biomass-fuelled fire was the only energy using “appliance” in the household – whether people were living in a cave, a temporary hut, or a permanent building. The earliest shelters were often erected with the express purpose of keeping fire alive, protecting it from wind and rain.
For most of history, the fire appeared in the form of an open hearth, which was built on an earthen floor in the middle of a shelter. The smoke of the fire escaped through a hole in the roof. Beginning in the fourteenth century in Europe, the open hearth was gradually replaced by a fireplace connected to a chimney, most often built against a wall. In colder regions (such as Scandinavia), people built more energy efficient tile stoves, while in milder climates (such as those around the Mediterranean), people continued to use braziers – portable metal baskets in which charcoal was burnt. In the 18th and 19th centuries, fireplaces were starting to be replaced by metal stoves.
The fire remained central in the household until the 20th century, when it was replaced by a wide range of appliances, plugged into central infrastructures. Today, in industrial societies, even metal stoves have become rare in households. Open burning has been all but banned, especially in cities. New buildings no longer have fireplaces, chimneys, or a hole in the roof.
The fire remained central in the household until the 20th century, when it was replaced by a wide range of appliances, plugged into central infrastructures.
“Paradoxically”, writes Luis Fernández-Galiano in Fire and Memory: On Architecture and Energy, “the dwellings that began as places to promote the fire, today shun open burning”.  In Fire: A Brief History, Stephen J. Pyne observes that: “Urban residents can pass years without seeing a fire. It appears mostly by accident or arson, and almost always as a danger”. 
However, the fire has far from disappeared. Thousands of individual fires in households have been replaced by a few giant fires in central power plants. And the fire also burns elsewhere. “In our economy of abundance”, writes Stephen J. Pyne, “fire is at the heart of the magic – in factories, automobiles, homes and power plants... Modern cities remain fire-driven ecosystems... Shut down combustion and you shut down the city. But open flame itself has vanished. Like a black hole in space, fire has shaped everything around it without itself being visible.”
Industrialisation has only altered, not abolished burning. Most importantly, fire started using another energy source: fossil fuels instead of biomass. Until the twentieth century, almost all human-made fires were the product of renewable energy sources: wood, grass, dung – peat and some early uses of coal being the exceptions. Today in industrial societies, almost all fire “at the heart of the magic” burns on gas, coal or oil.
Fire vs. Electricity
Globally, a few billion people still live in households built around an old-fashioned fire, often in the form of an open hearth. Some people in the Western world consider this a backward and primitive practice that needs to be abolished – even though it is based on the use of renewable energy sources. For example, in 2011, the UN and the World Bank launched the Sustainable Energy for All initiative, aiming to “ensure universal access to modern energy services” by 2030.  The concept of “modern energy services” is vague, but it essentially refers to the use of electricity and gas – and thus, in practice, the use of fossil fuels.
“Urbanites see fire as a technology for which other, more advanced technologies can substitute”
Initiatives like this imply that “modern energy services” are “better” than the traditional open hearth or fireplace. “Urbanites see fire as a technology for which other, more advanced technologies can substitute”, writes Stephen J. Pyne. “If fire is a device, they want an improved flame- and smoke-free upgrade”. Examples of such flame- and smoke-free upgrades are today’s solar PV panels and wind turbines, which are supposed to end our dependence on fossil fuelled fires to produce “modern energy services”. However, how do open hearths and “modern energy services” – including those based on renewable energy sources – actually compare in terms of efficiency, sustainability, health and safety? What are we really saying when we argue that electricity or gas are “better” than a traditional fire?
The Versatility of a Fire
One reason why people in industrial societies regard open fire as inefficient and unsustainable is because they simply don’t know how their ancestors actually used it. If these days a fire is considered to be inefficient, it’s because we only measure the efficiency of one of its functions, usually space heating. However, our ancestors did not only use the fire to warm themselves. They also used it for cooking, illumination, food preservation, hot water production, clothes drying, and protection from predators and insects, among other things.
Illustration: Diego Marmolejo.
Fire is extremely versatile: it’s hard to say which of its functions were most valued by our ancestors. Therefore, if we measure the energy use of a household fire and compare it to modern technology, we should not compare it to the energy use of a heating system or a cooking stove alone, but to the energy use of the entire household.
Cooking With Fire
As a cooking device alone, the fire can accommodate a wide variety of cooking methods and replace a surprisingly large number of modern kitchen appliances. The fire not only functioned as a cooking stove, but also as an oven. For roasting and grilling, food was held on a turning spit and cooked by direct exposure to the fire. Baking happened in a clay container (a “Dutch oven”) which was put in the ashes of the fire. Alternatively, a separate bake oven was built into the jamb or the rear of the fireplace, or as a freestanding structure outside the house. Boiling and frying happened in a pot that was hanging above the fire. [6-7]
The functions of many smaller electrical kitchen appliances were also provided by the fire. For example, you may think that people started eating toast when the electric toaster appeared in the twentieth century, but before that time they simply held a “toasting fork” over the fire. Likewise, quickly preparing warm drinks did not begin with the invention of the electric immersion heater: earlier on, people immersed a red-hot iron tool in a cup, producing hot beverages in a matter of seconds. 
As a cooking device alone, the fire can accommodate a wide variety of cooking methods and replace a surprisingly large number of modern kitchen appliances.
The fire also substituted for today’s refrigerator and freezer. In The Food Axis: Cooking, eating, and the architecture of American houses, Elizabeth Collins Cromley describes how meat and fish were suspended for several weeks in the smoke of a fire to preserve them for longer.  At the simplest level, our ancestors hung their cuts of meat or fish in the kitchen chimney or – if there was no chimney – high above the hearth, suspended from the ceiling. Smoking fish and meat could also happen in a chimney smoke chamber, which was either an adjunct to the kitchen fireplace, or a chamber built off the chimney in the basement or attic. The smokehouse could also be a separate building.
Several other food preservation methods were dependent on fire. Fruits, vegetables and herbs were dried by fire if the local climate wasn’t sunny enough. Sugaring fruits and making butter and cheese all depended on heat from a fire. Salt, essential for food preservation, was kept in a box hung against the fireplace to keep it dry.
Distributing Heat and Light
A fire not only produces heat and smoke – it also produces light. As a light source, fire was just as versatile as electric lighting is today. The light of a fire resided not only in the hearth or the fireplace, but also in torches, rushlights, and later candles and oil lamps. [9-10] Heat from a fire could also be spread all over the household. Although the kitchen was usually the only space in the house that was heated, embers from the fire could be put into portable heating devices, such as foot stoves and bed warmers. 
The fire was also used to heat water for cleaning and washing, a practice that continued when cast-iron wood stoves appeared – many of these had hot water tanks. Furthermore, the fire took care of drying clothes, substituting for today’s tumble dryer. And people didn’t just start ironing their clothes when the electric iron came along. Since the middle ages, our ancestors used plain metal irons that were heated by a fire or on a stove, or a “box iron”, which held glowing charcoal inside – some of these had a small chimney to keep smoky smells away from the clothes. 
People didn't just start ironing their clothes when the electric iron came along. Since the middle ages, our ancestors used plain metal irons that were heated by a fire or on a stove.
There was also the function of the fire as a focal point of communication and socialisation. For thousands of years, the hearth was the “ancient focus of conversation and the crackling soul of the house”. Televisions and mobile phones have taken over these roles, although it is doubtful whether they hold the same appeal as a fire does. A host of electronic consumer products that imitate the effects of a fire – electric candles and fireplaces, LED bulbs with flickering flame effects, videos of crackling fires – seem to indicate that humans miss open fire.
Sustainability and Efficiency
In a household built around a fire, the making of hot beverages and toast, the drying of clothes, or the illumination of the space does not raise the energy use of the fire: it simply makes more efficient use of the fire that is already there for other purposes – like space heating. To achieve the same result today, we have to turn on several appliances, all of which require extra energy: the heating system, the immersion heater, the electric toaster, the tumble dryer, and the lights.
Furthermore, we should also take into account the mining and the energy use required to replace one fire with dozens of factory-made appliances, which all need to be distributed to individual consumers. Finally, we should take into account the energy and materials that are required to build and maintain the infrastructures that these appliances depend on to operate, like the power grid, gas infrastructure, or the cold chain. In contrast, an open hearth can be built locally with readily available materials, and it operates independently of centralised infrastructures.
Illustration: Diego Marmolejo.
Today’s renewable power plants, such as solar PV panels or wind turbines, don’t properly address the energy question: they also need to be manufactured, transported, maintained and disposed of, and they imply that we can keep designing, producing and discarding an ever-growing range of electric household appliances in order to satisfy our needs. Neither would biomass electricity make this system sustainable: although it eliminates the use of fossil fuels, a great deal of energy is lost in converting biomass to electricity, and we still need factories to manufacture the electric appliances and the infrastructures.
Energy Use Compared: Ancient vs. Modern Households
If we look at the energy use in European households today, we see that on average 64% of all energy goes to space heating, 15% to water heating, 14% to lights and appliances, 5% to cooking and 1% to other services (including cooling).  Most of these services can be supplied by fire. So, how does the energy use of a traditional household with open hearth compare to the energy use of a modern household built around appliances and infrastructures?
Obviously, the energy use of modern houses is better documented than that of buildings and shelters from times gone by. However, there is research documenting the energy use of households that still rely on a traditional fire.
If we measure the energy use of a household fire and compare it to modern technology, we should compare it to the energy use of the entire household.
A 2002 investigation of firewood consumption in traditional houses in Nepal measured the annual firewood consumption per household at between 6 and 33 m3, which corresponds to between 35 and 165 gigajoules (GJ) of energy. [14-16] This seems like a lot compared to the total energy use of contemporary households, which is around 75 GJ per year in Germany and around 105 GJ in Canada.
However, the average Nepalese household participating in the research consisted of 5 to 12 people, while households in modern societies have shrunk to little more than two people. In the Nepalese households under study, energy use was between 2 and 33 GJ per capita, while another, more recent research paper on firewood consumption for heating, cooking and lighting in Nepal calculates a per capita use of roughly 2.5 to 10 GJ of energy per person per year. [17-18] In comparison, total household energy consumption per capita is around 30 to 40 GJ in countries like the Netherlands, Germany and Canada.
10 Billion People Around the Hearth
Even without taking into account the extra resources needed to build the appliances and the infrastructures, energy use in the pre-industrial household seems to have been significantly lower than it is today. In fact, a quick calculation reveals that – at least in theory – 10 billion people using an open hearth as their only energy source would be a perfectly sustainable practice.
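That quick calculation can be written out explicitly. The figures – 6 m³ of firewood per person per year, and 0.2 ha of coppice for a sustained annual yield of 1 m³ – are the ones used in the paragraph that follows:

```python
# Could 10 billion people run their households on an open hearth
# without deforestation? A sketch using the figures from the article.

population = 10e9
wood_m3_per_capita = 6      # annual firewood use per person
coppice_ha_per_m3 = 0.2     # coppice area for a sustained 1 m3/year

wood_needed_m3 = population * wood_m3_per_capita
forest_km2 = wood_needed_m3 * coppice_ha_per_m3 / 100  # 1 km2 = 100 ha

land_area_km2 = 150e6  # total land area of the planet
print(f"forest needed: {forest_km2 / 1e6:.0f} million km2, "
      f"or {forest_km2 / land_area_km2:.0%} of all land")
```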
Assuming an average firewood consumption of 6 m3 per capita, we would need 60 billion cubic metres of wood annually. One cubic metre of wood requires an annual yield of 0.2 ha of coppice, so we need 12 billion ha or 120 million square kilometres of forest if we want to avoid deforestation. That’s three times as much forest as we have today, and about 80% of the total land area of our planet (150 million square kilometres). Because we don’t need extra space for factories and roads to make and distribute consumer goods, we could actually go back to the open hearth without destroying our environment. The same cannot be said of 10 billion people relying on fossil fuels and modern infrastructures.
Health vs. Sustainability
If not for their sustainability or efficiency, then why do we consider “modern energy services” superior to a traditional fire? The suppression of open fire in modern cities is supported by two extra arguments: fire is unhealthy (it produces air pollution), and it is dangerous (it carries the risk of an uncontrollable fire). These risks are real, but how does the fire compare to “modern energy services” in terms of health and safety?
There is no doubt that the replacement of the household fire by modern infrastructures has improved air quality, health and safety in cities. However, this may only be a temporary gain: modern infrastructures are at least as hazardous to safety and health because of their dependence on fossil fuels.
How does the fire compare to “modern energy services” in terms of health and safety?
For example, the heat waves and forest fires ravaging Australia as I write this are killing people and destroying property, and they are producing thick smoke that continues to blanket some of the largest cities. These fires are not caused by people using open hearths. They are a consequence of climate change, which is mainly caused by people’s use of industrial infrastructures – powered by fossil fuels.
The heavy dependence on central infrastructures for so many vital needs is another health and safety risk: cut the power supply to a large city and almost everything stops working – including the sewer network, the food storage, and the burglar alarms.
Our troubled view of the old-fashioned fire is partly rooted in the conflation of two distinct concepts: “health” and “sustainability”. Indeed, something can be healthy, safe and sustainable at the same time, like walking – provided there is a sidewalk. But something can also be healthy and safe but not very sustainable (like a refrigerator, because it depends on an energy-intensive cold chain), and something can be sustainable but not very healthy or safe (like a smokeroom for meat and fish in the basement).
Health and longevity are things that we, as individuals, "need", want, desire, or feel entitled to. Just like we feel entitled to certain levels of comfort, convenience, speed or cleanliness. On the other hand, defining sustainability requires us to question what levels of human comfort, convenience, cleanliness, speed, safety and health our environment can support before it collapses. We can choose safety and health over sustainability when they are in conflict with each other, but only at the expense of the safety and health of younger and future generations.
Kris De Decker
- Subscribe to our newsletter
- Support Low-tech Magazine via Paypal or Patreon.
- Buy the printed website.
 Roebroeks, Wil, and Paola Villa. "On the earliest evidence for habitual use of fire in Europe.". Proceedings of the National Academy of Sciences 108.13 (2011): 5209-5214.
 Berna, Francesco, et al. "Microstratigraphic evidence of in situ fire in the Acheulean strata of Wonderwerk Cave, Northern Cape province, South Africa." Proceedings of the National Academy of Sciences 109.20 (2012): E1215-E1220.
 Fernández, Guillén, and Luis Fernández-Galiano. Fire and memory: on architecture and energy. Mit Press, 2000.
 Pyne, Stephen J. Fire: a brief history. University of Washington Press, 2019.
 Collins Cromley, Elizabeth. The food axis: cooking, eating, and the architecture of American houses. University of Virginia Press, 2010.
Unlike today’s gas or electric stoves and ovens, a fire has no buttons to control its temperature. For boiling and simmering, this was solved by hanging the pots on a crane, which could be raised or lowered. In ovens, cooks baked pies or bread first, while the oven was hottest; then, as the oven cooled down, gingerbread, custards, and finally grains could be put in to dry. 
 Marcoux, Paula. Cooking with fire: From roasting on a spit to baking in a tannur, rediscovered techniques and recipes that capture the flavors of wood-fired cooking. Storey Publishing, 2014.
 Hough, Walter. Fire as an agent in human culture. No. 139. Govt. print. Off., 1926.
 The energy source for these distributed fires were wood, resin, wax, fat, grease or oil. Needs for special concentration and position of the source of illumination stimulated the invention of holders, brackets, and stands. 
 Heating people, not spaces: restoring the old way of warming, Kris De Decker, Low-tech Magazine, 2016.
 History of ironing, Old & Interesting, retrieved December 26, 2019.
 Energy consumption and use by households, Eurostat, 2019.
 Rijal, H. B., and H. Yoshida. "Investigation and evaluation of firewood consumption in traditional houses in Nepal." Proceedings: Indoor Air (2002): 1000-1005.
 The energy content of 1 m3 of wood also depends on the type of wood and how it is stacked. I’ve compared apples to apples when it was possible, but this was not always the case so the result is only a rough estimate.
 The annual firewood usage in 18th century Austria (Carinthia) was limited to 35 m3 per household. Source: Peter, Sieferle Rolf. The subterranean forest. Cambridge: The White Horse Press, 2001.
 Rijal, Hom Bahadur. "Firewood Consumption in Nepal." Sustainable Houses and Living in the Hot-Humid Climates of Asia. Springer, Singapore, 2018. 335-344.
 The results are 0.5 to 2 m3 of firewoord per person per year, which I have converted to 2.5 to 10 GJ of energy per person per year.
The second volume of the printed website is out now and contains 32 articles originally published between 2007 and 2012. The book, which has 618 pages and 268 images, sells for $25.20 in the Lulu Bookstore.
Whereas the first volume contained all but a handful of web articles published between 2012 and 2018, this second volume features a third of the web articles published in the earlier years, carefully selected for their continued relevance and interest today. Overall, we wanted to make an attractive book with timeless articles rather than an exact copy of the website.
Low-tech Magazine 2007-2012, Kris De Decker, ISBN 9781794711525, 618 pp., December 2019
Based on the feedback we received on the first volume, we made a few changes to the design. This volume has more images and they are now located in the middle and not at the end of each article. None of the images is dithered. The cover is almost identical to that of the first book, but the spine has a different color.
For most of the period covered in this volume, Low-tech Magazine did not make use of footnotes, but rather hyperlinks in the text. For the book, these links have been converted to references, and dead links have been replaced by links to copies of pages recorded by the Internet Archive. Ironically, the references in the book are now more up-to-date than those on the website.
This book is printed on demand, meaning that a copy is only printed when someone orders it. It takes a bit longer to receive the book, and because each copy goes straight from printer to client, there is no way for us to control the print quality. If you receive a copy that is badly printed, you should notify Lulu to get a replacement. It’s a very smooth process and there’s no need to return the damaged copy.
More Books?
With the availability of this second book, the most important articles from the first twelve years of Low-tech Magazine are now available on paper. A third volume, containing all articles published after September 2018, is planned but not yet confirmed. We may wait until there is sufficient material to fill another 600-700 pages, or we may publish more often.
We also plan to bring out a print version of No Tech Magazine, Low-tech Magazine’s sister blog that has been curating shorter posts since 2009. But for now, the focus is back on writing.
Contents
- How to downsize a transport network: the Chinese wheelbarrow
- Medieval smokestacks: fossil fuels in pre-industrial times
- The bright future of solar powered factories
- Pedal powered farms and factories: the forgotten future of the stationary bicycle
- Bike powered electricity generators are not sustainable
- The short history of early pedal powered machines
- Insulation: first the body, then the home
- Aerial ropeways: automatic cargo transport for a bargain
- Hand powered drilling tools and machines
- Boat mills: water powered, floating factories
- Recycling animal and human dung is the key to sustainable farming
- The status quo of electric cars: better batteries, same range
- The sky is the limit: human powered cranes and lifting devices
- Wood gas vehicles: firewood in the fuel tank
- Gas bag vehicles
- Trolley canal boats
- How (not) to resolve the energy crisis
- Hoffmann kilns
- Wind powered factories: history (and future) of industrial windmills
- Water powered cable trains
- Get wired (again): Trolleybuses and Trolleytrucks
- Electric road trains in Germany, 1901 - 1950
- The monster footprint of digital technology
- Cargo ships, then and now
- Moonlight towers: light pollution in the 1800s
- Tiles as a substitute for steel: the art of the timbrel vault
- A steam powered submarine: the Ictíneo
- The Citroen 2CV: cleantech from the 1940s
- Life without airplanes: from London to New York in 3 days and 12 hours
- Satellite navigation in the 18th century
- Email in the 18th century: the optical telegraph
- Low-tech Magazine 2007-2012, Kris De Decker, ISBN 9781794711525, 618 pp., December 2019
- Low-tech Magazine 2012-2018, Kris De Decker, ISBN 9780359478330, 710 pp., March 2019.
Images: Jordi Parra.
The daily shower would be hard to sustain in a world without fossil fuels.
The mist shower, a satisfying but forgotten technology which uses very little water and energy, could be a solution.
Designer Jonas Görgen developed a do-it-yourself kit to convert almost any shower into a mist shower and sent me one to try out.
The Carbon Footprint of the Daily Shower
The shower doesn’t get much attention in the context of climate change. However, like airplanes, cars, and heating systems, it has become a very wasteful and carbon-intensive way to provide for a basic need: washing the body. Each day, many of us pour roughly 70 litres of hot water over our bodies in order to be “clean”.
This practice requires two scarce resources: water and energy. The shower's high water consumption gets more attention, but its energy use is just as problematic. Hot water production is the second largest use of energy in many homes (after space heating), and much of it is used for showering. Water treatment and distribution also use a lot of energy.
In contrast to the energy used for space heating, which has decreased during the last decades, the energy used for hot water in households has been steadily growing. One of the reasons is that people are showering longer and more frequently, and using increasingly powerful shower heads. For example, in the Netherlands from 1992 to 2016, shower frequency increased from 0.69 to 0.72 showers per day, shower duration increased from 8.2 to 8.9 minutes, and the average water flow increased from 7.5 to 8.6 litres per minute. 
In many industrial societies it's now common to shower at least once per day
Altogether, the average Dutch person used 50.2 litres of water per day for showering in 2016, compared to “only” 39.5 litres per day in 1992. This is a conservative calculation: these data do not include showers taken outside the home, for example at the gym. Research shows that in many industrial societies, and especially among younger people, it is now common to shower at least once per day. [2-4]
The original man-made shower. Pouring a bucket of water over one's (or another person's) body.
Taking the Dutch as an example, let’s look at the energy use and carbon emissions of a daily hot shower. Heating 76.5 litres of water (8.9 minutes x 8.6 litres per minute) from 18 to 38 degrees Celsius requires 2.1 kilowatt-hours (kWh) of energy. Depending on the energy source (gas, electricity), the carbon intensity of the power grid (US/EU), or the efficiency of the gas boiler (new/old), the resulting CO2-emissions of an average shower amount to 0.462 – 0.921 kg.  If we compare this to the carbon emissions of a relatively fuel efficient car (130 gCO2/km), the emissions of a typical shower equal 3.5 – 7 km of driving, and this result ignores the energy cost demanded by water treatment and distribution.
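For readers who want to check these numbers, the arithmetic is straightforward: the energy needed equals the mass of the water times the specific heat of water times the temperature rise. The sketch below reproduces the figures above; the water heater efficiency of roughly 85% is my own assumption, chosen so the result lands on the article's rounded 2.1 kWh figure, and is not stated in the text.

```python
# Sketch of the shower energy arithmetic. The ~85% water heater efficiency
# is an assumption (not from the article), tuned to match its 2.1 kWh figure.

SPECIFIC_HEAT_WATER = 4186  # J per kg per degree Celsius (1 litre ~ 1 kg)
J_PER_KWH = 3.6e6

def shower_energy_kwh(litres, t_cold, t_hot, efficiency=0.85):
    """Energy to heat `litres` of water from t_cold to t_hot, in kWh."""
    heat_joules = litres * SPECIFIC_HEAT_WATER * (t_hot - t_cold)
    return heat_joules / efficiency / J_PER_KWH

# Average Dutch shower: 8.9 minutes at 8.6 litres/minute, heated 18 -> 38 °C
litres = 8.9 * 8.6                             # 76.5 litres
per_shower = shower_energy_kwh(litres, 18, 38)
print(f"{per_shower:.1f} kWh per shower")      # ~2.1 kWh

# If eight billion people showered like this every day:
global_twh = per_shower * 8e9 * 365 / 1e9      # kWh -> TWh
print(f"{global_twh:,.0f} TWh per year")       # roughly 6,100 TWh
```

The same function also reproduces the global extrapolation made further on: eight billion daily showers add up to around eight times the 745 TWh that wind turbines produced worldwide in 2017.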
The emissions of a typical shower equal 3.5 – 7 km of driving
In principle, the energy for a shower could be generated by renewable energy sources. However, if eight billion people were to shower daily, total energy use per year would be 6,132 terawatt-hours (TWh). This is eight times the energy produced by wind turbines worldwide in 2017 (745 TWh). All (current) wind turbines in the world could provide only 1 billion people with a “sustainable” daily shower. Furthermore, the use of renewable energy sources doesn’t lower the water use of the daily shower. To be clear, renewable energy is part of the solution – solar boilers, biomass, heat generating windmills – but we also need to look at the demand side of washing in a post-carbon world.
More Powerful Showers
Since the early 1990s, low flow shower heads have provided a more water and energy efficient way of showering. These shower heads use between four and nine litres of water per minute, roughly half the flow rate of a normal shower (ten to fifteen litres per minute). Almost half of all Dutch households had a low flow shower head installed in 2016, but as we have seen, the flow rate of the average shower since the 1990s has been increasing, not decreasing. 
A rain shower. Image: soak.com.
That’s because other Dutch people have upgraded to rain showers, which have a water flow of about 25 litres per minute – double that of a normal shower head, and three times more than a low flow shower head. An 8.9-minute rain shower requires 222 litres of water and 6.3 kilowatt-hours of energy to heat it. The carbon footprint corresponds to 14.3 – 21.3 km of driving.
Life Before Showers
It may shock some younger readers, but only fifty years ago most people in industrial societies didn’t shower at all. Wall-mounted shower units, installed over the bathtub, became widespread only in the 1970s, and dedicated shower cubicles became a regular fixture in new homes only from the 1980s and 1990s onwards. [2, 4] Before the arrival of the shower, people took one (or a few) bath(s) per week, and in between they washed themselves at the sink using a washcloth (the so-called sink wash, bird bath or sponge bath).
The weekly water and energy use of a daily shower quickly surpasses the water and energy use of a once, twice or even thrice weekly bath
The shower is often presented as a more sustainable option than the bath, because the latter is said to use more water. However, the weekly water and energy use of a daily shower quickly surpasses that of a once, twice or even thrice weekly bath. The sponge bath is even more water and energy efficient: roughly two litres of water are sufficient to get clean, and the water could even be cold, because the whole body does not get wet at once.
Taking a sponge bath. Summer Morning, a painting by Carl Larsson, 1908
Environmental organisations, water companies, and municipalities encourage people in industrial societies to take shorter showers, use low flow shower heads, and install energy efficient water boilers. There are also factors influencing energy and/or water use that these institutions don’t dare to question: shower frequency, water temperature (“take cold showers”), or the act of showering itself – it is never suggested that a sponge bath would actually suffice. Clearly, the daily hot shower is today regarded not as a luxury but as a basic necessity.
Why Do We Shower?
However, showering does not only wash the body. A shower that’s entirely focused on cleaning the body – a so-called Navy shower or sea shower – takes very little time, energy and water. A Navy shower consists of 30 seconds of water to get wet, soaping the body while the water is off, and another 30 seconds of water to rinse off the soap.
Until the 1970s, showers were only used in barracks or prisons to wash many people in a short time.  Image: La douche au Régiment, a painting by Eugène Chaperon, 1887.
Assuming an average water flow, a (hot) Navy shower uses only 8.3 litres of water and 0.2 kilowatt-hours of energy. A daily sponge bath would use even less water and energy. A nine-minute hot shower per day is by no means a basic necessity: it’s a treat. Since the 1990s, the daily shower has been portrayed in advertisements as a means of relaxation, stress relief, and sensual pleasure. [2, 4]
Mist Showers
The use of the shower to treat oneself seems to be incompatible with a drastic reduction of its water and energy use. However, there is a technology that might just do that: the mist shower. A mist shower atomizes water to very fine drops (less than 10 microns), which greatly reduces the water flow. Buckminster Fuller invented the first one in 1936 as part of his Dymaxion bathroom (he called it a “fog gun”). The idea was taken up again in the 1970s, when several trials and experiments were conducted with both atomised hand washing and showering.
Left: Mist shower developed by NASA. Right: Mist shower developed by the Canadian Minimum Housing Group. Both are from the 1970s. 
NASA developed a mist shower with a hand-held, movable nozzle that incorporated an on-off thumb controlled water valve attached to a flexible hose. The average water use for a nine-minute shower was measured to be 2.2 litres, which corresponds to a water flow of only 0.24 litres per minute. The Canadian Minimum Housing Group developed and tested several mist showers and obtained a flow rate of 0.33 litres of water per minute. In both cases, swab tests of bacteria on the skin showed that mist showers clean the body just as well as a “normal” shower of the same duration – using 30 to 40 times less water.
Jonas Görgen developed a kit that converts almost any shower into a mist shower.
Jonas Görgen, a young designer who graduated from the Design Academy Eindhoven in 2019, became fascinated by the history of the mist shower and decided to build one himself. Compared to earlier mist showers, Görgen has improved the concept in two important ways. First, he developed a kit that can turn almost any shower into a mist shower with very little effort. Second, in contrast to earlier experiments, his mist shower uses not one but three to six nozzles. This turns a functional but very basic mist shower (using only one nozzle), into a pleasant experience that feels just as comfortable and invigorating as a “normal” shower.
A 6-nozzle mist shower in the bathroom of designer Jonas Görgen. Image: Jonas Görgen.
The kit that Jonas sent me contains six nozzles, some connectors and dividers, some flexible plastic tubes (“feel free to cut to any length”), and some pieces of copper wire (“to fix and attach the nozzles in the right positions”). I installed a five-nozzle mist shower in less than twenty minutes, and although the result won’t win a design award (in fact, Jonas built a more beautiful mist shower for his graduation project), as a do-it-yourself shower hack it is simply brilliant.
With five nozzles, I measured a water flow of two litres per minute, which is five times less than my now obsolete shower head
In my set-up, four of the nozzles are fixed (one aimed at the head, one aimed at the back, and two aimed at the hips), while one is flexible and can be aimed where it’s needed -- as in the NASA experiments. Using more than one nozzle increases the water flow, but the water savings remain significant.
For five nozzles, I measured a water flow of two litres per minute, which is five times less than my now obsolete shower head (ten litres per minute) and 12.5 times less than the water flow of a rain shower. It’s unusual to obtain such large savings with so little effort. Jonas writes about his shower that “it is not all a compromise in comfort, as it is sometimes suggested in the research papers of the 1970s” and I totally agree. The difference is clearly due to the fact that earlier mist showers only used one nozzle.
Energy Savings of a Mist Shower
The energy savings of a mist shower are smaller than its water savings. That’s because a mist shower requires a higher water temperature. The increased surface area of the water decreases water use but also causes the heat to dissipate more quickly into the air. Even if the water coming from the tap is at maximum temperature (usually 60 degrees Celsius and already slightly too hot to touch), the mist quickly loses heat the further your body is from the nozzle opening. The trick is to position the nozzles in such a way that they closely surround the body. I did this with the copper wires and some duct tape, but there are more elegant ways.
The energy savings of a mist shower are smaller than its water savings
I found a water temperature of about 50 degrees Celsius to be sufficient for thermal comfort, but a mist shower in winter may require a higher water temperature, so let’s assume a value of 60 degrees to calculate the energy use of my 5-nozzle mist shower. At a flow rate of two litres per minute, an 8.9-minute shower consumes 17.8 litres of water. Heating that volume of water from 18 to 60 degrees requires 1.04 kWh. That’s half the energy use of the average shower in the Netherlands (2.1 kWh), and six times lower than the energy use of a rain shower (6.3 kWh).
Details of Jonas Görgen's do-it-yourself mist shower.
The energy use of a mist shower could be further reduced by showering in an enclosed cabin, which increases thermal comfort at lower water temperatures. Another trick to increase thermal comfort in winter is to open the nozzles a bit so that the surface area of the water decreases. This increases water use but decreases heat loss. It is down to the individual to find a balance between saving energy and saving water, based on local circumstances.
An argument that is often made against water saving shower heads is that people compensate for lower water flows by taking longer showers. A similar argument could be made against mist showers, because the use of mist increases the time needed to rinse the body of soapy water. However, a mist shower of 8.9 minutes offers plenty of time to get rid of soap and shampoo. The test subjects in the NASA experiments all managed to wash and rinse within nine minutes, using only one nozzle on a flexible hose. Washing long hair is more problematic, but here too the problem can be addressed by opening the nozzles a bit more, increasing the water flow.
How Many Nozzles Can We Afford?
The five-nozzle mist shower offers significant water and energy savings compared to a “normal” shower and does so without sacrificing comfort. However, is it sustainable enough? If eight billion people used a five-nozzle mist shower, all wind turbines in the world could still only provide two billion people with a daily hot shower. And, compared to a one-minute Navy shower – which is entirely focused on efficiency, not on comfort – energy use is five times higher, and water use is twice as high. So, let’s see what happens when we decrease the number of nozzles, still assuming average shower frequency and duration.
Three nozzles – with a flow rate of roughly one litre of water per minute – are the minimum for providing the comfort of a hot shower
I found three nozzles – with a flow rate of roughly one litre of water per minute – to be the minimum for providing the comfort of a hot shower. This would bring the water use of an 8.9-minute mist shower down to 8.9 litres, which corresponds to the water use of a one-minute Navy shower. The energy use would come down to 0.52 kWh, two to three times higher than that of a Navy shower. This would provide four billion people with a wind-powered daily hot shower, meaning that if we halved the shower duration (from 8.9 to 4.5 minutes) or showered less frequently (once every two days), the global population could be cleaned and pampered using only wind power.
A nozzle in my mist shower.
If we give up on comfort and simply get clean with as little energy and water as possible, we could take a mist shower using only one nozzle, just like in the seventies. Using just one nozzle I measured the water flow to be 0.3 litres per minute, meaning that an 8.9-minute mist shower would need only 2.67 litres of water and 0.156 kilowatt-hours of energy. The resource use of a mist shower then corresponds to that of a sponge bath, and is significantly lower than that of a one-minute Navy shower. All wind turbines in the world could provide roughly 15 billion people with a daily 8.9-minute hot mist shower.
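The nozzle comparison can be reproduced with the same heat equation used for the conventional shower. The sketch below is a rough reconstruction: the water heater efficiency of about 83% is my own assumption, tuned so the results land near the figures quoted in the text (0.156, 0.52 and 1.04 kWh); the article itself does not state an efficiency.

```python
# Rough reconstruction of the one/three/five nozzle comparison. The ~83%
# heater efficiency is an assumption of mine, not a figure from the article.

SPECIFIC_HEAT = 4186      # J per kg per degree Celsius (1 litre ~ 1 kg)
J_PER_KWH = 3.6e6
MINUTES = 8.9             # average Dutch shower duration
T_COLD, T_HOT = 18, 60    # mist showers need hotter water, ~60 °C

def mist_energy_kwh(flow_lpm, efficiency=0.83):
    """Energy to heat the water for an 8.9-minute mist shower, in kWh."""
    litres = flow_lpm * MINUTES
    return litres * SPECIFIC_HEAT * (T_HOT - T_COLD) / efficiency / J_PER_KWH

for nozzles, flow_lpm in [(1, 0.3), (3, 1.0), (5, 2.0)]:
    litres = flow_lpm * MINUTES
    print(f"{nozzles} nozzle(s): {litres:4.1f} litres, "
          f"{mist_energy_kwh(flow_lpm):.2f} kWh")
```

Running this prints roughly 2.7 litres / 0.16 kWh for one nozzle, 8.9 litres / 0.52 kWh for three, and 17.8 litres / 1.05 kWh for five, matching the article's figures to within rounding.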
If more than fifteen nozzles are used, the energy use of a mist shower is higher than that of a conventional shower
Conversely, the water and – especially – energy use of a mist shower increases quickly as more nozzles are added. With twenty nozzles, the water use is still below that of the average shower (6-7 litres vs. 8.3 litres per minute), but the energy use is already higher: 3.1 kWh compared to 2.1 kWh. Mist showers are not low energy products by definition. It depends how we use them.
Off-Pipe
There’s one problem with mist showers operating with only one to three nozzles: modern water boilers don’t get triggered by a flow rate below 1 litre of water per minute, meaning that only cold mist comes out. This is not a fundamental problem – it’s technically possible to make water boilers that heat small amounts of water – and it brings us to another potential advantage of the mist shower: its effect on the bathroom.
I got so fond of my mist shower that I'm travelling with it.
The modern shower is not a device that stands on its own. It is plugged into several infrastructure networks, like the water supply, the sewer network, and the power grid or gas infrastructure. In contrast, although a mist shower could be plugged into the same infrastructures, it could also operate without them, further reducing the use of resources.
Modern water boilers don’t get triggered by a flow rate below 1 litre of water per minute, meaning that only cold mist comes out
First of all, a switch to mist showers would make it possible to use much smaller and less powerful water boilers, which could be powered by local solar or wind powered systems that are smaller and cheaper than those required for conventional water boilers. With a minimal mist shower, one could even question the need for a water boiler at all. The quantity of water is so small (2.67 litres) that it could be heated on the fire – just like in the old days.
A portable mist shower from the 1970s, pressurized with a bicycle pump. 
Secondly, because of its high water use, a conventional shower needs to be connected to the drain. The mist shower discharges much less water, which makes it possible to take the shower off-pipe and treat the water on site, for example to flush the toilet, water the plants, or clean the pavement. Third, a water supply in the bathroom is not strictly necessary either: a small container could be filled elsewhere and taken to the bathroom.
The Canadian experiments in the 1970s resulted in such a portable mist shower. The water was stored in a Volkswagen window washing reservoir connected to a bicycle pump to pressurize the water. The 2.5 litres of water were pressurized with about twenty strokes of the bicycle pump.  In short, if we switch to mist, the infrastructure that made the modern shower possible can be scaled down and simplified, to such an extent that the bathroom could be taken off-grid and off-pipe even in an urban context, bringing further reductions in the use of water and energy. The same approach could be applied to hand washing and dish washing.
Kris De Decker
 van Thiel, Lisanne. "Watergebruik thuis 2013." TNS NIPO, Amsterdam (2014).
 Shove, E. A. Comfort, Cleanliness and Convenience: the Social Organization of Normality. Berg, 2003.
 Hitchings, Russell, Alison Browne, and Tullia Jack. "Should there be more showers at the summer music festival? Studying the contextual dependence of resource consuming conventions and lessons for sustainable tourism." Journal of Sustainable Tourism 26.3 (2018): 496-514.
 Hand, Martin, Elizabeth Shove, and Dale Southerton. "Explaining showering: A discussion of the material, conventional, and temporal dimensions of practice." Sociological Research Online 10.2 (2005): 1-13.
 If electricity is used, the resulting CO2-emissions of a shower are 0.621 kg in Europe and 0.921 kg in the US. [Overview of electricity production and use in Europe. European Environmental Agency, created 2017, updated 2019] [Assessing the evolution of power sector carbon intensity in the United States, Greg Schivley et al, 2018.] If gas is used, the emissions of a shower amount to between 0.462 kg (for new gas boilers) and 0.714 kg (for older boilers). [Carbon footprint of heat generation, Houses of Parliament.]
 Space Shower Habitability Technology, Arthur Rosener, 1972.
 Water conservation and the mist experience, 1978.
For more than two thousand years, windmills were built from recyclable or reusable materials: wood, stone, brick, canvas, metal. When – electricity producing – wind turbines appeared in the 1880s, the materials didn’t change.
It’s only since the arrival of plastic composite blades in the 1980s that wind power has become the source of a toxic waste product that ends up in landfills.
New wood production technology and design makes it possible to build larger wind turbines almost entirely out of wood again – not just the blades, but also the rest of the structure. This would solve the waste issue and make the manufacturing of wind turbines largely independent of fossil fuels and mined materials. A forest planted in between the wind turbines could provide the wood for the next generation of wind turbines.
Illustration: Eva Miquel for Low-tech Magazine
If we build them out of wood, large wind turbines could become a textbook example of the circular economy.
How Sustainable is a Windmill Blade?
Wind turbines are considered to be a clean and sustainable source of power. However, while they can indeed generate electricity with lower CO2-emissions than fossil fuel power plants, they also produce a lot of waste. This is easily overlooked, because roughly 90% of the mass of a large wind turbine is steel, mainly concentrated in the tower. Steel is commonly recycled and this explains why wind turbines have very short energy payback times – the recycled steel can be used to produce new wind turbine parts, which greatly lowers the energy required during the manufacturing process.
However, wind turbine blades are made from light-weight plastic composite materials, which are voluminous and impossible to recycle. Although the mass of the blades is limited compared to the total mass of a wind turbine, it’s not negligible. For example, one 60 m long fiberglass blade weighs 17 tonnes, meaning that a 5 MW wind turbine produces more than 50 tonnes of plastic composite waste from the blades alone.
A fiberglass reinforced plastic blade. Source: Gurit.
A windmill blade typically consists of a combination of epoxy – a petroleum product – with fiberglass reinforcements. The blades also contain sandwiched core materials, such as polyvinyl chloride foam, polyethylene terephthalate foam, balsa wood (intertwined in fibers and epoxy) and polyurethane coatings. [1-4]
A 5 MW wind turbine contains more than 50 tonnes of unrecyclable plastic in the blades alone.
Unlike the steel in the tower, the plastic in blades cannot be recycled to make new plastic blades. The material can only be “downcycled”, for instance by shredding it, which damages the fibers and makes them useless for anything but a filler reinforcement in cement or asphalt production. Other methods are being investigated, but they all run into the same problem: nobody wants the “recycled” material. Some architects have re-used windmill blades, for example to build benches or playgrounds. But we cannot build everything out of wind turbine blades.
Because of the limited options for recycling and re-use, windmill blades are usually landfilled (in the US) or incinerated (in the EU). The latter approach is hardly more sustainable, because incinerating the blades only partially reduces the amount of material to be landfilled (60% of the scrap remains as ash) and converts the rest into air pollution. Furthermore, because fiberglass is incombustible, the calorific value of the blades is so low that little or no power can be produced. [1-4]
Dealing With Waste – 25 Years Later
Most of the roughly 250,000 wind turbines now in operation worldwide were installed less than 25 years ago, which is their estimated life expectancy. However, the rapid growth of wind power over the last two decades will soon be reflected in a delayed but ever increasing and never-ending supply of waste materials. For example in Europe, the share of installed wind turbines older than 15 years increases from 12% in 2016 to 28% in 2020. In Germany, Spain and Denmark, their share increases to 41-57%. In 2020 alone, these countries will each have to dispose of 6,000 to 12,000 wind turbine blades. 
Old-fashioned windmills had sails made entirely from recyclable materials. Image: Rasbak (CC BY-SA 3.0)
Discarded blades will not only become more numerous but also larger, reflecting a continuous trend towards ever larger rotor diameters. Wind turbines built 25 years ago had blade lengths of around 15-20 m, while today’s blades reach lengths of 75-80 m or more.  Estimates based on current growth figures for wind power have suggested that composite materials from blades worldwide will amount to 330,000 tonnes of waste per year by 2028, and to 418,000 tonnes per year by 2040. 
The rapid growth in wind power over the last two decades will soon be reflected in a delayed but ever increasing and never-ending supply of waste materials.
These are conservative estimates, because numerous blade failures have been reported, and because the constant development of more efficient blades with higher power generating capacity results in blades being replaced well before the end of their estimated lifespan. Furthermore, this amount of waste results from wind turbines installed between 2005 and 2015, when wind power supplied at most 4% of global power demand. If wind were to supply a more desirable 40% of (current) power demand, there would be three to four million tonnes of waste per year.
Windmill Blades Through History
Yet a look at the history of wind power shows that plastic is not an essential material. The use of wind for mechanical power production dates back to Antiquity, and the first electricity generating windmills – now called wind turbines – were built in the 1880s. However, fiberglass blades only took off in the 1980s. For some two thousand years, windmills of whatever type were entirely recyclable.
Old-fashioned windmills had towers built out of wood, stone, or brick. Their “blades” or “sails” were usually made of a wood framework covered with canvas or wood boards. In later centuries, parts were increasingly made from iron, also a recyclable material.
The first wind turbines in Europe, built by Paul La Cour in Denmark, had traditional slatted wooden sails. Image: Paul La Cour Museum.
When new types of sails were invented in the eighteenth and nineteenth centuries (such as spring, patent, and rolling-reefer sails), as well as in the twentieth century (Dekkerized and Bilau sails), the design changed but the materials remained the same (eventually including aluminum).  Furthermore, contrary to modern wind turbines, which need to be replaced regularly and in their entirety, old-fashioned windmills could last for many decades or even centuries through regular repair and maintenance.
A look at the history of wind power shows that plastic is not an essential material.
The first wind turbine in the US, built by Charles F. Brush, had a 17 m diameter annular sail with 144 thin blades made of cedar wood. The first wind turbine in Europe, built by Paul La Cour in Denmark, had four traditional slatted wooden sails with a rotor diameter of 22.8 m. La Cour’s design was copied by local enterprises in Denmark, resulting in thousands of wind turbines operating on Danish farms between 1900 and 1920. Dozens of experimental wind turbines were built during the first half of the twentieth century, including some with steel blades, such as the 1939 Smith-Putnam wind turbine in the US. 
The three-bladed Gedser wind turbine relied on an air frame superstructure for blade stiffening.
In 1957, Johannes Juul – a student of Paul La Cour – built the three-bladed Gedser wind turbine. It had a rotor diameter of 24 m and relied on an air frame superstructure of steel wires for rotor and blade stiffening. The blades were built from steel spars, with aluminium shells supported by wooden ribs.
The Gedser turbine remained the most successful wind turbine until the mid-1980s. It ran for 11 years without maintenance, generating up to 360,000 kWh per year, but was not repaired when a bearing failed. When the turbine was refurbished and tested in the late 1970s, it performed better than the first wind turbines with fiberglass blades. [8-9]
Size Matters
The first wind turbine with fiberglass blades was installed in 1978 in Denmark, where it powered a school. With its 54 m diameter rotor, the Tvind turbine was at the time the largest wind turbine ever built. After 1980, fiberglass blades became standard in Denmark and the “Danish design” was later copied all over the world. The plastic blade, so it seems, is what defines the modern wind turbine. This presents us with a dilemma.
The switch to fiberglass blades was mainly driven by the desire to build larger wind turbines. Larger wind turbines lower the cost per kilowatt-hour of generated electricity, for two reasons: the wind increases with height, and the doubling of the rotor radius increases power output four times. The desire to build larger wind turbines has driven the wind industry ever since. Rotor diameters increased from around 50 m in the 1990s to 120 m in the 2000s. Today’s largest off-shore wind turbines have rotor diameters of more than 160 m, and a 12 MW turbine with a 220 m rotor diameter is being constructed in the Netherlands. 
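The scaling argument can be made concrete with the standard formula for the power a rotor extracts from the wind, P = ½ρAv³Cp. Below is a minimal sketch in Python; the power coefficient of 0.4 and the 10 m/s wind speed are illustrative assumptions, not figures from the article:

```python
from math import pi

def wind_power(rotor_radius_m, wind_speed_ms, cp=0.4, air_density=1.225):
    """Power (in watts) extracted by a rotor: P = 1/2 * rho * A * v^3 * Cp."""
    swept_area = pi * rotor_radius_m ** 2  # area grows with the square of the radius
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * cp

# Doubling the rotor radius quadruples the swept area, and thus the power:
ratio = wind_power(50, 10) / wind_power(25, 10)
print(ratio)  # -> 4.0
```

Because the wind speed enters as a cube, the taller tower of a larger turbine, which reaches faster winds, compounds this advantage.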
Improved windmill blade from the 1940s, built and designed by P.L. Fauel. Image: Rasbak (CC BY-SA 3.0)
However, with increasing size, the mass of the rotor blade also increases, which requires lighter materials. At the same time, larger blades deflect more, so that their structural stiffness is of increasing importance to maintain optimal aerodynamic performance and to avoid the blade hitting the tower. In short, larger wind turbines with longer blades place ever higher demands on the materials used, and these exceed the capacities of recyclable materials. [11-12] Wind turbines have become more efficient, but also less sustainable.
Larger wind turbines with longer blades place ever higher demands on the materials used.
Right now, this trend is illustrated by the increasing use of carbon fiber reinforced plastic, which is even stronger, stiffer and lighter than fiberglass reinforced plastic. The use of carbon fibers – which further complicates potential recycling – has become standard in the largest wind turbine blades, mainly in highly stressed locations such as the blade root or the spar caps. We have thus entered a new era, in which blades are so large that they can no longer be made from fiberglass reinforced composites alone.
Reinventing the Windmill Blade
An industry that calls itself sustainable and renewable cannot send millions of tonnes of plastic waste to landfills each year. Could we revert to building wind turbine blades from recyclable materials alone? And how large could we build them? To what extent can efficiency and sustainability be reconciled?
Improved windmill blade from the 1930s, designed by Kurt Bilau. The tower is made of stone, the sails are made of wood and aluminum. Image: Frank Vincentz (CC BY-SA 3.0).
Most research into the design of more sustainable wind turbine blades sticks with plastic as the main material. Thermoplastics can be melted and re-used, making it possible to recycle the blades into new wind turbine blades, even on-site. However, due to the material’s lower strength and stiffness, these blades have not been built larger than 9 m for now. 
Another area of development is replacing glass fibers with wood or flax fibers. These blades can be larger, but they have only small sustainability advantages over fiberglass-epoxy blades. [14-15] The petroleum-based epoxy is more harmful than the glass fiber, and natural fiber based composite materials absorb more of it. [16-17]
A small wind turbine with solid wooden blades and tower. Image: InnoVentum.
Some engineers and scientists follow different paths and revert to more traditional wood construction. For small wind turbines, blades can be carved out of solid wood. For larger wind turbines, the blades can be composed of a hollow aerodynamic shell and an internal framework of ribs and stringers supported by a beam called the spar – all built from laminated veneer wood boards, beams and panels.
Laminated Veneer Lumber
Laminated veneer lumber – in which the wood is peeled off the tree and then glued back together in thin layers – is a wood product that appeared in the 1980s, and which has an important advantage in relation to solid wood components. The consistency of wood can vary within a single tree. Therefore, the length of the wood spars used in pre-industrial windmills was limited by the availability of large tree trunks of consistent quality.
The largest traditional windmill ever built – the 1900 Murphy mill in San Francisco – had a rotor diameter of 35 m. In contrast, the process of veneering spreads out defects such as knots, giving better and more predictable stiffness properties. This makes it possible to build larger wooden blades.
Patent sails with Dekker leading edges, 1940s. Image: Reboelje.
Wood laminates offer substantial cost and weight reductions as compared to fiberglass. Although the strength and stiffness are lower, much of the load that the blade must support is a consequence of its own weight, so a wood blade doesn’t need to be as strong as a fiberglass blade.  Nevertheless, the low stiffness of wood makes it difficult to limit the elastic deflections for very large rotor blades.
A blade largely made from laminated veneer lumber, but reinforced with carbon composite spars, can be built more than 60 m long.
In a 2017 study of a 5 MW wind turbine with 61.5 m long blades, conducted at UMass Amherst in the US, it was calculated that in order to be stiff enough and withstand the forces it is exposed to, a blade made of laminated wood veneer panels would be 2.8 times heavier than a plastic blade (48 versus 17 tonnes) and would need a laminate more than 50 cm thick. Although this suggests that it’s technically possible to build a wooden blade more than 60 m long, it’s not very practical. With heavier blades, the wind turbine needs to be built much stronger, which increases the costs and the use of resources.
Smaller Wind Turbines?
There are two ways to solve this problem. The first is to design a blade largely made from laminated veneer lumber, but reinforced with carbon composite spars and covered with an outer layer of fiberglass composite. In the above-mentioned study it was found that such a wood-carbon hybrid blade is stiff enough to reach a length of 61.5 m for a 5 MW turbine, and can be built 3 tonnes lighter than a fiberglass blade. Another study of a wood-carbon blade of the same length comes to a similar conclusion, although in this case the wood-carbon blade is slightly heavier than the plastic blade.
Wood-carbon blades contain less plastic composite material, and the plastic is not intertwined with wood throughout the blade but clearly separated from it, making blade re-use, recycling or incineration more attractive. However, according to the studies mentioned above, a wood-carbon blade still contains 2.5 to 6.2 tonnes of plastic composites, meaning that a three-bladed 5 MW wind turbine would produce 7.5 to 18.4 tonnes of unrecyclable waste – compared to roughly 50 tonnes for a turbine with conventional blades.
A laminated wooden blade with carbon spar caps. Source: 
The environmental damage of the carbon-epoxy spars can be viewed as acceptable, if compared to the larger damage done by conventional wind turbine blades. However, the waste problem would not be solved, and further growth in wind power would still result in ever larger waste streams.
Alternatively, we could define sustainability in more ambitious terms, and build wind turbine blades completely out of wood again – even if this means that we have to build them smaller. There’s an extra argument to question our focus on efficiency: the decrease in sustainability not only shows in the blades. Other parts of wind turbines are also increasingly made from plastic composites – most notably the nose cone and the nacelle cover (the housing that protects the drivetrain and the auxiliary equipment from the elements). [1-4]
Other trends are the increased use of electronics, which are not suited for recycling, and of permanent magnet generators based on rare earth materials, which save costs compared to a mechanical gearbox but only at the expense of more destructive mining. Larger wind turbines also kill more birds and bats. 
By sacrificing some efficiency, we could gain a lot in sustainability.
By sacrificing some efficiency, we could gain a lot in sustainability. Wind power advocates may not agree, because it would make wind power less competitive with fossil fuels. However, more expensive wind power can always be counteracted by higher prices for fossil fuels. What’s really problematic is our choice of cheap fossil fuels as the benchmark to determine the viability of wind power. It’s by aiming to compete with fossil fuels – and thus by aiming to provide the energy for a lifestyle built on fossil fuels – that wind turbines have become increasingly damaging to the environment. If we reduced energy demand, smaller and less efficient wind turbines would not be a problem.
The first wind turbine in the US, built by Charles F. Brush, had a 17 m diameter annular sail with 144 thin blades made of cedar wood.
How large could we build practical wind turbine blades from laminated veneer lumber alone? Nobody knows. I asked Rachel Koh, the scientist who calculated the requirements for the 61.5 m wood-only blade, but she couldn’t help me: “I only ran the model for the blades of a 5 MW turbine. It would be hypothetically possible to run another study to answer your question, but it's not a small undertaking”. She also notes that it’s possible to further improve the stiffness of wood laminates with manufacturing innovations.
A Forest of Wind Turbines
Whether we opt for large wood-carbon blades or smaller wood-only blades, in both cases we could also build the tower and the nacelle cover from laminated wood products. In 2012, the German company TimberTower built a 100 m tall laminated wood tower for a 1.5 MW wind turbine. A wooden tower may seem beside the point, because it replaces a part of the wind turbine that’s already perfectly recyclable. However, a wind turbine whose structure is almost completely built out of wood offers extra benefits.
Illustration: Eva Miquel for Low-tech Magazine
Wood could make the production of wind turbines entirely independent of mined materials and of fossil fuels, except for the gearwork and the electric components (but further gains can be achieved, whenever possible, by using wind power for direct mechanical or direct heat production).  Furthermore, wooden wind turbines could become a carbon sink – sequestering CO2 from the atmosphere in their wood components.
Finally, the space between wind turbines on a wind farm, which is not suited as a residential area, should be used to grow a forest that will provide the wood for the next generation of wind turbines. The lumber could be sawed, processed and assembled on-site, which eliminates the energy use associated with the transport of wind turbine parts. The energy required for manufacturing the laminates and for constructing the turbines could come from the windmills themselves, as well as from forest biomass. The wooden wind turbine could become a textbook example of the circular economy.
What about solar panels?
A forthcoming article investigates the sustainability of solar panels. Is toxic and unrecyclable waste inherent to solar PV power? Could we build solar panels using sustainable materials? And what would that mean for the affordability and efficiency of solar power?
Kris De Decker
- Support Low-tech Magazine via PayPal, Patreon or Liberapay.
- Buy the printed website.
- Subscribe to our newsletter.
 Ramirez-Tejeda, Katerin, David A. Turcotte, and Sarah Pike. "Unsustainable Wind Turbine Blade Disposal Practices in the United States: A Case for Policy Intervention and Technological Innovation." NEW SOLUTIONS: A Journal of Environmental and Occupational Health Policy 26.4 (2017): 581-598.
 Wilburn, David R. Wind energy in the United States and materials required for the land-based wind turbine industry from 2010 through 2030. US Department of the Interior, US Geological Survey, 2011.
 Jensen, Jonas Pagh. "Evaluating the environmental impacts of recycling wind turbines." Wind Energy 22.2 (2019): 316-326.
 Martínez, Eduardo, et al. "Life cycle assessment of a multi-megawatt wind turbine." Renewable energy 34.3 (2009): 667-673.
 Ziegler, Lisa, et al. "Lifetime extension of onshore wind turbines: A review covering Germany, Spain, Denmark, and the UK." Renewable and Sustainable Energy Reviews 82 (2018): 1261-1271.
 Lefeuvre, Anaële, et al. "Anticipating in-use stocks of carbon fiber reinforced polymers and related waste flows generated by the commercial aeronautical sector until 2050." Resources, Conservation and Recycling 125 (2017): 264-272.
 De Decker, Kris. "Wind powered factories: history (and future) of industrial windmills." Low-Tech Magazine. Barcelona (2009).
 The Rise of Modern Wind Energy: Wind Power for the World. Pan Stanford Publishing, 2013.
 Lundsager, P., Sten Tronæs Frandsen, and Carl Jørgen Christensen. "Analysis of data from the Gedser wind turbine 1977-1979." (1980).
 Gupta, Ashwani K. "Efficient wind energy conversion: evolution to modern design." Journal of Energy Resources Technology 137.5 (2015): 051201.
 Brøndsted, Povl, Hans Lilholt, and Aage Lystrup. "Composite materials for wind power turbine blades." Annu. Rev. Mater. Res. 35 (2005): 505-538.
 Koh, Rachel. "Bio-based Wind Turbine Blades: Renewable Energy Meets Sustainable Materials for Clean, Green Power." (2017).
 Murray, Robynne, et al. Manufacturing a 9-meter thermoplastic composite wind turbine blade. No. NREL/CP-5000-68615. National Renewable Energy Lab.(NREL), Golden, CO (United States), 2017.
 Borrmann, Rasmus. “Structural design of a wood-CFRP wind turbine blade model.” (2016)
 Spera, David. “Wind Turbine Technology: Fundamental Concepts in Wind Turbine Engineering, Second Edition.” (2009)
 Corona, Andrea, et al. "Comparative environmental sustainability assessment of bio-based fibre reinforcement materials for wind turbine blades." Wind Engineering 39.1 (2015): 53-63.
 The use of wood for wind turbine construction. Meade Gougeon, NASA.
 De Decker, Kris. "Heat your house with a mechanical windmill." Low-Tech Magazine. Barcelona (2019).
 Loss, Scott R., Tom Will, and Peter P. Marra. "Estimates of bird collision mortality at wind facilities in the contiguous United States." Biological Conservation 168 (2013): 201-209.
Building them out of wood addresses these issues. Because of their aesthetic appeal, and thanks to the ability to produce them locally, small wooden wind turbines can also improve the public acceptance of wind power.
Furthermore, innovation in tower design facilitates the installation of small wind turbines, reducing the need for concrete foundations and heavy machinery.
Image: A wind turbine with wooden blades. Source: EAZ Wind.
Low Performance
Tests have shown that commercially available small wind turbines may not always generate sufficient power over their lifetime to compensate for the energy that was needed to produce them. There are three reasons why this is so. First, there are the laws of physics. The energy yield of a wind turbine increases faster than its height and rotor size, meaning that as a wind turbine becomes smaller, its power output decreases more than proportionally.
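This physical disadvantage can be sketched with the commonly used wind shear power law, v(h) = v_ref · (h/h_ref)^α, combined with the cubic dependence of power on wind speed. The 5 m/s reference speed and the shear exponent of 0.14 (typical for open terrain) are illustrative assumptions:

```python
def wind_speed_at_height(v_ref_ms, h_m, h_ref_m=10.0, alpha=0.14):
    """Wind shear power law: wind speed increases with height above the ground."""
    return v_ref_ms * (h_m / h_ref_m) ** alpha

def relative_power(v_ms):
    """Power output scales with the cube of the wind speed."""
    return v_ms ** 3

v_small = wind_speed_at_height(5.0, 10)   # hub height of a small turbine
v_large = wind_speed_at_height(5.0, 100)  # hub height of a large turbine
print(relative_power(v_large) / relative_power(v_small))  # roughly 2.6x
```

On top of this height penalty, the swept area shrinks with the square of the rotor radius: halving the rotor diameter alone cuts the output to a quarter.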
Second, wind turbine blades are commonly made from fiberglass reinforced plastic, which is energy-intensive to produce (and impossible to recycle). This energy needs to be “paid back” during the lifetime of the wind turbine, which can be challenging for machines with small rotor diameters.
Third, the maintenance of small wind turbines depends on the ability of the manufacturer to remain in business and provide its customers with spare parts. Unlike solar panels, wind turbines have a lot of moving parts and are thus more likely to need repairs. However, suppliers of small wind turbines tend to have an even shorter life expectancy than their products.
Hand Carved Wood Blades
The laws of physics can’t be changed, but on their own they don’t make small wind turbines uneconomical and unsustainable. It’s the other two factors that are decisive, and these can be addressed. In fact, they have been addressed for more than two decades by Scottish engineer Hugh Piggott, who builds small 1-2 kW wind turbines with 2-4 meter rotor diameters using solid wooden blades. 
Hand carved wooden blades. Source: 
The blades are hand carved locally with basic woodworking skills and tools. In contrast to fiberglass blades, little or no energy is used to produce them. This increases the chance that the wind turbine will produce more energy over its lifetime than was needed to make it.
Defying the usual focus on efficiency, Piggott’s wind turbines sacrifice peak power for more reliable operation. The machines use a furling system that limits the turbine’s power intake at wind speeds above 8 m/s (Beaufort 5), while most commercial models keep working up to higher wind speeds. Limiting the turbine at lower wind speeds increases reliability, because the faster the machine spins, the quicker its parts wear out.
Local Manufacturing
A comparison of Piggott’s wind turbines with commercially available models concluded that the increased energy yield generated by the latter at wind speeds above 8 m/s is largely wasted, because most of the extra power is generated when the batteries are already full. The study also revealed that the Scottish design is about 20% cheaper, taking into account both capital and operational costs. 
Wooden wind turbines in Nepal. Source: 
Piggott’s open source design has spawned thousands of small DIY wind turbines all over the world. It also became the basis for several wind-based rural electrification initiatives in Mongolia, Nepal, Peru and Nicaragua. [4-7] In “developing” countries, the ability to manufacture and maintain the turbines locally is a great advantage over the use of commercial wind turbines or solar panels.
Commercial Wind Turbines with Wood Blades
The use of solid wood blades, once common for smaller windmills and wind turbines, has seen renewed interest lately. [8-9] Most notable is the success story of the Dutch company EAZ Wind, founded in 2014 by four young windsurfers. The firm, which now has over 40 employees, sells wind turbines with solid wooden blades to farms and energy cooperatives in the region. With a rotor diameter of 12 m and a power output of 10 kW, the turbines are about five times larger than Piggott’s machines.
Wind turbine with wooden blades, built by EAZ Wind.
The blades are made from solid wood beams that are glued together and then sanded to obtain their shape. They are then covered with an epoxy coating to protect them from humidity, while the sharp side of the blade gets a strip of fiberglass reinforced plastic to make it more durable.
The wind turbines, installed on 15 m tall towers, produce roughly 30,000 kWh of electricity per year, which corresponds to the power use of ten Dutch households. A machine sells for 46,000 euro – 4,600 euro per household, or less than half the price of a comparable solar PV system. The financial payback time – in the windy northern Netherlands – is 7 to 10 years.
Public Acceptance
Interestingly, EAZ Wind’s choice of wooden blades is not driven by the aim to lower the embodied energy of the wind turbine. Rather, the company’s mission is to make the countryside – especially farms but also small villages – self-sufficient in terms of power production by designing more beautiful and locally produced wind turbines that people don’t complain about. As in many other countries, large wind turbines – and the transmission lines that go with them – raise a lot of opposition from local residents in the Netherlands.
Installing a wind turbine. Image: EAZ Wind.
The approach seems to work. When a farm installs a wind turbine, its neighbours are usually the next customers. EAZ Wind has sold more than 400 wind turbines by now. Public acceptance of wind power seems to be encouraged by two factors. First, wind turbines with wooden blades have a more natural look, increasing their aesthetic appeal.
Second, the machines are produced locally, meaning that the purchase of a wind turbine supports the local economy. The wood for the blades comes from a nearby province and is processed by companies in the region.
Wooden Towers
The turbines from EAZ Wind have wooden blades, but steel towers. The Swedish company InnoVentum takes a different approach: its wind turbines have a wooden tower, while the blades are made from plastic. The 12 m or 20 m tall towers are of a unique design, composed of small wood modules that can be bolted together on the ground in a few hours.
Innoventum's wooden wind turbine tower.
The multi-leg towers require no -- or much less -- concrete for their foundations and they can be erected without the use of a crane, using a rope and a winch instead. Around fifteen have been installed since 2012. Like EAZ Wind, the company aims to create a new aesthetic that may help increase the acceptance of wind turbines.
Innoventum's wooden wind turbine tower.
Of course, both approaches could be combined, resulting in small wind turbines with wooden blades, tower and other structural parts. A small wind turbine that’s almost completely built out of wood – minus the gearwork and the generator – further decreases the energy that’s needed to produce it, thus making it more economical and sustainable over its entire lifetime.
In terms of carbon emissions, a small wooden wind turbine can even be considered a carbon sink, because the wood sequesters CO2 that the trees have taken from the atmosphere.
Combining Wind and Solar
The newest products from both EAZ Wind and InnoVentum incorporate solar panels at the base of the structure. Because the wind turbine and the solar PV system can share the same support structure, electrical system, and energy storage, this approach saves money and resources. The combination of solar and wind also increases the chances of sufficient power output at any time, reducing the need for energy storage – which is the most unsustainable part of an off-the-grid power installation.
In the hybrid solar-wind model from EAZ Wind, the capacity of the wind turbine is double the capacity of the solar PV panels, reflecting the local climate (windy but not very sunny). The addition of solar panels increases the power yield to 45,000 kWh per year, which corresponds to the power demand of 14 Dutch households. However, the use of solar panels increases the embodied energy of the system considerably, so that it may no longer be a carbon sink.
Image: InnoVentum.
Decentralised Power Production
Small wooden wind turbines offer additional benefits that are inherent to all decentralised power sources. The fact that they're paid for by the same people who enjoy their benefits increases their public acceptance. They also eliminate the need for transmission lines, and the more power is produced and used locally, the less challenging it becomes to integrate unpredictable wind power into the central grid. Last but not least, the close connection between energy supply and demand encourages lower-energy ways of life.
Kris De Decker
- Support Low-tech Magazine via PayPal, Patreon or Liberapay.
- Buy the printed website.
- Subscribe to our newsletter.
 Kostakis, Vasilis, et al. "The convergence of digital commons with local manufacturing from a degrowth perspective: two illustrative cases." Journal of Cleaner Production 197 (2018): 1684-1693.
How to build a wind turbine. Hugh Piggott, 2003.
 Sumanik-Leary, Jon, et al. "Locally manufactured small wind turbines: how do they compare to commercial machines." Proceedings of 9 th PhD Seminar on Wind Energy in Europe. 2013.
 Mishnaevsky, Leon, et al. "Materials for wind turbine blades: an overview." Materials 10.11 (2017): 1285.
 Mishnaevsky Jr, Leon, et al. "Strength and reliability of wood for the components of low-cost wind turbines: computational and experimental analysis and applications." Wind Engineering 33.2 (2009): 183-196.
 Mishnaevsky Jr, Leon, et al. "Small wind turbines with timber blades for developing countries: Materials choice, development, installation and experiences." Renewable Energy 36.8 (2011): 2128-2138.
 Sinha, Rakesh, et al. "Selection of Nepalese timber for small wind turbine blade construction." Wind Engineering 34.3 (2010): 263-276.
 Clausen, P. D., F. Reynal, and D. H. Wood. "Design, manufacture and testing of small wind turbine blades." Advances in wind turbine blade design and materials. Woodhead Publishing, 2013. 413-431.
 Pourrajabian, Abolfazl, et al. "Choosing an appropriate timber for a small wind turbine blade: A comparative study." Renewable and Sustainable Energy Reviews 100 (2019): 1-8.
Image: InnoVentum.
After 12 years, Low-tech Magazine finally makes the jump from web to paper. The first result is a 710-page perfect-bound paperback which is printed on demand and contains 37 of the most recent articles from the website (2012 to 2018). A second volume, collecting articles published between 2007 and 2011, will appear later this year.
Book Design
The books are based on the same electronic documents that make up the solar powered website of Low-tech Magazine -- all articles were converted to Markdown, a lightweight markup language based on plain text files. Therefore, the content is almost identical.
Both the books and the website use dithered images, albeit for different reasons. On the solar powered website, dithered images reduce page size and thus energy use. In the book, dithering makes it possible to also include low resolution images. The first volume contains a selection of 159 illustrations.
Why Paper?
The books can be read when the solar powered website is down due to bad weather. In fact, the content can be viewed with no access to a computer, a power supply, or an industrial civilization. A printed website also serves to preserve the content of Low-tech Magazine in the longer run. Websites don’t live forever, and the internet should not be taken for granted.
Print on Demand
Printing is done on demand, meaning that there are no unsold copies (and no large upfront investment costs). Our US publisher Lulu.com works with printers all over the world, so that most copies are produced locally and travel relatively short distances.
The first book sells for $25.20, which converts to 23.80 euro at the current exchange rate. Delivery rates (for books ordered through Lulu) vary by country, but if one accepts the longest delivery times (up to 11 working days), costs can be as low as $3. Note that it also takes 3 to 5 working days to print the book.
Book design by Lauren Traugott-Campbell. Book images by Adriana Parra.
Low-tech Magazine 2012-2018, Kris De Decker, ISBN 9780359478330, 710 pp., March 2019.
Renewable energy production is almost entirely aimed at the generation of electricity. However, we use more energy in the form of heat, which solar panels and wind turbines can supply only indirectly and inefficiently.
A solar thermal collector skips the conversion to electricity and supplies renewable thermal energy in a direct and efficient way. Much less known is that a mechanical windmill can do the same in a windy climate -- by oversizing its brake system, a windmill can generate lots of direct heat through friction.
Illustration: Rona Binay for Low-tech Magazine.
Given the right conditions, a mechanical windmill with an oversized brake system is a cheap, effective, and sustainable heating system.
Heat versus Electricity
On a global scale, thermal energy demand corresponds to one third of the primary energy supply, while electricity demand corresponds to only one fifth. In temperate or cold climates, the share of thermal energy is even higher. For example, in the UK, heat accounts for almost half of total energy use. If we only look at households, thermal energy for space and water heating in temperate and cold climates can be 60-80% of total domestic energy demand.
In spite of this, renewable energy sources play a negligible role in heat production. The main exception is the traditional use of biomass for cooking and heating – but in the “developed” world even biomass is often used to produce electricity instead of heat. Direct solar heat and geothermal heat provide less than 1% and 0.2% of global heat demand, respectively. While renewable energy sources account for more than 20% of global electricity demand (mostly hydroelectric), they account for only 10% of global heat demand (mostly biomass).
Direct versus Indirect Heat Production
Electricity produced by renewable energy sources can be – and is being – converted to heat in an indirect way. For example, a wind turbine converts its rotational energy into electricity by the use of its electrical generator, and this electricity can then be converted into heat using an electric heater, an electric boiler, or an electric heat pump. The result is heat generated by wind energy.
In particular, the electric heat pump is promoted by many governments and organisations as a sustainable solution for renewable heat generation. However, solar and wind energy can also be used in a direct way, without converting them to electricity first – and of course the same applies to biomass. Direct heat production is cheaper, more energy efficient, and more sustainable than indirect heat production.
Prototypes of heat generating windmills, built by Esra L. Sorensen in 1974. Photo by Claus Nybroe. Source: 
The direct alternative for solar photovoltaic power is solar thermal power, a technology that appeared in the nineteenth century following cheaper production technologies for glass and mirrors. Solar thermal energy can be used for water heating, space heating or industrial processes, and this is 2-3 times as energy efficient as the indirect path involving electricity conversion.
Almost nobody knows that a windmill can produce heat directly.
The direct alternative for wind power that everybody knows is the old-fashioned windmill, which is at least 2,000 years old. It transferred the rotational energy of its wind rotor directly to the axis of a machine, for example for sawing wood or grinding grain. This old-fashioned approach remains relevant, also in combination with new technology, because it is more energy efficient than first converting the energy to electricity, and then back to rotational energy.
However, an old-fashioned windmill can provide not only mechanical energy, but also thermal energy. The problem is that almost nobody knows this. Even the International Energy Agency doesn't mention the direct conversion of wind into heat when it presents all possible options for renewable heat production.
The Water Brake Windmill
Heat generating windmills convert rotational energy directly into heat by generating friction in water, using a so-called “water brake” or “Joule Machine”. A heat generator based on this principle is basically a wind-powered mixer or impeller installed in an insulated tank filled with water. Due to friction between the water molecules, mechanical energy is converted into heat energy. The heated water can be pumped into a building for heating or washing, and the same concept could be applied to industrial processes in a factory that require relatively low temperatures.
The Joule Machine was originally conceived as a measuring apparatus. James Joule built it in the 1840s for his famous measurement of the mechanical equivalent of heat: the amount of mechanical work needed to raise the temperature of one gram (one cubic centimeter) of water by 1 degree Celsius – one calorie of heat.
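Joule's principle makes it easy to estimate how quickly a water brake warms its tank. A back-of-the-envelope sketch in Python – the power, runtime and tank size below are illustrative values, not measurements from any of the windmills described here:

```python
SPECIFIC_HEAT_WATER = 4186  # J/(kg*K)

def temperature_rise(power_w, hours, water_kg):
    """Kelvin gained by the water, assuming all shaft power
    ends up as heat and the tank is perfectly insulated."""
    energy_j = power_w * hours * 3600
    return energy_j / (water_kg * SPECIFIC_HEAT_WATER)

# A 3.5 kW water brake running for 10 hours into a 1,000 litre tank:
print(round(temperature_rise(3500, 10, 1000), 1), "K")  # ~30 K
```

In other words, a modest water brake running through a windy night is enough to bring a sizeable tank from cold to comfortably hot.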
A heat generator based on this principle is basically a wind-powered mixer or impeller installed into an insulated tank filled with water
The most fascinating thing about water brake windmills is that, hypothetically, they could have been built hundreds or even thousands of years ago. They require simple materials: wood and/or metal. But although we cannot exclude their use in pre-industrial times, the first reference to heat producing windmills dates from the 1970s, when the Danes started building them in the wake of the first oil crisis.
At the time, Denmark was almost entirely dependent on imported oil for heating, which left many households in the cold when the oil supply was disrupted. Because the Danes already had a strong DIY culture of small wind turbines generating electricity on farms, they started building windmills to heat their houses. Some chose the indirect path, converting wind generated electricity into heat using electric heating appliances. Others, however, developed mechanical windmills that produced heat directly.
Cheaper to Build and More Efficient to Operate
The direct approach to heat production is considerably cheaper and more sustainable than converting wind or solar generated electricity into heat by using electric heating devices, including an electric heat pump. There are two reasons for this.
First, mechanical windmills are less complex, which makes them more affordable and less resource-intensive to build, and which increases their lifetime. In a water brake windmill, the electric generator, power converters, transformer and gearbox can be omitted, and because of the weight savings, the windmill needs to be less sturdily built. The Joule Machine has lower weight, smaller size, and lower cost than an electrical generator. Equally important, the cost of thermal storage is 60-70% lower than that of batteries or backup thermal power plants.
A water brake windmill built at the Institute for Agricultural Techniques in 1974. Photo by Ricard Matzen. Source: 
Second, converting wind or solar energy directly into heat (or mechanical energy) is more energy efficient than when electric conversion is involved. This means that less solar and wind energy converters – and thus less space and resources – are needed to supply a certain amount of heat. In short, the heat generating windmill addresses the main disadvantages of wind power: its low power density, and its intermittency.
Mechanical windmills are less complex, which makes them more affordable and less resource-intensive to build, and which increases their lifetime
Furthermore, direct heat generation greatly improves the economics and the sustainability of smaller types of windmills. Tests have shown that small wind turbines producing electricity are very inefficient and don't always generate as much energy as is needed to produce them. However, using similar models for heat production decreases embodied energy and costs, increases lifetime, and improves efficiency.
How Much Heat Can a Windmill Produce?
The Danish water brake windmill from the 1970s was a relatively small machine, with a rotor diameter of around 6 meters and a height of around 12 meters. Larger heat generating windmills were built in the 1980s. Most used simple wooden blades. In total, at least a dozen different models have been documented, both DIY and commercial models.  Many were built with used car parts and other discarded materials. 
One of the smaller early Danish heat generating windmills was officially tested. The Calorius type 37 – which had a rotor diameter of 5 meters and a height of 9 meters – produced 3.5 kilowatt of heat at a wind speed of 11 m/s (a strong breeze, Beaufort 6). This is comparable to the heat output of the smallest electric boilers for space heating. From 1993 to 2000, the Danish firm Westrup built a total of 34 water brake windmills based on this design, and by 2012 there were still 17 in operation. 
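These figures allow a rough sanity check: dividing the measured heat output by the kinetic energy the wind carries through the rotor area gives the machine's overall power coefficient. A short Python sketch, assuming sea-level air density – the calculation is illustrative, not taken from the Danish test report:

```python
import math

AIR_DENSITY = 1.225  # kg/m^3 at sea level

def power_coefficient(heat_w, diameter_m, wind_speed_ms):
    """Fraction of the wind's kinetic power converted into heat:
    Cp = P / (0.5 * rho * A * v^3)."""
    area = math.pi * (diameter_m / 2) ** 2
    wind_power = 0.5 * AIR_DENSITY * area * wind_speed_ms ** 3
    return heat_w / wind_power

# The Calorius type 37: 3.5 kW of heat, 5 m rotor, 11 m/s wind
print(round(power_coefficient(3500, 5, 11), 2))  # ~0.22
```

A value around 0.22 is entirely plausible for a simple rotor driving a water brake; the theoretical maximum (the Betz limit) is about 0.59.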
A Calorius windmill producing up to 4 kW of heat. Image provided by the Nordic Folkecenter in Denmark.
A much larger water brake windmill (7.5m rotor diameter, 17m tower) was built in 1982 by the Svaneborg brothers, and heated the house of one of them (the other brother opted for a wind turbine and an electric heating system). The windmill, which had three fiberglass blades, produced up to 8 kilowatt of heat according to non-official measurements – comparable to the heat output of an electric boiler for a modest home. 
Further into the 1980s, Knud Berthou built the most sophisticated heat generating windmill to date: the LO-FA. In other models, heat generation happened at the bottom of the tower – from the top of the windmill there was a shaft down to the bottom where the water brake was installed. However, in the LO-FA windmill all mechanical parts for energy conversion were moved to the top of the tower. The lower 10 meters of the 20 meter high tower were filled up with 15 tonnes of water in an insulated reservoir. Consequently, hot water could literally be tapped out of the windmill. 
The tower of the LO-FA windmill was filled up with 15 tonnes of water in an insulated tank: hot water could literally be tapped out of the windmill.
The LO-FA was also the largest of the heat generating windmills, with a 12 meter diameter rotor. Its heat output was estimated to be 90 kilowatt at a wind speed of 14 m/s (Beaufort 7). This result seems excessive compared to the other heat generating windmills, but the power output of a windmill increases with the square of the rotor diameter and the cube of the wind speed. Furthermore, the friction liquid in the water brake was not water but hydraulic oil, which can be heated to much higher temperatures. The oil then transferred its heat to the water storage in the tower.
Renewed Interest
Interest in heat generating windmills resurfaced a few years ago, although for now it concerns only a handful of scientific studies. In a 2011 paper, German and UK scientists write that “small and remote households in northern regions demand thermal energy rather than electricity, and therefore wind turbines in such places should be built for thermal energy generation”.
The researchers explain and illustrate the workings of the water brake windmill, and calculate the optimal performance of the technology. It was found that the torque-speed characteristics of wind rotor and impeller should be carefully matched to achieve maximum efficiency. For example, for the very small Savonius windmill that the scientists used as a model (0.5m rotor diameter, 2m tower), it was calculated that the impeller diameter should be 0.388m.
The researchers then ran simulations over a period of fifty hours to calculate the windmill’s heat output. Although the Savonius is a low speed windmill which is ill-suited for electricity generation, it turns out to be an excellent producer of heat: the small windmill produced up to 1 kW of thermal power (at wind speeds of 15 m/s).  A 2013 study using a prototype obtained similar results, and calculated the efficiency of the system to be 91%. 
A 2013 study using a prototype calculated the efficiency of the system to be 91%
Obviously, it’s not always stormy weather, which means that the average wind speed is at least as important. A 2015 study investigates the possibilities of heat generating windmills in Lithuania, a Baltic country with a cold climate that’s dependent on expensive fuel imports. The researchers calculated that at the average wind speed in the country (4 m/s, Beaufort 3), generating one kilowatt of heat requires a windmill with a rotor diameter of 8.2 meters.
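The study's figure can be reproduced by inverting the standard wind power formula for the rotor diameter. Note that the power coefficient of 0.48 used below is an assumption chosen to match the 8.2 meter result; it is optimistic for a simple rotor, so the study presumably assumed a well-matched modern design:

```python
import math

AIR_DENSITY = 1.225  # kg/m^3 at sea level

def rotor_diameter_m(heat_w, wind_speed_ms, cp=0.48):
    """Rotor diameter needed for a given heat output, inverting
    P = 0.5 * Cp * rho * (pi * D^2 / 4) * v^3."""
    return math.sqrt(8 * heat_w / (cp * AIR_DENSITY * math.pi * wind_speed_ms ** 3))

# One kilowatt of heat at a 4 m/s average wind speed:
print(round(rotor_diameter_m(1000, 4), 1), "m")  # ~8.2 m
```

The cube of the wind speed dominates the result: at 8 m/s, the same kilowatt would need a rotor of less than 3 meters.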
A heat generating windmill with a water brake, placed inside the bottom of the tower. The mill was built by Jorgen Andersen in 1975, and stood in Serritslev. Photo by Claus Nybroe. Source: 
They compare this with the thermal energy demand of a 120 m² energy efficient new building, heated to modern comfort standards, and conclude that a heat generating windmill could cover 40-75% of the annual heating needs (depending on the energy efficiency class of the construction).
Heat Storage
The average wind speed is not guaranteed either, which means that a heat generating windmill requires heat storage – otherwise it would only provide heating when the wind blows. One cubic meter of heated water (1 ton, 1,000 liters) can hold up to 90 kWh of heat, which is roughly one to two days of supply for a household of four persons.
The same windmill as the one pictured above, seen from below. Source: 
Providing enough storage to bridge a week without wind thus requires up to 7 tonnes of water, which corresponds to a volume of 7 cubic meters plus insulation. However, energy losses (self-discharge) should also be taken into account, and this explains why the Danish heat generating windmills usually had a storage tank holding ten to twenty thousand liters of water. 
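The sizing logic can be sketched as follows. The 90 kWh per cubic meter figure comes from the article; the daily demand and the 5% per day self-discharge rate are assumptions for illustration:

```python
WATER_KWH_PER_M3 = 90  # usable heat per cubic meter of hot water

def tank_volume_m3(daily_demand_kwh, days, daily_loss_frac=0.05):
    """Water volume needed to bridge `days` without wind,
    compounding the tank's daily self-discharge."""
    kwh = 0.0
    for _ in range(days):
        # heat stored for later days must also cover what leaks away
        kwh = kwh / (1 - daily_loss_frac) + daily_demand_kwh
    return kwh / WATER_KWH_PER_M3

print(round(tank_volume_m3(90, 7, 0.00), 1))  # 7.0 m^3 without losses
print(round(tank_volume_m3(90, 7, 0.05), 1))  # 8.2 m^3 with 5%/day losses
```

Even a modest loss rate adds more than a cubic meter; poorly insulated tanks, longer calm spells and higher demand push the volume further toward the ten to twenty thousand liters used in practice.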
A heat generating windmill can be combined with a solar boiler, so that both sun and wind can supply direct thermal energy using a smaller water tank.
A heat generating windmill can also be combined with a solar boiler, so that both sun and wind can supply direct thermal energy using the same heat storage reservoir. In this case, it becomes possible to build a pretty reliable heating system with a smaller heat storage tank, because the combination of two – often complementary – energy sources increases the chances of direct heat supply. Especially in less sunny climates, heat generating windmills are a great addition to a solar thermal system, because the latter produces relatively little heat during winter, when heat demand is at its maximum.
Retarders and Mechanical Heat Pumps
The most recent and extensive studies to date are from 2016 and 2018, and compare different types of heat generating windmills with different types of indirect heat generation.   In this case, the windmills no longer make use of the original water brake, but produce heat with mechanical heat pumps or hydrodynamic retarders. A mechanical heat pump is simply a heat pump without the electric motor – instead, the wind rotor is directly connected to the compressor(s) of the heat pump, involving one less energy conversion.
The hydrodynamic retarder is well known as a brake system in heavy vehicles. Like a Joule Machine, it converts rotational energy into heat without the involvement of electricity. Retarders and mechanical heat pumps have the same advantages as Joule Machines: they are much smaller, lighter, cheaper and more efficient than electrical generators. However, in this case a gearbox is required to achieve optimal efficiency.
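The key difference between the devices is how much heat they deliver per unit of shaft power. A Joule Machine or retarder converts shaft work into heat roughly one-to-one, while a heat pump also draws heat from the environment, multiplying the shaft work by its coefficient of performance. A minimal sketch – the COP of 3 is an assumed, typical value for a low-temperature heat pump, not a figure from the studies:

```python
def heat_output_kw(shaft_kw, device):
    """Heat delivered per unit of shaft power. Friction devices
    convert shaft work into heat 1:1; a heat pump adds ambient
    heat, delivering COP times the shaft work."""
    cop = {"joule_machine": 1.0, "retarder": 1.0, "heat_pump": 3.0}
    return shaft_kw * cop[device]

print(heat_output_kw(3.5, "retarder"))   # 3.5 kW of heat
print(heat_output_kw(3.5, "heat_pump"))  # 10.5 kW of heat
```

This multiplication effect helps explain why the mechanical heat pump comes out cheapest in the cost comparisons below: the same rotor delivers several times more heat.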
Different types of direct and indirect heating production compared. Source: 
The studies compare heat generating windmills based on retarders and mechanical heat pumps with indirect heat production using electric boilers and electric heat pumps. They compare these four technologies for three system sizes: a small windmill aimed at heating an off-the-grid household, a large windmill aimed at supplying heat to a village, and a wind farm producing heat for 20,000 inhabitants. The four heating concepts are ranked based on their yearly capital and operational expenditures, assuming a lifespan of 20 years.
Directly coupling a mechanical windmill to a mechanical heat pump is cheaper than using a gas boiler or the combination of a wind turbine and an electric heat pump.
For the off-grid system, directly coupling a mechanical windmill to a mechanical heat pump is the cheapest option, while the combination of a wind turbine and an electric boiler is the most expensive one. All other technologies are in between. Taking into account both investment and operational costs, small-scale heat generating windmills with mechanical heat pumps are equally expensive or cheaper than conventional gas boilers when assuming the typical performance of a small windmill (which produces – over a period of one year – 12% to 22% of its maximum energy output).
Image: Water brake windmill developed by O. Helgason (left), water brake with variable load system (right). Images from “Test at very high wind speed of a windmill controlled by a water brake”, O. Helgason and A.S. Sigurdson, Science Institute, University of Iceland. Source: 
On the other hand, the combination of a small wind turbine and an electric heat pump requires a windmill with a “capacity factor” of at least 30% to become cost-competitive with gas heating – but such high performance is very unusual. Larger systems show the same ranking – the combination of mechanical windmills and mechanical heat pumps is the cheapest option – but they have up to three times lower capital costs due to economies of scale. Larger windmills also have higher capacity factors (16-40%), which results in even larger cost savings.
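The capacity factor translates directly into yearly heat output: it is simply the fraction of the rated power that the windmill delivers on average over a year. A minimal sketch, using a hypothetical 10 kW machine:

```python
HOURS_PER_YEAR = 8760

def annual_heat_mwh(rated_kw, capacity_factor):
    """Yearly heat output from rated power and capacity factor."""
    return rated_kw * capacity_factor * HOURS_PER_YEAR / 1000

# A hypothetical 10 kW heat generating windmill at typical capacity factors:
for cf in (0.12, 0.22, 0.30):
    print(f"CF {cf:.0%}: {annual_heat_mwh(10, cf):.1f} MWh/year")
```

The gap between a 12% and a 30% capacity factor is a factor of 2.5 in yearly output, which is why the capacity factor decides whether a given concept beats gas heating.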
Due to the large energy losses for heat transportation, the heat generating windmill is at its best as a decentralised energy source, providing heat to an off-the-grid household or – in the optimal case – a small city.
However, larger systems also reveal a problem when scaling up the technology: storing heat may be cheaper and more efficient than storing electricity, but the opposite holds true for transportation: the energy losses of heat transport are much larger than those of electricity transmission. The scientists calculate that the maximum distance over which heat transport remains cost-effective, even under optimal wind conditions, is 50 km.
Consequently, the heat generating windmill is at its best as a decentralised energy source, providing heat to an off-the-grid household or – in the optimal case – a relatively small town or city. For even larger systems, energy needs to be transported in the form of electricity, and in that case direct generation of heat – with all its benefits – becomes unattractive.
Blinded by Electricity
Heat generating windmills are also being investigated for renewable electricity production, mainly because they offer a better solution for energy storage than batteries or other common technologies. In these systems, the generated heat is converted to electricity by a steam turbine. The storage system is similar to that of a concentrated solar power (CSP) plant, with the solar concentrators replaced by heat generating windmills.
An "eddy current heater". Source: 
Because high temperatures are needed to produce electricity efficiently with a steam turbine, these systems can’t make use of Joule Machines or hydrodynamic retarders, but instead rely on a type of retarder called an “eddy current heater” (or “induction heater”). These consist of magnets mounted on a rotating shaft, and can reach temperatures of up to 600 degrees Celsius. Using eddy current heaters, windmills could provide direct heat at higher temperatures, making their potential use in industry even larger.
However, using the stored heat for electricity production is considerably more costly and less sustainable than using heat generating windmills for direct heat production. Converting the stored heat into electricity is at most 30% efficient, meaning that two thirds of the wind energy is lost due to needless energy conversions – and the same is true when solar thermal is used for power production.
Direct heat production thus offers the possibility to save three times more greenhouse gas emissions and fossil fuels using the same number of windmills, which are also cheaper and more sustainable to build. Hopefully, direct heat production will be given the priority it deserves. Despite a warming climate, the demand for thermal energy is as high as ever.
Kris De Decker
- Ditch the batteries: off-grid compressed air energy storage
- Keeping some of the lights on: redefining energy security
- How to run the economy on the weather
- Restoring the old way of warming: heating people, not places
- Back to basics: direct hydro-power
- Medieval smokestacks: fossil fuels in preindustrial times
- The bright future of solar thermal powered factories
- Wind powered factories: history and future of industrial windmills
- Nitto, Alejandro Nicolás, Carsten Agert, and Yvonne Scholz. "Wind Powered Thermal Energy Systems (WTES)".
- Integration of Thermal Energy Storage into Energy Network, Sharyar Ahmed, 2017
- The bright future of solar thermal powered factories, Kris De Decker, Low-tech Magazine, 2011
- Solar Heat Worldwide, edition 2018, International Energy Agency (IEA).
- Renewables 2018, Heat, International Energy Agency (IEA).
- World Bank: Renewable electricity output.
- The Rise of Modern Wind Energy: Wind Power for the World. Pan Stanford Publishing, 2013. See chapter 13 ("Water brake windmills", Jørgen Krogsgaard) and chapter 16 ("Consigned to Oblivion", Preben Maegaard). These seem to be the only English language documents on Danish water brake windmills.
- Chakirov, Roustiam, and Yuriy Vagapov. "Direct conversion of wind energy into heat using joule machine." Fourth International Conference on Environmental and Computer Science (ICECS 2011), Singapore, Sept. 2011.
- Sobor, Ion, Vasile Rachier, Andrei Chiciuc, and Rodion Ciupercă. "Small Wind Energy System with Permanent Magnet Eddy Current Heater". Buletinul Institutului Politehnic din Iaşi, Universitatea Tehnică „Gheorghe Asachi” din Iaşi, Tomul LIX (LXIII), Fasc. 4, 2013.
- Joule’s experiment: An historico-critical approach, Marcos Pou Gallo.
- Okazaki, Toru, Yasuyuki Shirai, and Taketsune Nakamura. "Concept study of wind power utilizing direct thermal energy conversion and thermal energy storage." Renewable energy 83 (2015): 332-338.
- Real-world tests of small wind turbines in Netherlands and the UK, Kris De Decker, The Oil Drum, 2010.
- Selfbuilders, Winds of Change website, Erik Grove-Nielsen.
- Černeckienė, Jurgita, and Tadas Ždankus. "Usage of the Wind Energy for Heating of the Energy-Efficient Buildings: Analysis of Possibilities." Journal of Sustainable Architecture and Civil Engineering 10.1 (2015): 58-65.
- Cao, Karl-Kiên, et al. "Expanding the horizons of power-to-heat: Cost assessment for new space heating concepts with Wind Powered Thermal Energy Systems." Energy 164 (2018): 925-936.
As a society depends more on energy sources for its daily functioning, it becomes more vulnerable if the supply of energy is interrupted. This obvious fact is ignored in current strategies to achieve energy security, making them counter-productive.
Image: Camilla MP.
What is Energy Security?
What does it mean for a society to have “energy security”? Although there are more than forty different definitions of the concept, they all share the fundamental idea that energy supply should always meet energy demand. This also implies that energy supply needs to be constant – there can be no interruptions in the service. [1-4] For example, the International Energy Agency (IEA) defines energy security as “the uninterrupted availability of energy sources at an affordable price”, the UK Department of Energy and Climate Change (DECC) defines the concept as meaning that “the risks of interruption to energy supply are low”, and the EU defines it as a “stable and abundant supply of energy”. [5-7]
Historically, energy security was achieved by securing access to forests or peat bogs for thermal energy, and to human, animal, wind or water power sources for mechanical energy. With the arrival of the Industrial Revolution, energy security came to depend on the supply of fossil fuels. As a theoretical concept, energy security is most closely related to the oil crises of the 1970s, when embargoes and price manipulations limited oil supply to Western nations. As a result, most industrialised societies still stockpile oil reserves that are equivalent to several months of consumption.
Although oil remains as vital to industrial economies as it was in the 1970s, mainly for transportation and agriculture, it’s now recognised that energy security in modern societies also depends on other infrastructures, such as those supplying gas, electricity, and even data. Furthermore, these infrastructures increasingly interconnect and depend on each other. For example, gas is an important fuel for power production, while the power grid is now required to operate gas pipelines. Power grids are needed to run data networks, and data networks are now needed to run power grids.
Power grids are needed to run data networks, and data networks are needed to operate power grids.
This article investigates the concept of energy security by focusing on the power grid, which has become just as vital to industrial societies as oil. Moreover, electrification is seen as a way to decrease dependency on fossil fuels – think electric vehicles, heat pumps, and wind turbines. The “security” or “reliability” of a power grid can be measured precisely by indicators of continuity such as the “Loss-of-Load Probability” (LOLP), and the “System Average Interruption Duration Index” (SAIDI). Using these indicators, one can only conclude that power grids in industrial societies are very secure.
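Such reliability figures translate into yearly outage hours with a simple conversion – the percentages used here are the published values for Germany, Europe's worst performers, and the US:

```python
HOURS_PER_YEAR = 8760

def outage_hours(reliability_percent):
    """Yearly supply interruption implied by a grid reliability figure."""
    return HOURS_PER_YEAR * (1 - reliability_percent / 100)

print(round(outage_hours(99.996), 1))  # Germany: ~0.4 h (about 21 minutes)
print(round(outage_hours(99.90), 1))   # Latvia, Poland, Lithuania: ~8.8 h
print(round(outage_hours(99.96), 1))   # US: ~3.5 h
```

The conversion makes clear how compressed the scale is: a drop of one tenth of a percentage point in reliability already means a full extra working day without power per year.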
For example, in Germany, power is available 99.996% of the time, which corresponds to an interruption in service of less than half an hour per customer per year. Even the worst performing countries in Europe (Latvia, Poland, Lithuania) have supply interruptions of only eight hours per customer per year, which corresponds to a reliability of 99.90%. The US power grid is in between, with supply interruptions of less than four hours per customer per year (99.96% reliability).
How Secure is a Renewable Power Grid?
In the current operation of infrastructures, the paradigm is that consumers could and should have access to as much electricity, gas, oil, data or water as they want, anytime they want it, for as long as they want it. The only requirement is that they pay the bill. For the power sector, this vision of energy security is quite problematic, for several reasons. First of all, most energy sources from which electricity is made are finite – and maintaining a steady supply of something that’s finite is of course impossible. In the long run, this strategy for maintaining energy security is doomed to fail. In the shorter term, it may disrupt the climate and provoke armed conflicts.
The International Energy Agency (IEA), which was set up following the first oil crisis in the early 1970s, encourages the use of renewable energy sources in order to diversify the energy supply and improve energy security in the long term. A renewable power system is not dependent on foreign energy imports nor vulnerable to fuel price manipulations – which are the main worries in an energy infrastructure that is largely based on fossil fuels. Of course, solar panels and wind turbines have limited lifetimes and need to be manufactured, which also requires resources that could come from abroad or which can become depleted. But, once they are installed, renewable power systems are “secure” in a way and for a period of time that fossil fuels (and atomic energy) are not.
Renewable energy sources pose fundamental challenges to the current understanding of energy security
Furthermore, solar and wind power provide more security concerning physical failure or sabotage, even more so when renewable power production is decentralised. Renewable power plants also have lower CO2-emissions, and the extreme weather events caused by climate change are a risk to energy security as well.
However, in spite of all these advantages, renewable energy sources pose fundamental challenges to the current understanding of energy security. Most importantly, the renewable energy sources with the largest potential – sun and wind – are only intermittently available, depending on the weather and the seasons. This means that solar and wind power don’t meet the criterion that all definitions of energy security consider to be essential: the need for an uninterrupted, unlimited supply of power.
Image: Michael Lokner.
The reliability of a power grid with a high share of solar and wind power would be significantly below today’s standards for continuity of service. [10-14] In such a renewable power grid, a 24/7 power supply can only be maintained at very high costs, because it requires an extensive infrastructure for energy storage, power transmission, and excess generation capacity. This additional infrastructure risks making a renewable power grid unsustainable, because above a certain threshold, the fossil fuel energy used for building, installing and maintaining this infrastructure becomes higher than the fossil fuel energy saved by the solar panels and the wind turbines.
Renewable energy sources like wind and sun have advantages that current definitions of energy security don’t capture
Intermittency is not the only disadvantage of renewable energy sources. Although many media and environmental organisations have painted a picture of solar and wind power as abundant sources of energy (“The sun delivers more energy to Earth in an hour than the world consumes in a year”), reality is more complex. The “raw” supply of solar (and wind) energy is indeed enormous. However, because of their very low power density, solar panels and wind turbines require orders of magnitude more space and materials than thermal power plants to convert this energy into a useful form – even if the mining and distribution of fuels is included. Therefore, a renewable power grid cannot guarantee that consumers have access to as much electricity as they want, even if the weather conditions are optimal.
How Secure is an Off-the-Grid Power System?
Today’s energy policies related to electricity try to reconcile three aims: an uninterrupted and limitless supply of power, affordability of electricity prices, and environmental sustainability. A power grid that is mainly based on fossil fuels and atomic energy cannot achieve the aim of environmental sustainability, and it can only achieve the other goals as long as foreign suppliers do not cut off supplies or raise energy prices (or as long as national or international reserves are not depleted).
However, a renewable power grid cannot reconcile these three goals either. To achieve an unlimited 24/7 supply of power, the infrastructure needs to be oversized, which makes it expensive and unsustainable. Without that infrastructure, a renewable power grid could be affordable and sustainable, but it could never offer an unlimited 24/7 supply of power. Consequently, if we want a power infrastructure that is affordable and sustainable, we need to redefine the concept of energy security – and question the criterion of an unlimited and uninterrupted power supply.
If we look beyond the typical large-scale central infrastructures in industrial societies, it becomes clear that not all provisioning systems offer a limitless supply of resources. Off-the-grid microgeneration – the local production and storage of electricity using batteries and solar PV panels or wind turbines – is one example. In principle, off-the-grid systems can be sized in such a way that they are “always on”. This can be done by following the “worst-month method”, which oversizes generation and storage capacity so that supply can meet demand even during the shortest and darkest days of the year.
Matching supply to demand at all times makes an off-the-grid system very costly and unsustainable, especially in high seasonality climates
However, just like in a hypothetical large-scale renewable power grid, matching supply to demand at all times makes an off-the-grid system very costly and unsustainable, especially in high seasonality climates. [16-18] Therefore, most off-the-grid systems are sized according to a method that aims for a compromise between reliability, economic cost and sustainability. The “loss-of-load probability sizing method” specifies a number of days per year that supply does not match demand. [19-21] In other words, the system is sized not only according to a projected energy demand, but also according to the available budget and/or the available space.
Off-the-grid. Image: Stephen Yang / The Solutions Project.
Sizing an off-the-grid power system in this way generates significant cost reductions, even if “reliability” is reduced just a little bit. For example, a calculation for an off-the-grid house in Spain shows that decreasing the reliability from 99.75% to 99.00% produces a 60% cost reduction, with similar benefits for sustainability. Supply would be interrupted for 87.6 hours per year, compared to 22 hours in the higher reliability system. 
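The loss-of-load sizing method can be illustrated with a toy simulation. The seasonal production curve, the daily demand and the two system sizes below are invented for illustration; they are not the Spanish study's data:

```python
import math

def loss_of_load_fraction(pv_kw, battery_kwh, daily_demand_kwh=5.0):
    """Fraction of days in a synthetic year on which demand is not met.
    Daily solar yield follows a seasonal sine between 2 and 6 kWh per kW
    installed (invented numbers, purely for illustration)."""
    soc = battery_kwh  # state of charge; start with a full battery
    unmet_days = 0
    for day in range(365):
        yield_kwh = pv_kw * (4 + 2 * math.sin(2 * math.pi * (day - 81) / 365))
        soc = min(battery_kwh, soc + yield_kwh) - daily_demand_kwh
        if soc < 0:  # demand could not be met today
            unmet_days += 1
            soc = 0.0
    return unmet_days / 365

# A larger system buys higher reliability at a higher cost:
for pv, batt in [(2.0, 10.0), (1.2, 6.0)]:
    lolp = loss_of_load_fraction(pv, batt)
    print(f"{pv} kW PV, {batt} kWh battery: reliability {1 - lolp:.1%}")
```

Running such a simulation for a range of panel and battery sizes, and picking the smallest system that stays under an acceptable number of unmet days, is the essence of the loss-of-load probability sizing method.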
According to the current understanding of energy security, off-the-grid power systems that are sized in this way are a failure: energy supply doesn’t always meet energy demand. However, off-gridders don’t seem to complain about a lack of energy security – quite the contrary. There’s a simple reason for this: they adapt their energy demand to a limited and intermittent power supply.
In their 2015 book Off-the-Grid: Re-Assembling Domestic Life, Phillip Vannini and Jonathan Taggart document their travels across Canada to interview about 100 off-the-grid households.  Among their most important observations is that voluntary off-gridders use less electricity overall and routinely adapt their energy demand to the weather and the seasons.
Voluntary off-gridders use less electricity overall and routinely adapt their energy demand to the weather and the seasons.
For example, washing machines, vacuum cleaners, power tools, toasters or videogame consoles are not used at all, or they are only used during periods of abundant energy, when batteries can accommodate no further charge. If the sky is overcast, off-gridders adjust their behaviour to draw less power and keep some in reserve for the day after. Vannini and Taggart also observe that voluntary off-gridders seem to feel perfectly happy with levels of lighting or heating that are different from the standards that many in the western world have come to expect. Often, this shows itself in concentrating activities around more localised sources of heat and light.
Similar observations can be made in places where people – involuntarily – depend on infrastructures that are not always on. In less industrialised countries, centralised water, electricity and data networks – where they exist – are often characterised by regular and irregular interruptions in supply. [23-25] However, in spite of the very low reliability of these infrastructures – according to common indicators of continuity – life goes on. Daily household routines are shaped around disruptions of supply systems, which are viewed as normal and a largely accepted part of life. For example, if electricity, water or Internet are only available during certain times of the day, household tasks or other activities are planned accordingly. People also use less energy overall: the infrastructure simply doesn’t allow for a resource-intensive lifestyle.
More Reliable, Less Secure?
The very high “reliability” of power grids in industrial societies is justified by calculating the “value of lost load” (VOLL), which compares the financial loss due to power shortages to the extra investment costs to avoid these shortages.  [26-29] However, the value of lost load is highly dependent on how society is organised. The more it depends on electricity, the higher the financial losses due to power shortages will be.
Current definitions of energy security consider supply and demand to be unrelated, and focus almost entirely on securing energy supply. However, alternative forms of power infrastructures like those described above show that people adapt and match their expectations to a power supply that is limited and not always on. In other words, energy security can be improved, not just by increasing reliability, but also by reducing dependency on energy.
Natural gas storage terminal. Image: Jason Woodhead.
Demand and supply are also interlinked, and mutually influence each other, in 24/7 power systems – but with the opposite effect. Just like “unreliable” off-the-grid power infrastructures foster lifestyles that are less dependent on electricity, “reliable” infrastructures foster lifestyles that are increasingly dependent on electricity.
Industrial societies with “reliable” power grids are in fact the weakest and most fragile in the face of supply interruptions
In their contribution to the 2018 book Infrastructures in Practice: The Dynamics of Demand in Networked Societies, Olivier Coutard and Elizabeth Shove argue that an unlimited and uninterrupted power supply has enabled people in industrial societies to adopt a multitude of power dependent technologies – such as washing machines, air conditioners, refrigerators, automatic doors, or 24/7 mobile internet access – which become “normal” and central to everyday life. At the same time, alternative ways of doing things – such as washing clothes by hand, storing food without electricity, keeping cool without air-conditioning, or navigating and communicating without mobile phones – have withered away, or are withering away.
As a result, energy security is in fact higher in off-the-grid power systems and “unreliable” central power infrastructures, while industrial societies are the weakest and most fragile in the face of supply interruptions. What is generally assumed to be proof of energy security – an unlimited and uninterrupted power supply – is actually making industrial societies ever more vulnerable to supply interruptions: people increasingly lack the skills and the technology to function without a continuous power supply.
Redefining Energy Security
Arriving at a more accurate definition of energy security requires defining the concept not in terms of commodities like kilowatt-hours of electricity, but in terms of energy services, social practices, or basic needs. People don’t need electricity in itself. What they need is to store food, wash clothes, open and close doors, communicate with each other, move from one place to another, see in the dark, and so on. All these things can be achieved with or without electricity – and when electricity is used, with more or less of it.
Defined in this way, energy security is not just about securing the supply of electricity, but also about improving the resilience of the society, so that it becomes less dependent on a continuous supply of power. This includes the resilience of people (do they have the skills to do things without electricity?), the resilience of devices and technological systems (can they handle an intermittent power supply?), and the resilience of institutions (is it legal to operate a power grid that is not always on?). Depending on the resilience of the society, a disruption of the power supply may or may not lead to a disruption of energy services or social practices.
For example, although our food distribution system is dependent on a cold chain that requires a continuous power supply, there are many alternatives. We could adapt refrigerators to an irregular power supply by insulating them much better, we could reintroduce cold cellars (which keep food fresh without electricity), or we could relearn older methods of food storage, like fermentation. We could also improve people’s skills in terms of fresh cooking, switch to diets based on ingredients that don’t need cold storage, and encourage local daily shopping over weekly trips to large supermarkets.
To improve energy security, we need to make infrastructures less reliable.
If we look at energy security in a more holistic way, taking into account both supply and demand, it quickly becomes clear that energy security in industrial societies continues to deteriorate. We keep delegating more and more tasks to machines, computers and large-scale infrastructures, thus increasing our dependency on electricity. Furthermore, the Internet is becoming just as essential as the power grid, and trends like cloud computing, the Internet of Things, and self-driving cars are all based on several interconnected layers of continuously operating infrastructures.
Abandoned power line. Image: Miura Paulison.
Because demand and supply influence each other, we come to a counter-intuitive conclusion: to improve energy security, we need to make the power grid less reliable. This would encourage resilience and substitution, and thus make industrial societies less vulnerable to supply interruptions. Coutard and Shove argue that “it would make sense to pay more attention to opportunities for innovation that are opened when large network systems are weakened and abandoned, or when they become less reliable”. They add that the experiences of voluntary off-gridders “provide some insights into the types of configuration at stake”. 
Arguing for a less reliable power supply is sure to be controversial. “Keeping the lights on” is a phrase often used to justify energy reforms such as building more nuclear plants, or keeping them in operation beyond their planned lifetimes. To achieve real energy security, “keeping the lights on” should be replaced by phrases like “keeping some of the lights on”, “which lights should we turn off next?”, or “what’s wrong with a bit more dark?”. Obviously, a less reliable energy supply would bring fundamental changes to routines and technologies, whether in households, factories, transport systems, or communications networks – but that’s exactly the point. Present ways of life in industrial societies are simply not sustainable.
Kris De Decker. This article was originally written for the UK Demand Centre.
 Winzer, Christian. "Conceptualizing energy security." Energy policy 46 (2012): 36-48. https://www.repository.cam.ac.uk/bitstream/handle/1810/242060/cwpe1151.pdf?sequence=1&isAllowed=y
 Sovacool, Benjamin K., and Ishani Mukherjee. "Conceptualizing and measuring energy security: A synthesized approach." Energy 36.8 (2011): 5343-5355. https://relooney.com/NS4053-Energy/00-Energy-Security_1.pdf
Kruyt, Bert, et al. "Indicators for energy security." Energy Policy 37.6 (2009): 2166-2181. https://www.sciencedirect.com/science/article/pii/S0301421509000883
 Cherp, Aleh, and Jessica Jewell. "The concept of energy security: Beyond the four As." Energy Policy 75 (2014): 415-421. https://www.sciencedirect.com/science/article/pii/S0301421514004960
 Energy security, International Energy Agency. https://www.iea.org/topics/energysecurity/
 Lucas, Javier Noel Valdés, Gonzalo Escribano Francés, and Enrique San Martín González. "Energy security and renewable energy deployment in the EU: Liaisons Dangereuses or Virtuous Circle?." Renewable and Sustainable Energy Reviews 62 (2016): 1032-1046. https://www.researchgate.net/profile/Javier_Valdes4/publication/303361228_Energy_security_and_renewable_energy_deployment_in_the_EU_Liaisons_Dangereuses_or_Virtuous_Circle/links/5a536f45458515e7b72eab26/Energy-security-and-renewable-energy-deployment-in-the-EU-Liaisons-Dangereuses-or-Virtuous-Circle.pdf
 Strambo, Claudia, Måns Nilsson, and André Månsson. "Coherent or inconsistent? Assessing energy security and climate policy interaction within the European Union." Energy Research & Social Science 8 (2015): 1-12. https://www.sciencedirect.com/science/article/pii/S221462961500047X
 CEER Benchmarking Report 6.1 on the Continuity of Electricity and Gas Supply. Data update 2015/2016. Ref: C18-EQS-86-03. 26-July-2018. Council of European Energy Regulators. https://www.ceer.eu/documents/104400/-/-/963153e6-2f42-78eb-22a4-06f1552dd34c
 Average frequency and duration of electric distribution outages vary by states. U.S. Energy Information Administration (EIA). April 5, 2018. https://www.eia.gov/todayinenergy/detail.php?id=35652
 Röpke, Luise. "The development of renewable energies and supply security: a trade-off analysis." Energy policy 61 (2013): 1011-1021. https://www.econstor.eu/bitstream/10419/73854/1/IfoWorkingPaper-151.pdf
 "Evolutions in energy conservation policies in the time of renewables", Nicola Lablanca, Isabella Maschio, Paolo Bertoldi, ECEEE 2015 Summer Study -- First Fuel Now. https://www.eceee.org/library/conference_proceedings/eceee_Summer_Studies/2015/9-dynamics-of-consumption/evolutions-in-energy-conservation-policies-in-the-time-of-renewables/
 “How not to run a modern society on solar and wind power alone”, Kris De Decker, Low-tech Magazine, September 2017.
 Nedic, Dusko, et al. Security assessment of future UK electricity scenarios. Tyndall Centre for Climate Change Research, 2005. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.461.4834&rep=rep1&type=pdf
 Zhou, P., R. Y. Jin, and L. W. Fan. "Reliability and economic evaluation of power system with renewables: A review." Renewable and Sustainable Energy Reviews 58 (2016): 537-547. https://www.sciencedirect.com/science/article/pii/S136403211501727X
 Smil, Vaclav. Power density: a key to understanding energy sources and uses. MIT Press, 2015. https://mitpress.mit.edu/books/power-density
 Landeira, Cristina Cabo, Ángeles López-Agüera, and Fernando Núñez Sánchez. "Loss of Load Probability method applicability limits as function of consumption types and climate conditions in stand-alone PV systems." (2018). https://www.researchgate.net/profile/Cristina_Cabo2/publication/324080184_Loss_of_Load_Probability_method_applicability_limits_as_function_of_consumption_types_and_climate_conditions_in_stand-alone_PV_systems/links/5abca9fa45851584fa6e1efd/Loss-of-Load-Probability-method-applicability-limits-as-function-of-consumption-types-and-climate-conditions-in-stand-alone-PV-systems.pdf
 Singh, S. Sanajaoba, and Eugene Fernandez. "Method for evaluating battery size based on loss of load probability concept for a remote PV system." Power India International Conference (PIICON), 2014 6th IEEE. IEEE, 2014. https://ieeexplore.ieee.org/abstract/document/7117729
“How sustainable is stored sunlight?”, Kris De Decker, Low-tech Magazine.
Chapman, R. N. "Sizing Handbook for Stand-Alone Photovoltaic Storage Systems." Sandia Report, SAND87-1087, Albuquerque (1987). https://prod.sandia.gov/techlib-noauth/access-control.cgi/1987/871087.pdf
Posadillo, R., and R. López Luque. "A sizing method for stand-alone PV installations with variable demand." Renewable Energy 33.5 (2008): 1049-1055. https://www.sciencedirect.com/science/article/pii/S096014810700184X
 Khatib, Tamer, Ibrahim A. Ibrahim, and Azah Mohamed. "A review on sizing methodologies of photovoltaic array and storage battery in a standalone photovoltaic system." Energy Conversion and Management 120 (2016): 430-448. https://staff.najah.edu/media/published_research/2017/01/19/A_review_on_sizing_methodologies_of_photovoltaic_array_and_storage_battery_in_a_standalone_photovoltaic_system.pdf
 Vannini, Phillip, and Jonathan Taggart. Off the grid: re-assembling domestic life. Routledge, 2014. http://lifeoffgrid.ca/off-grid-living-the-book/
"Materialising energy and water resources in everyday practices: insights for securing supply systems", Yolande Strengers, Cecily Maller, Global Environmental Change 22 (2012), pp. 754-763. http://researchbank.rmit.edu.au/view/rmit%3A17990/n2006038376.pdf
 Pillai, N. "Loss of Load Probability of a Power System." (2008). https://mpra.ub.uni-muenchen.de/6953/1/MPRA_paper_6953.pdf
 Al-Rubaye, Mohannad Jabbar Mnati, and Alex Van den Bossche. "Decades without a real grid: a living experience in Iraq." International Conference on Sustainable Energy and Environment Sensing (SEES 2018). 2018. https://biblio.ugent.be/publication/8566224
 Telson, Michael L. "The economics of alternative levels of reliability for electric power generation systems." The Bell Journal of Economics (1975): 679-694. https://www.jstor.org/stable/3003250?seq=1#page_scan_tab_contents
 Schröder, Thomas, and Wilhelm Kuckshinrichs. "Value of lost load: an efficient economic indicator for power supply security? A literature review." Frontiers in energy research 3 (2015): 55. https://www.frontiersin.org/articles/10.3389/fenrg.2015.00055/full
 Ratha, Anubhav, Emil Iggland, and Goran Andersson. "Value of Lost Load: How much is supply security worth?." Power and Energy Society General Meeting (PES), 2013 IEEE. IEEE, 2013. https://www.ethz.ch/content/dam/ethz/special-interest/itet/institute-eeh/power-systems-dam/documents/SAMA/2012/Ratha-SA-2012.pdf
 De Nooij, Michiel, Carl Koopmans, and Carlijn Bijvoet. "The value of supply security: The costs of power interruptions: Economic input for damage reduction and investment in networks." Energy Economics 29.2 (2007): 277-295. https://s3.amazonaws.com/academia.edu.documents/40102922/The_Value_of_Supply_Security_The_Costs_o20151117-24458-1eo081r.pdf?AWSAccessKeyId=AKIAIWOWYYGZ2Y53UL3A&Expires=1544213977&Signature=d01qoyIcopj1rE5HpSWkCGcQzRk%3D&response-content-disposition=inline%3B%20filename%3DThe_value_of_supply_security.pdf
 Coutard, Olivier, and Elizabeth Shove. "Infrastructures, practices and the dynamics of demand." Infrastructures in Practice. Routledge, 2018. 10-22. https://www.routledge.com/Infrastructures-in-Practice-The-Dynamics-of-Demand-in-Networked-Societies/Shove-Trentmann/p/book/9781138476165
 Demand Dictionary of Phrase and Fable, seventeenth edition. Jenny Rinkinen, Elizabeth Shove, Greg Marsden, The Demand Centre, 2018. http://www.demand.ac.uk/wp-content/uploads/2018/07/Demand-Dictionary.pdf
The circular economy – the newest magical word in the sustainable development vocabulary – promises economic growth without destruction or waste. However, the concept only focuses on a small part of total resource use and does not take into account the laws of thermodynamics.
[Read this article on our solar powered website -- if the weather is good]
Illustration: Diego Marmolejo.
Introducing the Circular Economy
The circular economy has become, for many governments, institutions, companies, and environmental organisations, one of the main components of a plan to lower carbon emissions. In the circular economy, resources would be continually re-used, meaning that there would be no more mining activity or waste production. The stress is on recycling, made possible by designing products so that they can easily be taken apart.
Attention is also paid to developing an “alternative consumer culture”. In the circular economy, we would no longer own products, but would loan them. For example, a customer could pay not for lighting devices but for light, while the company remains the owner of the lighting devices and pays the electricity bill. A product thus becomes a service, which is believed to encourage businesses to improve the lifespan and recyclability of their products.
The circular economy is presented as an alternative to the “linear economy” – a term coined by the proponents of circularity, referring to the fact that industrial societies turn valuable resources into waste. However, while there’s no doubt that the current industrial model is unsustainable, the question is how different the so-called circular economy would be.
Several scientific studies (see references) describe the concept as an “idealised vision”, a “mix of various ideas from different domains”, or a “vague idea based on pseudo-scientific concepts”. There are three main points of criticism, which we discuss below.
Too Complex to Recycle
The first dent in the credibility of the circular economy is the fact that the recycling process of modern products is far from 100% efficient. A circular economy is nothing new. In the Middle Ages, old clothes were turned into paper, food waste was fed to chickens or pigs, and new buildings were made from the remains of old ones. The difference between then and now lies in the resources used.
Before industrialisation, almost everything was made from materials that were either decomposable – like wood, reeds, or hemp – or easy to recycle or re-use – like iron and bricks. Modern products are composed of a much wider diversity of (new) materials, which are mostly not decomposable and are also not easily recycled.
For example, a recent study of the modular Fairphone 2 – a smartphone designed to be recyclable and have a longer lifespan – shows that the use of synthetic materials, microchips, and batteries makes closing the circle impossible. Only 30% of the materials used in the Fairphone 2 can be recuperated. A study of LED lights had a similar result.
The large-scale use of synthetic materials, microchips, and batteries makes closing the circle impossible.
The more complex a product, the more steps and processes it takes to recycle. In each step of this process, resources and energy are lost. Furthermore, in the case of electronic products, the production process itself is much more resource-intensive than the extraction of the raw materials, meaning that recycling the end product can only recuperate a fraction of the input. And while some plastics are indeed being recycled, this process only produces inferior materials (“downcycling”) that enter the waste stream soon afterwards.
The low efficiency of the recycling process is, on its own, enough to take the ground from under the concept of the circular economy: the loss of resources during the recycling process always needs to be compensated with more over-extraction of the planet’s resources. Recycling processes will improve, but recycling is always a trade-off between maximum material recovery and minimum energy use. And that brings us to the next point.
How to Recycle Energy Sources?
The second dent in the credibility of the circular economy is the fact that 20% of total resources used worldwide are fossil fuels. More than 98% of that is burnt as a source of energy and can’t be re-used or recycled. At best, the excess heat from, for example, the generation of electricity, can be used to replace other heat sources.
As energy is transferred or transformed, its quality diminishes (second law of thermodynamics). For example, it’s impossible to operate one car or one power plant with the excess heat from another. Consequently, there will always be a need to mine new fossil fuels. Besides, recycling materials also requires energy, both through the recycling process and the transportation of recycled and to-be-recycled materials.
To this, the supporters of the circular economy have a response: we will shift to 100% renewable energy. But this doesn’t make the circle round: to build and maintain renewable energy plants and the accompanying infrastructure, we also need resources (both energy and materials). What’s more, the technology to harvest and store renewable energy relies on difficult-to-recycle materials. That’s why solar panels, wind turbines and lithium-ion batteries are not recycled, but landfilled or incinerated.
Input Exceeds Output
The third dent in the credibility of the circular economy is the biggest: global resource use – both energy and materials – keeps increasing year by year. The use of resources grew by 1,400% in the last century: from 7 gigatonnes (Gt) in 1900 to 62 Gt in 2005 and 78 Gt in 2010. That’s an average growth of about 3% per year – more than double the rate of population growth.
Growth makes a circular economy impossible, even if all raw materials were recycled and all recycling was 100% efficient. The amount of used material that can be recycled will always be smaller than the material needed for growth. To compensate for that, we have to continuously extract more resources.
Growth makes a circular economy impossible, even if all raw materials were recycled and all recycling was 100% efficient.
The difference between demand and supply is bigger than you might think. If we look at the whole life cycle of resources, it becomes clear that the proponents of a circular economy focus on only a very small part of the whole system, and thereby misunderstand the way it operates.
Accumulation of Resources
A considerable share of all resources – about a third of the total – is neither recycled, incinerated, nor dumped: it accumulates in buildings, infrastructure, and consumer goods. In 2005, 62 Gt of resources were used globally. After subtracting energy sources (fossil fuels and biomass) and waste from the mining sector, the remaining 30 Gt were used to make material goods. Of these, 4 Gt were used to make products that last less than one year (disposable products).
Illustration: Diego Marmolejo.
The other 26 Gt were accumulated in buildings, infrastructure, and consumer goods that last for more than a year. In the same year, 9 Gt of surplus resources were disposed of, meaning that the “stocks” of material capital grew by 17 Gt in 2005. In comparison: the total waste that could be recycled in 2005 was only 13 Gt (4 Gt of disposable products and 9 Gt of surplus resources), of which only a third (4 Gt) can be effectively recycled.
About a third of all resources are neither recycled, nor incinerated or dumped: they are accumulated in buildings, infrastructure, and consumer goods.
Only 9 Gt is then put in a landfill, incinerated, or dumped – and it is this 9 Gt that the circular economy focuses on. But even if that was all recycled, and if the recycling processes were 100% efficient, the circle would still not be closed: 63 Gt in raw materials and 30 Gt in material products would still be needed.
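The bookkeeping above can be verified with a few lines of arithmetic. The sketch below uses the rounded gigatonne figures quoted in the text (based on Haas et al. 2015); the variable names are our own.

```python
# Global material flows in 2005, in gigatonnes (Gt), as quoted above.
total_extracted = 62      # all resources used globally in 2005
material_goods = 30       # left after subtracting energy sources and mining waste
disposable_products = 4   # goods lasting less than one year
added_to_stocks = material_goods - disposable_products   # goods lasting longer
disposed_of = 9           # old stock leaving use in the same year

net_stock_growth = added_to_stocks - disposed_of      # growth of material capital
recyclable_waste = disposable_products + disposed_of  # all waste that year

assert added_to_stocks == 26    # accumulated in buildings, infrastructure, goods
assert net_stock_growth == 17   # "stocks ... grew by 17 Gt in 2005"
assert recyclable_waste == 13   # of which only about 4 Gt is effectively recycled
```

However the 13 Gt of waste is treated, it cannot offset the 26 Gt flowing into stocks each year – which is the accounting gap the article describes.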
As long as we keep accumulating raw materials, closing the material life cycle remains an illusion, even for materials that are, in principle, recyclable. For example, recycled metals can only supply 36% of the yearly demand for new metal, even though metal has a relatively high recycling rate of about 70%. We still feed more raw materials into the system than can be made available through recycling – so there are simply not enough recyclable raw materials to put a stop to the continuously expanding extractive economy.
The True Face of the Circular Economy
A more responsible use of resources is of course an excellent idea. But to achieve that, recycling and re-use alone aren’t enough. Since 71% of all resources cannot be recycled or re-used (44% are energy sources and 27% are added to existing stocks), real improvement can only come from reducing total resource use.
A circular economy would therefore demand that we use less fossil fuel (which is not the same as using more renewable energy), and that we accumulate fewer raw materials in commodities. Most importantly, we need to make less stuff: fewer cars, fewer microchips, fewer buildings. This would yield a double benefit: we would need fewer resources, while the supply of discarded materials available for re-use and recycling would keep growing for many years to come.
It seems unlikely that the proponents of the circular economy would accept these additional conditions. The concept of the circular economy is intended to align sustainability with economic growth – in other words, more cars, more microchips, more buildings. For example, the European Union states that the circular economy will “foster sustainable economic growth”.
Even the limited goals of the circular economy – total recycling of a fraction of resources – demands an extra condition that proponents probably won’t agree with: that everything is once again made with wood and simple metals, without using synthetic materials, semi-conductors, lithium-ion batteries or composite materials.
Kris De Decker
Haas, Willi, et al. "How circular is the global economy?: An assessment of material flows, waste production, and recycling in the European Union and the world in 2005." Journal of Industrial Ecology 19.5 (2015): 765-777.
Murray, Alan, Keith Skene, and Kathryn Haynes. "The circular economy: An interdisciplinary exploration of the concept and application in a global context." Journal of Business Ethics 140.3 (2017): 369-380.
Gregson, Nicky, et al. "Interrogating the circular economy: the moral economy of resource recovery in the EU." Economy and Society 44.2 (2015): 218-243.
Krausmann, Fridolin, et al. "Global socioeconomic material stocks rise 23-fold over the 20th century and require half of annual resource use." Proceedings of the National Academy of Sciences (2017): 201613773.
Korhonen, Jouni, Antero Honkasalo, and Jyri Seppälä. "Circular economy: the concept and its limitations." Ecological economics 143 (2018): 37-46.
Fellner, Johann, et al. "Present potentials and limitations of a circular economy with respect to primary raw material demand." Journal of Industrial Ecology 21.3 (2017): 494-496.
Reuter, Markus A., Antoinette van Schaik, and Miquel Ballester. "Limits of the Circular Economy: Fairphone Modular Design Pushing the Limits." 2018
Reuter, M. A., and A. Van Schaik. "Product-Centric Simulation-based design for recycling: case of LED lamp recycling." Journal of Sustainable Metallurgy 1.1 (2015): 4-28.
Reuter, Markus A., Antoinette van Schaik, and Johannes Gediga. "Simulation-based design for resource efficiency of metal production and recycling systems: Cases-copper production and recycling, e-waste (LED lamps) and nickel pig iron." The International Journal of Life Cycle Assessment 20.5 (2015): 671-693.
Low-tech Magazine was born in 2007 and has seen minimal changes ever since. Because a website redesign was long overdue — and because we try to practice what we preach — we decided to build a low-tech, self-hosted, and solar-powered version of Low-tech Magazine. The new blog is designed to radically reduce the energy use associated with accessing our content.
First prototype of the solar powered server that runs the new website.
We were told that the Internet would “dematerialise” society and decrease energy use. Contrary to this projection, it has become a large and rapidly growing consumer of energy itself. According to the latest estimates, the entire network already consumes 10% of global electricity production, with data traffic doubling roughly every two years.
In order to offset the negative consequences associated with high energy consumption, renewable energy has been proposed as a means to lower emissions from powering data centers. For example, Greenpeace's yearly ClickClean report ranks major Internet companies based on their use of renewable power sources.
However, running data centers on renewable power sources is not enough to address the growing energy use of the Internet. To start with, the Internet already uses three times more energy than all wind and solar power sources worldwide can provide. Furthermore, manufacturing, and regularly replacing, renewable power plants also requires energy, meaning that if data traffic keeps growing, so will the use of fossil fuels.
Running data centers on renewable power sources is not enough to address the growing energy use of the Internet.
Finally, solar and wind power are not always available, which means that an Internet running on renewable power sources would require infrastructure for energy storage and/or transmission that is also dependent on fossil fuels for its manufacture and replacement. Powering websites with renewable energy is not a bad idea; however, the trend towards growing energy use must also be addressed.
To start with, content is becoming increasingly resource-intensive. This has a lot to do with the growing importance of video, but a similar trend can be observed among websites.
The size of the average web page (defined as the average page size of the 500,000 most popular domains) increased from 0.45 megabytes (MB) in 2010 to 1.7 megabytes in June 2018. For mobile websites, the average “page weight” rose tenfold from 0.15 MB in 2011 to 1.6 MB in 2018. Using different measurement methods, other sources report average page sizes of up to 2.9 MB in 2018.
The growth in data traffic surpasses the advances in energy efficiency (the decline in the energy required to transfer 1 megabyte of data over the Internet), resulting in ever more energy use. “Heavier” or “larger” websites not only increase energy use in the network infrastructure, but also shorten the lifetime of computers – larger websites require more powerful computers to access them. This means that more computers need to be manufactured, which is a very energy-intensive process.
Being always online doesn't combine well with renewable energy sources such as wind and solar power, which are not always available.
A second reason for growing Internet energy consumption is that we spend more and more time on-line. Before the arrival of portable computing devices and wireless network access, we were only connected to the network when we had access to a desktop computer in the office, at home, or in the library. We now live in a world in which no matter where we are, we are always on-line, including, at times, via more than one device simultaneously.
“Always-on” Internet access is accompanied by a cloud computing model – allowing more energy efficient user devices at the expense of increased energy use in data centers. Increasingly, activities that could perfectly well happen off-line – such as writing a document, filling in a spreadsheet, or storing data – now require continuous network access. This does not combine well with renewable energy sources such as wind and solar power, which are not always available.
Low-tech Web Design
Our new web design addresses both these issues. Thanks to a low-tech web design, we managed to decrease the average page size of the blog by a factor of five compared to the old design – all while making the website visually more attractive (and mobile-friendly). Secondly, our new website runs 100% on solar power, not just in words, but in reality: it has its own energy storage and will go off-line during longer periods of cloudy weather.
The Internet is not an autonomous being. Its growing energy use is the consequence of actual decisions made by software developers, web designers, marketing departments, publishers and internet users. With a lightweight, off-the-grid solar-powered website, we want to show that other decisions can be made.
With 36 of roughly 100 articles now online, the average page weight on the solar powered website is roughly five times below that of the previous design.
To start with, the new website design reverses the trend towards increasingly larger page sizes. With 36 of roughly 100 articles now online, the average page weight on the solar powered website is 0.77 MB — roughly five times below that of the previous design, and less than half the average page size of the 500,000 most popular blogs in June 2018.
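The quoted figures are easy to cross-check, assuming the 1.7 MB average for popular domains cited earlier:

```python
# Page weights in megabytes, as quoted above.
new_site_avg = 0.77   # solar powered Low-tech Magazine (36 articles online)
web_avg_2018 = 1.7    # average page of the 500,000 most popular domains, June 2018

# "less than half the average page size" of the web in June 2018:
assert new_site_avg < web_avg_2018 / 2
```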
A web page speed test from the old and the new Low-tech Magazine. Page size has decreased more than sixfold, number of requests has decreased fivefold, and download speed has increased tenfold. Note that we did not design the website for speed, but for low energy use. It would be faster still if the server would be placed in a data center and/or in a more central location in the Internet infrastructure.
Source: Pingdom.
Static Site
One of the fundamental choices we made was to build a static website. Most of today’s websites use server side programming languages that generate the website on the fly by querying a database. This means that every time someone visits a web page, it is generated on demand.
A static website, by contrast, is generated once and exists as a simple set of documents on the server’s hard disc. It’s always there – not just when someone visits the page. Static websites are thus based on file storage, whereas dynamic websites depend on recurrent computation. Static websites consequently require less processing power, and thus less energy.
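The distinction can be made concrete with a toy static site generator. This is a minimal sketch of the general technique, not the actual toolchain behind the new website; the one-file-per-page layout and the template are assumptions for the example.

```python
import pathlib

# A fixed page template; a real generator would keep this in its own file.
TEMPLATE = "<html><head><title>{title}</title></head><body>{body}</body></html>"

def build_site(src_dir, out_dir):
    """Render every source page to a finished HTML file, once.
    Serving the site afterwards is plain file delivery: no database
    queries and no per-visit computation."""
    src, out = pathlib.Path(src_dir), pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for page in sorted(src.glob("*.txt")):
        # Convention for the example: first line is the title, the rest the body.
        title, _, body = page.read_text().partition("\n")
        html = TEMPLATE.format(title=title.strip(), body=body.strip())
        (out / (page.stem + ".html")).write_text(html)
```

Run once after each edit, the generator leaves only finished files for the web server to hand out – which is what makes a static site cheap enough to serve from a small solar powered machine.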
The choice of a static site makes it possible to serve the website economically from our home office in Barcelona. Doing the same with a database-driven website would be nearly impossible, because it would require too much energy. It would also be a big security risk. Although a web server with a static site can still be hacked, there are significantly fewer attack routes and the damage is more easily repaired.
The main challenge was to reduce page size without making the website less attractive. Because images take up most of the bandwidth, it would be easy to obtain very small page sizes and lower energy use by eliminating images, reducing their number, or making them much smaller. However, visuals are an important part of Low-tech Magazine’s appeal, and the website would not be the same without them.
By dithering, we can make images ten times less resource-intensive, even though they are displayed much larger than on the old website.
Instead, we chose to apply an obsolete image compression technique called “dithering”. The number of colours in an image, combined with its file format and resolution, contributes to the size of an image. Thus, instead of using full-colour high-resolution images, we chose to convert all images to black and white, with four levels of grey in-between.
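The site's actual dithering plugin is not shown here, but the underlying technique, Floyd-Steinberg error diffusion down to a handful of grey tones, can be sketched in pure Python:

```python
def dither(pixels, levels=4):
    """Floyd-Steinberg error-diffusion dithering of a greyscale image
    (pixel values 0-255, rows of equal length) down to `levels` grey
    tones: black, white, and two greys for the default of four."""
    h, w = len(pixels), len(pixels[0])
    img = [[float(v) for v in row] for row in pixels]  # working copy
    step = 255.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            # Snap to the nearest allowed tone, clamped to the range.
            new = min(max(round(old / step), 0), levels - 1) * step
            img[y][x] = new
            err = old - new
            # Diffuse the rounding error onto neighbouring pixels,
            # which preserves the impression of intermediate shades.
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return [[int(v) for v in row] for row in img]
```

Because each output pixel is one of only four values, the image compresses far better than a full-colour photograph, which is where the bandwidth savings come from.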
These black-and-white images are then coloured according to the relevant content category via the browser's native image manipulation capabilities. Compressed through this dithering plugin, images featured in the articles add much less load to the content: compared to the old website, the images are roughly ten times less resource-intensive.

Default typeface / No logo
Every resource loaded, including typefaces and logos, means an additional request to the server, requiring storage space and energy use. Therefore, our new website does not load a custom typeface and removes the font-family declaration, meaning that visitors see the default typeface of their browser.
We use a similar approach for the logo. In fact, Low-tech Magazine never had a real logo, just a banner image of a spear held as a low-tech weapon against prevailing high-tech claims.
Instead of a designed logotype, which would require the production and distribution of custom typefaces and imagery, Low-tech Magazine's new identity consists of a single typographic move: using the left-facing arrow in place of the hyphen in the blog's name: LOW←TECH MAGAZINE.

No Third-Party Tracking, No Advertising Services, No Cookies
Web analysis software such as Google Analytics records what happens on a website — which pages are most viewed, where visitors come from, and so on. These services are popular because few people host their own website. However, exchanging these data between the server and the computer of the webmaster generates extra data traffic and thus energy use.
With a self-hosted server, we can make and view these measurements on the same machine: every web server generates logs of what happens on the computer. These (anonymous) logs are only viewed by us and are not used to profile visitors.
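As an illustration, a few lines of Python are enough to turn such logs into anonymous page-view counts. This is a hedged sketch assuming the Common Log Format, not the tooling we actually use:

```python
from collections import Counter

def page_views(log_lines):
    """Count successful GET requests per path from access-log lines in
    the Common Log Format. Only the path and status code are used;
    nothing is sent to a third party and no visitor is profiled."""
    counts = Counter()
    for line in log_lines:
        try:
            request = line.split('"')[1]           # e.g. 'GET /solar.html HTTP/1.1'
            status = line.split('"')[2].split()[0]
            method, path, _ = request.split()
        except (IndexError, ValueError):
            continue                               # skip malformed lines
        if method == "GET" and status == "200":
            counts[path] += 1
    return counts
```

Since the analysis runs on the same machine that serves the pages, it adds no network traffic at all.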
With a self-hosted server, there is no need for third-party tracking and cookies.
Low-tech Magazine has been running Google Adsense advertisements since the beginning in 2007. Although these are an important financial resource to maintain the blog, they have two important downsides. The first is energy use: advertising services raise data traffic and thus energy use.
Secondly, Google collects information from the blog's visitors, which forces us to craft extensive privacy statements and cookie warnings — which also consume data and annoy visitors. Therefore, we have replaced Adsense with other financing options (read more below). We use no cookies at all.

How often will the website be off-line?
Quite a few web hosting companies claim that their servers are running on renewable energy. However, even when they actually generate solar power on-site, and do not merely “offset” fossil fuel power use by planting trees or the like, their websites are always on-line.
This means that either they have a giant battery storage system on-site (which makes their power system unsustainable), or that they are relying on grid power when there is a shortage of solar power (which means that they do not really run on 100% solar power).
The 50W solar PV panel. On top of it is a 10W panel powering a lighting system.
In contrast, this website runs on an off-the-grid solar power system with its own energy storage, and will go off-line during longer periods of cloudy weather. Less than 100% reliability is essential for the sustainability of an off-the-grid solar system, because above a certain threshold the fossil fuel energy used for producing and replacing the batteries is higher than the fossil fuel energy saved by the solar panels.
How often the website will be off-line remains to be seen. The web server is now powered by a new 50 Wp solar panel and a two year old 12V 7Ah lead-acid battery. Because the solar panel is shaded during the morning, it receives direct sunlight for only 4 to 6 hours per day. Under optimal conditions, the solar panel thus generates 6 hours x 50 watt = 300 Wh of electricity.
The web server uses between 1 and 2.5 watts of power (depending on the number of visitors), meaning that it requires between 24 Wh and 60 Wh of electricity per day. Under optimal conditions, we should thus have sufficient energy to keep the web server running for 24 hours per day. Excess energy production can be used for household applications.
We expect to keep the website on-line during one or two days of bad weather, after which it will go off-line.
However, during cloudy days, especially in winter, daily energy production could be as low as 4 hours x 10 watts = 40 watt-hours per day, while the server requires between 24 and 60 Wh per day. The usable battery storage is roughly 40 Wh, taking into account 30% charging and discharging losses and 33% depth-of-discharge (the solar charge controller shuts the system down when battery voltage drops to 12V).
Consequently, the solar powered server will remain on-line during one or two days of bad weather, but not for longer. However, these are estimates, and we may add a second 7Ah battery in autumn if necessary. We aim for an "uptime" of 90%, meaning that the website will be off-line for an average of 35 days per year.
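The arithmetic behind these estimates can be captured in a few lines. The function below is only a back-of-the-envelope model using the rough figures above (about 40 Wh of usable storage and about 40 Wh of daily production in poor winter weather), not a simulation of real weather:

```python
def uptime_days(usable_battery_wh, daily_solar_wh, server_watts):
    """Days until the battery is empty, assuming a constant (poor)
    daily solar yield and a constant server power draw."""
    daily_use_wh = server_watts * 24
    deficit = daily_use_wh - daily_solar_wh
    if deficit <= 0:
        return float("inf")  # solar yield covers the load indefinitely
    return usable_battery_wh / deficit

# With ~40 Wh of usable storage and ~40 Wh/day of winter production,
# a 2.5 W load survives two overcast days, while a 1 W load keeps
# running for as long as the poor weather delivers 40 Wh per day.
```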
First prototype with the lead-acid battery (12V 7Ah) on the left, and the Li-Po UPS battery (3.7V 6,600mAh) on the right. The lead-acid battery provides the bulk of the energy storage, while the Li-Po battery allows the server to shut down without damaging the hardware (it will be replaced by a much smaller Li-Po battery).

When is the best time to visit?
The accessibility of this website depends on the weather in Barcelona, Spain, where the solar-powered web server is located. To help visitors “plan” their visits to Low-tech Magazine, we provide them with several clues.
A battery meter provides crucial information: it tells the visitor whether the blog is about to go down -- or whether it's "safe" to start reading. The design features a background colour whose height indicates the charge of the solar-charged battery that powers the web server. A decreasing level indicates that night has fallen or that the weather is bad.
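One way such a meter could be derived from a battery voltage reading is sketched below. The linear voltage-to-charge mapping and the 12.9 V "full" point are assumptions for illustration, not the site's actual calculation; only the 12.0 V cutoff comes from the charge controller behaviour described above:

```python
def battery_level(voltage, empty=12.0, full=12.9):
    """Map a lead-acid battery voltage to a 0-100% meter reading.
    The 12.0 V floor matches the charge controller cutoff mentioned
    earlier; the 12.9 V 'full' point is an assumed value."""
    fraction = (voltage - empty) / (full - empty)
    return round(max(0.0, min(1.0, fraction)) * 100)
```

In practice, lead-acid voltage is only a rough proxy for charge, so a real meter would smooth readings over time.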
In addition to the battery level, other information about the web server is visible on a statistics dashboard. This includes contextual information about the server's location: the local time, current sky conditions, the upcoming forecast, and the time since the server last shut down due to insufficient power.

Computer Hardware
SERVER. The website runs on an Olimex A20 computer. This machine has 2 GHz of processing power, 1 GB of RAM, and 16 GB of storage. The server draws 1 to 2.5 watts of power.
INTERNET CONNECTION. The server is connected to a 100 Mbps fibre internet connection. For now, the router is powered by grid electricity and requires 10 watts of power. We are investigating how to replace this energy-hungry router with a more efficient one that can be solar-powered, too.
SOLAR PV SYSTEM. The server runs on a 50 Wp solar panel and one 12V 7Ah lead-acid battery (energy storage capacity will be doubled at the end of this month). The system is managed by a 20A solar charge controller.
The solar powered Low-tech Magazine is a work in progress. For now, the grid-powered Low-tech Magazine remains on-line. Readers will be encouraged to visit the solar powered website when it is available. What happens later is not yet clear. There are several possibilities, but much will depend on our experience with the solar powered server.
Until we decide how to integrate the old and the new website, making and reading comments will only be possible on the grid-powered Low-tech Magazine, which is still hosted at TypePad. If you want to send a comment related to the solar powered web server itself, you can do so by commenting on this page or by sending an e-mail to solar (at) lowtechmagazine (dot) com.

Can I help?
Yes, you can.
On the one hand, we're looking for ideas and feedback to further improve the website and reduce its energy use. We will document the project extensively so that others can build low-tech websites too.
On the other hand, we're hoping for people to support this project with a financial contribution. Advertising services, which have maintained Low-tech Magazine since its start in 2007, are not compatible with our lightweight web design. Therefore, we are searching for other ways to finance the website:
We will soon offer print-on-demand copies of the blog. These publications will allow you to read Low-tech Magazine on paper, on the beach, in the sun, or wherever and whenever you want.
We remain open to advertisements, but these can only take the form of a static banner image that links to the website of the advertiser. We do not accept advertisers who are incompatible with our mission.
Related article: How to build a low-tech internet.