It’s one of Hollywood’s favorite bits of pseudoscience: human beings use only 10 percent of their brain, and awakening the remaining 90 percent—supposedly dormant—allows otherwise ordinary human beings to display extraordinary mental abilities. In Phenomenon (1996), John Travolta gains the ability to predict earthquakes and instantly learns foreign languages. Scarlett Johansson becomes a superpowered martial-arts master in Lucy (2014). And in Limitless (2011) Bradley Cooper writes a novel overnight.
This ready-made blueprint for fantasy films is also a favorite among the general public. In a survey, 65 percent of respondents agreed with the statement, “People only use 10 percent of their brain on a daily basis.” But the truth is that we use all of our brain all of the time.
How do we know? For one thing, if we needed only 10 percent of our brain, the majority of brain injuries would have no discernible consequences, since the damage would affect parts of the brain that weren’t doing anything to begin with. We also know that natural selection discourages the development of useless anatomical structures: early humans who devoted scarce physical resources to growing and maintaining huge amounts of excess brain tissue would have been outcompeted by those who spent those precious resources on things more necessary for survival and reproductive success. Tougher immune systems, stronger muscles, better-looking hair—just about anything would be more useful than having a head full of inert tissue.
We’ve been able to back up these logical conclusions with hard evidence. Imaging techniques, such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), allow doctors and scientists to map brain activity in real time. The data clearly shows that large areas of the brain—far more than 10 percent—are used for all sorts of activity, from seemingly simple tasks like resting or looking at pictures to more complex ones like reading or doing math. Scientists have yet to find an area of the brain that doesn’t do anything.
So how did we come to believe that 90 percent of our brain is useless? The myth is often incorrectly attributed to 19th-century psychologist William James, who proposed that most of our mental potential goes untapped. But he never specified a percentage. Albert Einstein—a magnet for misattribution of quotes—has also been held responsible. In reality, the concept most likely came from the American self-help industry. One of the earliest mentions appears in the preface to Dale Carnegie’s 1936 mega best seller, How to Win Friends and Influence People. The idea that we have harnessed only a fraction of our brain’s full potential has been a staple for motivational gurus, New Age hucksters, and uninspired screenwriters ever since.
Obviously, this is bad news for anyone hoping to find the secret to becoming a genius overnight. The good news, though, is that hard work still works. There is plenty of reason to believe that you can build brainpower by regularly working at challenging mental tasks, such as playing a musical instrument, doing arithmetic, or reading a novel.
An earlier version of this article was published on the Britannica blog Advocacy for Animals.
The partnership between humans and animals dates back to the first domestication of animals in the Stone Age, as long as 9,000 years ago. But never have animals provided such dedicated and particular help to humans as they do today in the form of trained service, or assistance, to people with disabilities. These animals, usually dogs, help people accomplish tasks that would otherwise be prohibitively difficult or simply impossible. Service animals are not pets but working animals doing a job. Thus, legislation—such as the Americans with Disabilities Act (1990) in the United States and the Disability Discrimination Act (1995) in the United Kingdom—makes service animals exempt from rules that prohibit animals from public places and businesses.
The most familiar service animals are guide dogs who help visually impaired people move about safely. Systematic training of guide dogs originated in Germany during World War I to aid blinded veterans. In the late 1920s Dorothy Harrison Eustis, an American dog trainer living in Switzerland, heard of the program and wrote a magazine article about it. The publicity led her to her first student, Morris Frank, with whose help she established a similar training school in the United States in 1929, the Seeing Eye (now located in Morristown, New Jersey).
Puppies are often bred for the purpose by the various organizations that train guide dogs. German shepherds, Labrador retrievers, and Labrador-golden retriever crosses are the most widely used breeds because of their calm temperaments, intelligence, natural desire to be helpful, and good constitutions. Puppies spend their first year with foster families who socialize them and prepare them for later training by teaching them basic obedience skills. At the age of approximately 18 months, guide dogs enter formal training, which lasts from about three to five months. During this period the dogs learn to adjust to a harness, stop at curbs, gauge the human partner’s height when traveling in low or obstructed places, and disobey a command when obedience will endanger the person.
In recent years, hearing dogs have become increasingly common. These dogs, usually mixed-breed rescues from animal shelters, are trained to alert their human partners to ordinary sounds, such as an alarm clock, a baby’s cry, or a telephone. The dogs raise the alert by touching the partner with a paw and then leading him or her to the source of the sound. They are also trained to recognize danger signals—such as fire alarms and sounds of intruders—and to raise the alert by touching with a paw and then lying down in a special “alert” posture, at which time the human partner can take appropriate action.
Dogs can be trained for a great variety of assistance purposes. For example, Service Dogs for America (SDA)/Great Plains Assistance Dogs Foundation, Inc., trains several categories of assistance animals, including service dogs who help people who use wheelchairs and other mobility devices; hearing dogs; seizure-alert or seizure-response dogs, who help persons with seizure disorders by activating an electronic alert system when symptoms occur (some can even predict the onset of a seizure); and therapeutic companion dogs, who provide emotional support for people in hospices, hospitals, and other situations in which loneliness and lack of stimulation are continual problems. There are many programs that train and certify pet animals, especially dogs and cats, as therapy animals who visit such institutions and bring much-welcomed companionship to patients.
Animals are also used in programs such as animal-assisted therapy (AAT). In the words of the Australia-based Delta Society, AAT is a “goal-directed intervention” that utilizes the motivating and rewarding presence of animals, facilitated by trained human professionals, to help patients make cognitive and physical improvements. For example, an elderly patient in a nursing home might be given the task of buckling a dog’s collar or feeding small treats to a cat, activities that enhance fine motor skills. Goals are set for the patients, and their progress is measured.
Dogs and cats are not the only animals who can assist humans with disabilities. Capuchin monkeys—small, quick, and intelligent—can help people who are paralyzed or have other severe impairments to their mobility, such as multiple sclerosis. These monkeys perform essential tasks such as turning on lights and picking up dropped objects. One of the more unusual assistance animals is the guide horse. An experimental program in the United States trains miniature horses to guide the visually impaired in the same way that guide dogs do. The tiny horses may be an alternative for people who are allergic to dogs or who have equestrian backgrounds and are more comfortable with horses.
Certain dogs and other animals have special skills similar to those of the seizure-assistance dogs, such as the ability to detect a diabetic’s drop in blood sugar and alert the person before danger occurs. The sometimes uncanny natural abilities of animals can benefit humans in many ways. Reputable organizations that train assistance animals also take steps to ensure that the animals are cherished and lead rewarding, enjoyable, and healthy lives. When the animals’ helping careers are over, provision is made for their well-deserved retirement.
Plastic is cheap and durable and has revolutionized human activity. Modern life is addicted to and dependent on this versatile substance, which is found in everything from computers to medical equipment to food packaging. Unfortunately, an estimated 19 billion pounds (more than 8.5 million metric tons) of plastic waste ends up in our oceans every year. Much of this plastic comes from single-use packaging, such as soda bottles and produce bags, and from other single-use products such as straws and disposable diapers. One study suggested that by the year 2050 there will be more plastic by weight in the oceans than fish!
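As a quick sanity check on those figures, converting 19 billion pounds to metric tons (using the standard factor of roughly 2,204.6 pounds per metric ton) does land just above the 8.5 million mark:

```python
# Convert the article's plastic-waste estimate from pounds to metric tons.
LBS_PER_METRIC_TON = 2204.62  # 1 metric ton = 1,000 kg

def pounds_to_metric_tons(lbs: float) -> float:
    return lbs / LBS_PER_METRIC_TON

tons = pounds_to_metric_tons(19e9)  # 19 billion pounds
print(f"{tons / 1e6:.1f} million metric tons")  # about 8.6 million
```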
Plastic pollution is more than unsightly. It has a deadly and direct effect on wildlife. Many marine organisms get physically entangled in plastic trash and either drown or slowly starve to death. Others eat the plastics, mistaking the ubiquitous materials for food. Leatherback sea turtles often confuse plastic bags with their jellyfish prey and asphyxiate. Seabirds, especially albatrosses, and other birds that scoop food from the sea have been found dead on their nests, their bellies too full of plastics to survive. A recent study found plastic trash in 90 percent of seabirds, with pieces ranging from bottle caps to rice-sized fragments that look like seeds.
Perhaps even more worrisome is microplastic pollution. The vast majority of plastics are not biodegradable, meaning they break down into smaller and smaller particles but never leave the environment entirely. Pieces smaller than 5 mm (0.2 inch) are classified as microplastics, and it is estimated that a significant portion of all plastic pollution in the oceans is now in this category. Microplastics also come from cosmetics, body washes, and toothpastes, which use tiny pieces of plastics as exfoliants and abrasives, and from items of synthetic clothing that shed minute fibers each time they are washed. These particles and fibers are too small for waste management systems to filter and are directly discharged into the oceans. There is concern that these microplastics and/or the endocrine-disrupting chemicals they contain will bioaccumulate (become progressively more concentrated in the bodies of organisms up the food chain), since they are about the same size as plankton that serve as the base of the food chain. Many marine organisms have already been found with microplastics in their bodies. Studies on marine worms and oysters have found that microplastics disrupt their feeding and reproduction, causing a failure to thrive. These tiny fragments could also contaminate humans directly, as microplastics have been found in sea salt sold for human consumption.
Disturbingly, global plastic production doubles every 11 years, meaning the amount of plastic pollution will only continue to increase without drastic changes. To help battle this dire problem, be aware of your consumption of single-use plastics—it will likely shock you to realize how nearly everything comes in plastic. Reduce your consumption of these products and reuse the containers whenever possible. Avoid health and beauty products that use plastic microbeads. Buy reusable bags, straws, and glass or metal beverage containers. Buy pantry basics, like rice and beans, in bulk, and avoid putting your produce in plastic bags for the short trip home. Recycle the plastic you do use, but be aware that not every plastic can be recycled. Participate in beach, river, or lake cleanups and help raise awareness of the problem. Encourage your employer and the companies and restaurants you patronize to facilitate greener options, such as paper products over plastic disposables. Support legislation that targets plastic pollution and the fossil fuels from which plastics are made. The challenge is huge, but, like plastics themselves, small actions accumulate.
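That doubling figure is what makes the projection so stark: anything that doubles on a fixed schedule grows exponentially. A minimal sketch, assuming a clean, constant 11-year doubling time (a simplification of the real production trend):

```python
# Project production under an assumed constant 11-year doubling time.
# Figures are illustrative, not measured data.

def projected_production(current: float, years_ahead: float, doubling_time: float = 11.0) -> float:
    """Exponential growth: output doubles every `doubling_time` years."""
    return current * 2 ** (years_ahead / doubling_time)

# Starting from a nominal 100 units of annual production today:
for years in (11, 22, 33):
    print(years, projected_production(100, years))  # 200, 400, 800 units
```

Three doubling periods, about one human generation, means eight times today's output, which is why small per-person reductions compound too.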
Imagine the thrill of discovery when more than 10 years of research on the origin of a common genetic disease, cystic fibrosis (CF), results in tracing it to a group of distinct but mysterious Europeans who lived about 5,000 years ago.
CF is the most common potentially lethal inherited disease among Caucasians—about one in 40 carry the so-called F508del mutation. Typically, only beneficial mutations, which provide a survival advantage, spread widely through a population.
CF hinders the release of digestive enzymes from the pancreas, which triggers malnutrition, causes lung disease that is eventually fatal, and produces levels of salt in sweat so high they can be life-threatening.
In recent years, scientists have revealed many aspects of this deadly lung disease, which has led to routine early diagnosis in screened babies, better treatments, and longer lives. On the other hand, the scientific community hasn’t been able to figure out when, where, and why the mutation became so common. Collaborating with an extraordinary team of European scientists, including David Barton in Ireland and Milan Macek in the Czech Republic, and in particular a group of brilliant geneticists in Brest, France, led by Emmanuelle Génin and Claude Férec, we believe that we now know where and when the original mutation arose, and in which ancient tribe of people.
We share these findings in an article in the European Journal of Human Genetics that represents the culmination of 20 years’ work involving nine countries.
What is cystic fibrosis?
My quest to determine how CF arose and why it’s so common began soon after scientists discovered the CFTR gene that causes the disease in 1989. The most common disease-causing mutation of that gene was called F508del. Two copies of the mutation—one inherited from the mother and the other from the father—caused the lethal disease. But inheriting just a single copy caused no symptoms and made the person a “carrier.”
I had been employed at the University of Wisconsin since 1977 as a physician-scientist focusing on the early diagnosis of CF through newborn screening. Before the gene discovery, we identified babies at high risk for CF using a blood test that measured levels of a protein called immunoreactive trypsinogen (IRT). High levels of IRT suggested the baby had CF. When I learned of the gene discovery, I was convinced that it would be a game-changer for both screening test development and epidemiological research.
That’s because with the gene we could offer parents a more informative test. We could tell them not just whether their child had CF, but also whether the child carried two copies of a CFTR mutation, which caused disease, or just one copy, which made the child a carrier.
One might ask what the connection is between studying CF newborn screening and learning about the disease’s origin. The answer lies in how our research team in Wisconsin transformed a biochemical screening test using the IRT marker into a two-tiered method called IRT/DNA.
Because about 90 percent of CF patients in the U.S. and Europe have at least one F508del mutation, we began analyzing newborn blood for its presence whenever the IRT level was high. This two-step IRT/DNA screening identifies not only patients with the disease but also roughly ten times as many infants who are genetic carriers of it.
As preconception, prenatal, and neonatal screening for CF has proliferated during the past two decades, the many thousands of individuals who discovered they were F508del carriers, and their concerned parents, have often raised questions about the origin and significance of carrying this mutation themselves or in their children. Would they suffer with one copy? Was there a health benefit? It has been frustrating for a pediatrician specializing in CF to have no answer for them.
The challenge of finding the origin of the CF mutation
I wanted to zero in on when this genetic mutation first started appearing. Pinpointing this period would allow us to understand how it could have evolved to provide a benefit—at least initially—to those people in Europe who had it. To expand my research, I decided to take a sabbatical and train in epidemiology while taking courses in 1993 at the London School of Hygiene and Tropical Medicine.
The timing was perfect because the field of ancient DNA research was starting to blossom. New breakthrough techniques like the polymerase chain reaction (PCR) made it possible to study the DNA of mummies and other human archaeological specimens from prehistoric burials. For example, early studies were performed on the DNA of the 5,000-year-old Tyrolean Iceman, who later became known as Ötzi.
I decided that we might be able to discover the origin of CF by analyzing the DNA in the teeth of Iron Age people buried between 700 and 100 B.C. in cemeteries throughout Europe.
Using this strategy, I teamed up with archaeologists and anthropologists such as Maria Teschler-Nicola at the Natural History Museum in Vienna, who provided access to 32 skeletons buried around 350 B.C. near Vienna. Geneticists in France extracted DNA from the ancient molars and analyzed it. To our surprise, we discovered the presence of the F508del mutation in DNA from three of the 32 skeletons.
This discovery of F508del in Central European Iron Age burials radiocarbon-dated to 350 B.C. suggested to us that the original CF mutation may have arisen earlier. But obtaining Bronze Age and Neolithic specimens for such direct studies proved difficult because fewer burials are available, skeletons are not as well-preserved and each cemetery merely represents a tribe or village. So rather than depend on ancient DNA, we shifted our strategy to examine the genes of modern humans to figure out when this mutation first arose.
Why would a harmful mutation spread?
To find the origin of CF in modern patients, we knew we needed to learn more about the signature mutation—F508del—in people who are carriers or have the disease.
This tiny mutation deletes one amino acid from the 1,480-amino-acid chain of a protein on the surface of the cell that moves chloride in and out of the cell, changing the protein’s shape. People carrying two copies of the mutation—one from the mother and one from the father—are plagued with thick, sticky mucus in their lungs, pancreas and other organs. The mucus in the lungs allows bacteria to thrive, destroying the tissue and eventually causing the lungs to fail. In the pancreas, the thick secretions prevent the gland from delivering the enzymes the body needs to digest food.
So why would such a harmful mutation continue to be transmitted from generation to generation?
A mutation as harmful as F508del would never have survived among people with two copies of the mutated CFTR gene because they likely died soon after birth. On the other hand, those with one mutation may have a survival advantage, as predicted in Darwin’s “survival of the fittest” theory.
Perhaps the best example of a mutation favoring survival under stressful environmental conditions can be found in Africa, where fatal malaria has been endemic for centuries. The parasite that causes malaria infects red blood cells, whose major constituent is the oxygen-carrying protein hemoglobin. Individuals who carry only the normal hemoglobin gene are vulnerable to this mosquito-borne disease. But those who carry a single copy of the mutated “hemoglobin S” gene are protected from severe malaria. However, two copies of the hemoglobin S gene cause sickle cell disease, which can be fatal.
Here there is a clear advantage to carrying one mutant gene—in fact, about one in 10 Africans carries a single copy. Thus, for many centuries an environmental factor has favored the survival of individuals carrying a single copy of the sickle hemoglobin mutation.
Similarly we wondered whether there was a health benefit to carrying a single copy of this specific CF mutation during exposures to environmentally stressful conditions. Perhaps, we reasoned, that’s why the F508del mutation was common among Caucasian Europeans and Europe-derived populations.
Clues from modern DNA
To figure out the advantage of transmitting a single mutated F508del gene from generation to generation, we first had to determine when and where the mutation arose so that we could uncover the benefit this mutation conferred.
We obtained DNA samples from 190 CF patients bearing F508del and their parents residing in geographically distinct European populations from Ireland to Greece plus a Germany-derived population in the U.S. We then identified a collection of genetic markers—essentially sequences of DNA—within the CF gene and flanking locations on the chromosome. By identifying when these mutations emerged in the populations we studied, we were able to estimate the age of the most recent common ancestor.
Next, by rigorous computer analyses, we estimated the age of the CF mutation in each population residing in the various countries.
We then determined that the most recent common ancestor lived between 4,600 and 4,725 years ago and that the mutation arose in southwestern Europe, probably in settlements along the Atlantic Ocean, perhaps in the region of France or Portugal. We believe that the mutation spread quickly from there to Britain and Ireland, and later to central and southeastern European populations such as Greece, where F508del was introduced only about 1,000 years ago.
Who spread the CF mutation throughout Europe?
Thus, our newly published data suggest that the F508del mutation arose in the early Bronze Age and spread from west to southeast Europe during ancient migrations.
Moreover, taking the archaeological record into account, our results allow us to introduce a novel concept by suggesting that a population known as the Bell Beaker folk were the probable migrating population responsible for the early dissemination of F508del in prehistoric Europe. They appeared somewhere in Western Europe at the transition from the Late Neolithic period (around 4000 B.C.) to the Early Bronze Age (during the third millennium B.C.). They were distinguished by their ceramic beakers, their pioneering copper and bronze metallurgy north of the Alps, and their great mobility. All studies, in fact, show that they migrated extensively, traveling all over Western Europe.
Over approximately 1,000 years, a network of small families and/or elite tribes spread their culture from west to east into regions that correspond closely to the present-day European Union, where the highest incidence of CF is found. Their migrations are linked to the advent of Western and Central European metallurgy, as they manufactured and traded metal goods, especially weapons, while traveling over long distances. It is also speculated that their travels were motivated by establishing marriage networks. Most relevant to our study is evidence that they migrated in a direction and over a time period that fit well with our results. Recent genomic data suggest that both migration and cultural transmission played a major role in diffusion of the “Beaker Complex” and led to a “profound demographic transformation” of Britain and elsewhere after 2400 B.C.
Determining when F508del was first introduced in Europe and discovering where it arose should provide new insights about the high prevalence of carriers—and whether the mutation confers an evolutionary advantage. For instance, Bronze Age Europeans, while migrating extensively, were apparently spared from exposure to endemic infectious diseases or epidemics; thus, protection from an infectious disease, as in the sickle cell mutation, through this genetic mutation seems unlikely.
As more information on Bronze Age people and their practices during migrations becomes available through archaeological and genomics research, more clues about the environmental factors that favored people with this gene variant should emerge. Then we may be able to answer patients’ and parents’ questions about why they have a CFTR mutation in their family and what advantage it may confer.
This article was originally published on The Conversation. Matthew E. Baker, Professor of Geography and Environmental Systems, University of Maryland, Baltimore County
But evidence-based research that can help us identify the healthiest environments to live in is surprisingly scant. As scientists begin to tease apart the links between well-being and the environment, they are finding that many nuances contribute to and detract from the benefits offered by a certain environment – whether it be a metropolis of millions or a deserted beach.
“What we’re trying to do as a group of researchers around the world is not to promote these things willy-nilly, but to find pro and con evidence on how natural environments – and our increasing detachment from them – might be affecting health and well-being,” says Mathew White, an environmental psychologist at the University of Exeter Medical School.
White and other researchers are revealing that a seemingly countless number of factors determine how our surroundings influence us. These can include a person’s background and life circumstances, the quality and duration of exposure, and the activities performed in the environment.
Generally speaking, evidence suggests that green spaces are good for those of us who live in urban areas. Those who reside near parks or trees tend to enjoy lower levels of ambient air pollution, reduced manmade noise pollution and more cooling effects (something that will become increasingly useful as the planet warms).
Time in nature has been linked to reduced physical markers of stress. When we are out for a stroll or just sitting beneath the trees, our heart rate and blood pressure both tend to go down. We also release more natural killer cells: lymphocytes that roam throughout the body, hunting down cancerous and virus-infected cells.
Researchers are still trying to determine why this is so, although they do have a number of hypotheses. “One predominant theory is that natural spaces act as a calming backdrop to the busy stimuli of the city,” says Amber Pearson, a health geographer at Michigan State University. “From an evolutionary perspective, we also associate natural things as key resources for survival, so we favour them.”
This does not necessarily mean that urban denizens should all move to the countryside, however.
City residents tend to suffer from higher levels of asthma, allergies and depression. But they also tend to be less obese, at a lower risk of suicide and are less likely to get killed in an accident. They lead happier lives as seniors and live longer in general.
In other cases, rural pollution poses a major threat. In India, air pollution contributed to the deaths of 1.1 million citizens in 2015 – with rural residents rather than urban ones accounting for 75% of the victims. This is primarily because countryside dwellers are at greater risk of breathing air that is polluted by burning of agricultural fields, wood or cow dung (used for cooking fuel and heat).
Indonesia’s slash and burn-style land clearing likewise causes a blanket of toxic haze that lasts for months and sometimes affects neighbouring countries, including Singapore, Malaysia and Thailand. Meanwhile, smoke pollution from fires lit in South America and southern Africa has been known to make its way around the entire southern hemisphere. (That said, the air in the southern hemisphere is generally cleaner than in the northern hemisphere – simply because there are fewer people living there).
What about that idea of pure mountain air? It’s true that black carbon aerosols and particulate matter pollution tend to be lower at higher altitudes. But trying to move above air pollution may cause other issues.
While people who live in places 2,500m or higher seem to have lower mortality from cardiovascular disease, stroke and some types of cancers, data indicate that they also seem to be at an elevated risk of death from chronic pulmonary disease and from lower respiratory tract infections. This is likely at least in part because cars and other vehicles operate less efficiently at higher altitudes, emitting greater amounts of hydrocarbons and carbon monoxide – which is made even more harmful by the increased solar radiation in such places. Living at a moderate altitude of 1,500 to 2,500 meters, therefore, may be the healthiest choice.
On the other hand, there is a strong argument to be made for living near the sea – or at least near some body of water. Those in the UK who live closer to the ocean, for example, tend to be healthier than those who live inland, after taking into account their age and socioeconomic status. This is likely due to a variety of reasons, White says, including the fact that our evolutionary history draws us to the high levels of biodiversity found there (in the past, this would have been a helpful indicator of food sources) and that beaches offer opportunities for daily exercise and vitamin D.
Then there are the psychological benefits. A 2016 study Pearson and her colleagues conducted in Wellington, New Zealand found that residents with ocean views had lower levels of psychological distress. For every 10% increase in how much blue space people could see, the researchers found a one-third point reduction in the population’s average Kessler Psychological Distress Scale (used to predict anxiety and mood disorders), independent of socioeconomic status. Given that finding, Pearson says, “One might expect that a 20 to 30% increase in blue space visibility could shift someone from moderate distress into a lower category.” Pearson found similar results in a follow-up study conducted near the Great Lakes in the US (currently in review), as did White in an upcoming study of Hong Kong residents.
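Taken at face value, the Wellington result is a simple linear relationship: about one-third of a Kessler point per 10 percent of added blue-space visibility. The arithmetic behind extrapolations like Pearson’s can be sketched as follows (the linearity and the exact slope are illustrative assumptions here, not the study’s statistical model):

```python
# Illustrative arithmetic only: treats the reported effect as linear in visibility.
POINTS_PER_10_PCT = 1 / 3  # reported drop in mean Kessler score per 10% more blue space

def estimated_kessler_drop(visibility_increase_pct: float) -> float:
    """Estimated reduction in the population's mean Kessler distress score."""
    return (visibility_increase_pct / 10) * POINTS_PER_10_PCT

for pct in (10, 20, 30):
    print(pct, round(estimated_kessler_drop(pct), 2))
```

Under these assumptions, a 30 percent visibility increase corresponds to a full point on the scale, which is the scale of shift Pearson describes as potentially moving someone into a lower distress category.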
The team’s second analysis of nearly 200 recently redeveloped water sites will allow them to tease out how factors such as climate, weather, pollution levels, smells, seasonality, safety and security, accessibility and more, influence a given water body’s appeal. The ultimate goal, Bell says, is to find “what makes a great blue space.” Once the results are in, he and his colleagues will develop a quality assessment tool for those looking to most effectively restore urban canals, overgrown lakes, former docklands, rivers and other neglected blue spaces to make residents’ lives better.
“There might be a million other important things besides weather and daylight that influence someone in Hawaii versus Finland,” White says.
In terms of health, data also suggest that, counterintuitively, people who live in intermittently rather than regularly sunny places – Vermont and Minnesota in the US, for example, and Denmark and France – tend to have higher rates of skin cancer, likely because sunscreen is not part of their daily routines.
Just as some green and blue spaces may be more beneficial than others, researchers are also coming to realize that the environment’s influence on well-being is not evenly distributed.
People living in lower socioeconomic conditions tend to derive more benefits from natural spaces than wealthy residents, White says. That’s likely because richer people enjoy other health-improving privileges, such as taking holidays and leading generally less stressful lives – a finding with important real-world implications. “Here in the UK, local authorities have a legal obligation to reduce health inequalities. So one way to do that is to improve the park system,” White says. “The poorest will benefit the most.”
Bell adds that proximity to nature actually tends to rank low on people’s lists of the most important factors for selecting a place to live, after things like safety, quietness and closeness to key locations like schools and work. But while the benefits of green and blue spaces should not be overplayed on an individual level, they are important at the population scale on which they work.
And even so, one takeaway seems obvious: those living in a clean, oceanside city with ready access to nature – think Sydney or Wellington – may have struck the jackpot in terms of the healthiest places to live.