
A Trillion Tonnes of Antarctica Fell into the Sea

In late August 2016, sunlight returned to the Antarctic Peninsula and unveiled a rift across the Larsen C Ice Shelf that had grown longer and deeper over the austral winter. (NASA/John Sonntag)

Antarctica, Earth’s coldest continent, is known for its remoteness, its unique fauna, and its frigid surface of ice. Around Antarctica’s periphery, dozens of ice shelves (that is, masses of glacier-fed floating ice that are attached to land) project outward into the Southern Ocean. The two largest ice shelves, the Ross Ice Shelf and the Ronne Ice Shelf, span a combined area of nearly 900,000 square km (about 350,000 square miles)—an area roughly equivalent to Venezuela—but Antarctica’s Larsen Ice Shelf, the continent’s fourth largest, has received the bulk of the attention over the last 25 years because it is slowly coming apart. The latest episode in this saga occurred between July 10 and July 12, 2017, when a one-trillion-metric-ton chunk of ice—possibly critical to holding back a large section of the remaining shelf—calved (that is, broke away).

The Larsen Ice Shelf is located on the eastern side of the Antarctic Peninsula and juts out into the Weddell Sea. It originally covered an area of 86,000 square km (33,000 square miles), but its footprint has declined dramatically, possibly as a result of warming air temperatures over the Antarctic Peninsula during the second half of the 20th century. In January 1995 the northern portion (known as Larsen A) disintegrated, and a giant iceberg calved from the middle section (Larsen B). Larsen B steadily retreated until February–March 2002, when it too collapsed and disintegrated. The southern portion (Larsen C) made up two-thirds of the ice shelf’s original extent, covering an area of about 50,000 square km (19,300 square miles) alone. Its thickness ranges from 200 to 600 meters (about 660 to 1,970 feet). Sometime between July 10 and July 12, 2017, a 5,800-square-km (about 2,240-square-mile) section—some 12% of Larsen C—broke away. Signs of Larsen C’s impending fracture date back to 2012, when satellite monitoring detected a steadily growing crack near the Joerg Peninsula at the southern end of the shelf. NASA and ESA satellites tracked the rift as it grew to more than 200 km (124 miles) in length and the huge iceberg separated from the continent.
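A quick back-of-envelope check makes the trillion-tonne figure plausible. The mean thickness and ice density below are assumptions for illustration (the article gives only the shelf's 200–600-meter thickness range), not measurements of the calved berg:

```python
# Rough mass estimate for the calved section of Larsen C.
# Assumed values: mean thickness ~190 m (the shelf thins toward the
# ice front) and a glacial-ice density of ~917 kg per cubic meter.
area_m2 = 5800 * 1e6          # 5,800 square km in square meters
thickness_m = 190             # assumed mean thickness of the berg
density_kg_m3 = 917           # typical density of glacial ice

mass_tonnes = area_m2 * thickness_m * density_kg_m3 / 1000
print(f"{mass_tonnes:.2e} tonnes")  # ~1.0e12, i.e., about one trillion
```

With these inputs the estimate lands within a few percent of one trillion metric tons, which is why the reported area and mass are mutually consistent.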

Although some 88% of Larsen C remains, many scientists worry that it will fall apart like Larsen A and Larsen B, because the loss of such a huge area of the shelf’s ice front may make the remainder of the ice shelf less stable. The shelf’s mass, along with the fact that it is pinned behind shallow undersea outcrops of rock below, creates a natural dam that significantly slows the flow of the ice into the Weddell Sea. Scientists note that the section that calved was not held back by rock, so they are less worried that the loss of the calved section will result in the shelf’s wholesale disintegration in the near term. Some scientists even concede that the calved area could regrow to form a new ice dam that reinforces the shelf. However, the results of ice-calving and glacier-flow models predict that the shelf will continue to break apart over the course of years and decades.

Calving is a natural process driven, in part, by seasonal changes in temperature and the pressures associated with the build-up of compressional stress on the ice. Some studies argue that spring and summer foehns (warm dry gusty winds that periodically descend the leeward slopes of mountain ranges) have also contributed to the weakening of the ice. As investigations into ice shelf dynamics continue, such large iceberg calving events are often regarded as symptoms of climate change associated with global warming. While global warming may turn out to play a part in ice shelf calving events, scientists disagree on the role, if any, the phenomenon has played in recent developments on Larsen C.

Disintegration of the Larsen Ice Shelf. The map shows the section of Larsen C that calved in July 2017. (Encyclopædia Britannica, Inc.)

How Service Animals Help Humans Live Fuller Lives

A guide dog helps a blind man navigate the city. (© Stieber/Shutterstock.com)

An earlier version of this article was published on the Britannica blog Advocacy for Animals.

The partnership between humans and animals dates back to the first domestication of animals in the Stone Age, as long as 9,000 years ago. But never have animals provided such dedicated and particular help to humans as they do today in the form of trained service, or assistance, to people with disabilities. These animals, usually dogs, help people accomplish tasks that would otherwise be prohibitively difficult or simply impossible. Service animals are not pets but working animals doing a job. Thus, legislation—such as the Americans with Disabilities Act (1990) in the United States and the Disability Discrimination Act (1995) in the United Kingdom—makes service animals exempt from rules that prohibit animals from public places and businesses.

The most familiar service animals are guide dogs who help visually impaired people move about safely. Systematic training of guide dogs originated in Germany during World War I to aid blinded veterans. In the late 1920s Dorothy Harrison Eustis, an American dog trainer living in Switzerland, heard of the program and wrote a magazine article about it. The publicity led her to her first student, Morris Frank, with whose help she established a similar training school in the United States in 1929, the Seeing Eye (now located in Morristown, New Jersey).

Puppies are often bred for the purpose by the various organizations that train guide dogs. German shepherds, Labrador retrievers, and Labrador-golden retriever crosses are the most widely used breeds because of their calm temperaments, intelligence, natural desire to be helpful, and good constitutions. Puppies spend their first year with foster families who socialize them and prepare them for later training by teaching them basic obedience skills. At the age of approximately 18 months, guide dogs enter formal training, which lasts from about three to five months. During this period the dogs learn to adjust to a harness, stop at curbs, gauge the human partner’s height when traveling in low or obstructed places, and disobey a command when obedience will endanger the person.

In recent years, hearing dogs have become increasingly common. These dogs, usually mixed-breed rescues from animal shelters, are trained to alert their human partners to ordinary sounds, such as an alarm clock, a baby’s cry, or a telephone. The dogs raise the alert by touching the partner with a paw and then leading him or her to the source of the sound. They are also trained to recognize danger signals—such as fire alarms and sounds of intruders—and to raise the alert by touching with a paw and then lying down in a special “alert” posture, at which time the human partner can take appropriate action.

Dogs can be trained for a great variety of assistance purposes. For example, Service Dogs for America (SDA)/Great Plains Assistance Dogs Foundation, Inc., trains several categories of assistance animals, including service dogs who help people who use wheelchairs and other mobility devices; hearing dogs; seizure-alert or seizure-response dogs, who help persons with seizure disorders by activating an electronic alert system when symptoms occur (some can even predict the onset of a seizure); and therapeutic companion dogs, who provide emotional support for people in hospices, hospitals, and other situations in which loneliness and lack of stimulation are continual problems. There are many programs that train and certify pet animals, especially dogs and cats, as therapy animals who visit such institutions and bring much-welcomed companionship to patients.

Animals are also used in programs such as animal-assisted therapy (AAT). In the words of the Australia-based Delta Society, AAT is a “goal-directed intervention” that utilizes the motivating and rewarding presence of animals, facilitated by trained human professionals, to help patients make cognitive and physical improvements. For example, an elderly patient in a nursing home might be given the task of buckling a dog’s collar or feeding small treats to a cat, activities that enhance fine motor skills. Goals are set for the patients, and their progress is measured.

Dogs and cats are not the only animals who can assist humans with disabilities. Capuchin monkeys—small, quick, and intelligent—can help people who are paralyzed or have other severe impairments to their mobility, such as multiple sclerosis. These monkeys perform essential tasks such as turning on lights and picking up dropped objects. One of the more unusual assistance animals is the guide horse. An experimental program in the United States trains miniature horses to guide the visually impaired in the same way that guide dogs do. The tiny horses may be an alternative for people who are allergic to dogs or who have equestrian backgrounds and are more comfortable with horses.

Certain dogs and other animals have special skills similar to those of the seizure-assistance dogs, such as the ability to detect a diabetic’s drop in blood sugar and alert the person before danger occurs. The sometimes uncanny natural abilities of animals can benefit humans in many ways. Reputable organizations that train assistance animals also take steps to ensure that the animals are cherished and lead rewarding, enjoyable, and healthy lives. When the animals’ helping careers are over, provision is made for their well-deserved retirement.

Plastic Disaster: How Your Bags, Bottles, and Body Wash Pollute the Oceans

Environmental problem of plastic rubbish pollution in ocean
© Rich Carey/Shutterstock.com

Plastic is cheap and durable and has revolutionized human activity. Modern life is addicted to and dependent on this versatile substance, which is found in everything from computers to medical equipment to food packaging. Unfortunately, an estimated 19 billion pounds (more than 8.5 million metric tons) of plastic waste ends up in our oceans every year. Much of this plastic comes from single-use packaging, such as soda bottles and produce bags, and from other single-use products such as straws and disposable diapers. One study suggested that by the year 2050 there will be more plastic by weight in the oceans than fish!

Plastic pollution is more than unsightly. It has a deadly and direct effect on wildlife. Many marine organisms get physically entangled in plastic trash and either drown or slowly starve to death. Others eat the plastics, mistaking the ubiquitous materials for food. Leatherback sea turtles often mistake plastic bags for their jellyfish prey and asphyxiate. Seabirds, especially albatrosses, and other birds that scoop food from the sea have been found dead on their nests, their bellies too full of plastics to survive. A recent study found plastic trash in 90 percent of seabirds, with pieces ranging from bottle caps to rice-sized fragments that look like seeds.

Perhaps even more worrisome is microplastic pollution. The vast majority of plastics are not biodegradable, meaning they break down into smaller and smaller particles but never leave the environment entirely. Pieces smaller than 5 mm (0.2 inch) are classified as microplastics, and it is estimated that a significant portion of all plastic pollution in the oceans is now in this category. Microplastics also come from cosmetics, body washes, and toothpastes, which use tiny pieces of plastics as exfoliants and abrasives, and from items of synthetic clothing that shed minute fibers each time they are washed. These particles and fibers are too small for waste management systems to filter and are directly discharged into the oceans. There is concern that these microplastics and/or the endocrine-disrupting chemicals they contain will bioaccumulate (become progressively more concentrated in the bodies of organisms up the food chain), since they are about the same size as plankton that serve as the base of the food chain. Many marine organisms have already been found with microplastics in their bodies. Studies on marine worms and oysters have found that microplastics disrupt their feeding and reproduction, causing a failure to thrive. These tiny fragments could also contaminate humans directly, as microplastics have been found in sea salt sold for human consumption.

Disturbingly, global plastic production doubles every 11 years, meaning the amount of plastic pollution will only continue to increase without drastic changes. To help battle this dire problem, be aware of your consumption of single-use plastics—it will likely shock you to realize how seemingly everything comes in plastic. Reduce your consumption of these products and reuse the containers whenever possible. Avoid health and beauty products that use plastic microbeads. Buy reusable bags, straws, and glass or metal beverage containers. Buy pantry basics, like rice and beans, in bulk, and avoid putting your produce in plastic bags for the short trip home. Recycle the plastic you do use, but be aware that not every plastic can be recycled. Participate in beach, river, or lake cleanups and help raise awareness of the problem. Encourage your employer and the companies and restaurants you patronize to facilitate greener options, such as paper products over plastic disposables. Support legislation that targets plastic pollution and the fossil fuels from which plastics are made. The challenge is huge, but, like plastics themselves, small actions accumulate.
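An 11-year doubling time compounds quickly. A short sketch of the arithmetic (the doubling time is the only input taken from the text; the horizons chosen are arbitrary):

```python
# Exponential growth from a fixed doubling time: factor = 2**(years / 11).
def production_multiple(years, doubling_time=11.0):
    """How many times larger production is after `years` years."""
    return 2 ** (years / doubling_time)

print(production_multiple(11))   # 2.0 (one doubling)
print(production_multiple(33))   # 8.0 (three doublings)
print(production_multiple(55))   # 32.0 (five doublings, roughly two generations)
```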

How Did the Sperm Whale Get Its Name?

Sperm whale

Sperm whales (Physeter catodon), or cachalots, are the largest of the toothed whales, with males up to 19 meters (62 feet) long—more than five times the length of a large elephant—and females up to 12 meters (39 feet) in length. They are easily recognized by their enormous square head and narrow lower jaw. Probably the most famous sperm whale was Moby Dick, the great white whale from Herman Melville’s classic novel of the same name. (As far as we can tell, Moby Dick was the only sperm whale that delivered a unique brand of karmic justice to one-legged sea captains bent on vengeance.) Despite the public’s passing familiarity with sperm whales, many people have wondered why they are so named. Are they called sperm whales because their body shape is similar to that of male sex cells, or is there another reason?

The whale’s common name originated during the heyday of the commercial whaling industry, from the end of the 18th century through the 19th century. The head of the sperm whale contains an enormous fluid-filled organ (which whalers called the case). During whale harvests, this organ, now called the spermaceti organ, was discovered to contain a white liquid that whalers mistook for the sperm of the whale. The spermaceti organ is unique to sperm whales, although bottlenose whales possess a similar organ. It has a volume as large as 2,000 liters (530 gallons) and can extend through 40 percent of the whale’s length.

Whalers valued spermaceti (the name of the material within the spermaceti organ) because it could be cooled into a wax that could be made into ointments, cosmetic creams, fine wax candles, pomades, textile finishing products, and industrial lubricants. The whale’s spermaceti organ and blubber also hold sperm oil, a pale yellow oil that was used as a superior lighting oil and later as a lubricant and in soap manufacturing.

WRITTEN BY:  John P. Rafferty 

Tracking Down the Origins of Cystic Fibrosis in Ancient Europe

The airways inside the human lung. (Magic mine/Shutterstock.com)

Imagine the thrill of discovery when more than 10 years of research on the origin of a common genetic disease, cystic fibrosis (CF), results in tracing it to a group of distinct but mysterious Europeans who lived about 5,000 years ago.

CF is the most common, potentially lethal, inherited disease among Caucasians—about one in 40 carry the so-called F508del mutation. Typically only beneficial mutations, which provide a survival advantage, spread widely through a population.
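The quoted carrier frequency implies, under simple Hardy-Weinberg assumptions (random mating, the F508del mutation considered in isolation), a rough birth incidence for the two-copy disease. This is an illustrative calculation, not a figure from the article:

```python
# Hardy-Weinberg sketch: if 1 in 40 people carry one F508del copy,
# the carrier frequency 2*p*q is ~1/40 with p close to 1, so the
# allele frequency q is ~1/80 and the affected fraction is q**2.
carrier_freq = 1 / 40
allele_freq = carrier_freq / 2        # q, roughly 1/80
affected_freq = allele_freq ** 2      # q**2, roughly 1/6,400 births
print(f"about 1 in {round(1 / affected_freq):,}")  # about 1 in 6,400
```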

CF hinders the release of digestive enzymes from the pancreas, triggering malnutrition; causes lung disease that is eventually fatal; and produces high levels of salt in sweat that can be life-threatening.

CF Symptom Diagram
Depending on the mutation a patient carries, they may experience some or all symptoms of cystic fibrosis. (Blausen.com staff (2014), CC BY-SA)

In recent years, scientists have revealed many aspects of this deadly lung disease, leading to routine early diagnosis in screened babies, better treatments, and longer lives. On the other hand, the scientific community hasn’t been able to figure out when, where, and why the mutation became so common. Collaborating with an extraordinary team of European scientists, including David Barton in Ireland and Milan Macek in the Czech Republic and, in particular, a group of brilliant geneticists in Brest, France, led by Emmanuelle Génin and Claude Férec, we believe that we now know where and when the original mutation arose and in which ancient tribe of people.

We share these findings in an article in the European Journal of Human Genetics which represents the culmination of 20 years’ work involving nine countries.

What is cystic fibrosis?

My quest to determine how CF arose and why it’s so common began soon after scientists discovered the CFTR gene causing the disease in 1989. The most common mutation of that gene that causes the disease was called F508del. Two copies of the mutation—one inherited from the mother and the other from the father—caused the lethal disease. But, inheriting just a single copy caused no symptoms, and made the person a “carrier.”

I had been employed at the University of Wisconsin since 1977 as a physician-scientist focusing on the early diagnosis of CF through newborn screening. Before the gene discovery, we identified babies at high risk for CF using a blood test that measured levels of a protein called immunoreactive trypsinogen (IRT). High levels of IRT suggested the baby had CF. When I learned of the gene discovery, I was convinced that it would be a game-changer for both screening test development and epidemiological research.

That’s because with the gene we could offer parents a more informative test. We could tell them not just whether their child had CF, but also whether they carried two copies of a CFTR mutation, which caused disease, or just one copy which made them a carrier.

CF Mutation
Parents carrying one good copy of the CF gene (R) and one bad copy of the mutated CF gene (r) are called carriers. When both parents transmit a bad copy of the CF gene to their offspring, the child will suffer from cystic fibrosis. Children who inherit just one bad copy will be carriers like their parents and can transmit the gene to their children. (Cburnett, CC BY-SA)

One might ask what is the connection between studying CF newborn screening and learning about the disease origin. The answer lies in how our research team in Wisconsin transformed a biochemical screening test using the IRT marker to a two-tiered method called IRT/DNA.

Because about 90 percent of CF patients in the U.S. and Europe have at least one F508del mutation, we began analyzing newborn blood for its presence whenever the IRT level was high. When this two-step IRT/DNA screening is done, not only are patients with the disease diagnosed, but roughly tenfold more infants who are genetic carriers of the disease are identified as well.
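The two-tier logic can be sketched as a small decision function. The cutoff value and result labels here are hypothetical placeholders, not the Wisconsin program's actual parameters; real programs set percentile-based IRT cutoffs and follow local confirmatory protocols:

```python
IRT_CUTOFF = 60.0  # hypothetical cutoff; real programs use percentile-based values

def irt_dna_screen(irt_level, f508del_copies):
    """Two-tier IRT/DNA screening: DNA is examined only when IRT is elevated."""
    if irt_level < IRT_CUTOFF:
        return "screen negative"
    if f508del_copies == 2:
        return "presumptive CF: refer for confirmatory testing"
    if f508del_copies == 1:
        return "carrier or CF: refer for further testing"
    return "elevated IRT, no F508del: follow local protocol"

print(irt_dna_screen(30.0, 0))  # screen negative (DNA result not consulted)
print(irt_dna_screen(85.0, 1))  # carrier or CF: refer for further testing
```

The structure shows why carriers surface as a byproduct: any elevated-IRT infant with one detectable F508del copy is flagged, even though most turn out to be healthy carriers.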

As preconception, prenatal, and neonatal screening for CF have proliferated during the past two decades, the many thousands of individuals who discovered they were F508del carriers and their concerned parents often raised questions about the origin and significance of carrying this mutation themselves or in their children. Would they suffer with one copy? Was there a health benefit? It has been frustrating for a pediatrician specializing in CF to have no answer for them.

The challenge of finding origin of the CF mutation

I wanted to zero in on when this genetic mutation first started appearing. Pinpointing this period would allow us to understand how it could have evolved to provide a benefit—at least initially—to those people in Europe who had it. To expand my research, I decided to take a sabbatical and train in epidemiology while taking courses in 1993 at the London School of Hygiene and Tropical Medicine.

The timing was perfect because the field of ancient DNA research was starting to blossom. New breakthrough techniques like the Polymerase Chain Reaction made it possible to study the DNA of mummies and other human archaeological specimens from prehistoric burials. For example, early studies were performed on the DNA from the 5,000-year-old Tyrolean Iceman, which later became known as Ötzi.

Ancient Burial
A typical prehistoric burial in a crouched fetal position. (Philip Farrell, CC BY-SA)

I decided that we might be able to discover the origin of CF by analyzing the DNA in the teeth of Iron Age people buried between 700 and 100 B.C. in cemeteries throughout Europe.

Using this strategy, I teamed up with archaeologists and anthropologists such as Maria Teschler-Nicola at the Natural History Museum in Vienna, who provided access to 32 skeletons buried around 350 B.C. near Vienna. Geneticists in France collected DNA from the ancient molars and analyzed the DNA. To our surprise, we discovered the presence of the F508del mutation in DNA from three of 32 skeletons.

This discovery of F508del in Central European Iron Age burials radiocarbon-dated to 350 B.C. suggested to us that the original CF mutation may have arisen earlier. But obtaining Bronze Age and Neolithic specimens for such direct studies proved difficult because fewer burials are available, skeletons are not as well-preserved and each cemetery merely represents a tribe or village. So rather than depend on ancient DNA, we shifted our strategy to examine the genes of modern humans to figure out when this mutation first arose.

Why would a harmful mutation spread?

To find the origin of CF in modern patients, we knew we needed to learn more about the signature mutation—F508del—in people who are carriers or have the disease.

This tiny mutation causes loss of one amino acid out of the 1,480 amino acid chain and changes the shape of a protein on the surface of the cell that moves chloride in and out of the cell. When this protein is mutated, people carrying two copies of it—one from the mother and one from the father—are plagued with thick sticky mucus in their lungs, pancreas and other organs. The mucus in their lungs allows bacteria to thrive, destroying the tissue and eventually causing the lungs to fail. In the pancreas, the thick secretions prevent the gland from delivering the enzymes the body needs to digest food.

So why would such a harmful mutation continue to be transmitted from generation to generation?

Iron and Bronze Age Teeth and Bones
The Natural History Museum in Vienna, Austria, houses a large collection of Iron Age and Bronze Age skeletons which are curated by Dr. Maria Teschler-Nicola. These collections were the source of teeth and bones for investigation of ancient DNA and studies on ‘The Ancient Origin of Cystic Fibrosis.’ (Philip Farrell, CC BY-ND)

A mutation as harmful as F508del would never have survived among people with two copies of the mutated CFTR gene because they likely died soon after birth. On the other hand, those with one mutation may have a survival advantage, as predicted in Darwin’s “survival of the fittest” theory.

Perhaps the best example of a mutation favoring survival under stressful environmental conditions can be found in Africa, where fatal malaria has been endemic for centuries. The parasite that causes malaria infects the red blood cells, in which the major constituent is the oxygen-carrying protein hemoglobin. Individuals who carry the normal hemoglobin gene are vulnerable to this mosquito-borne disease. But those who are carriers of the mutated “hemoglobin S” gene, with only one copy, are protected from severe malaria. However, two copies of the hemoglobin S gene cause sickle cell disease, which can be fatal.

Here there is a clear advantage to carrying one mutant gene—in fact, about one in 10 Africans carries a single copy. Thus, for many centuries an environmental factor has favored the survival of individuals carrying a single copy of the sickle hemoglobin mutation.

Sickle Cell Gene
Individuals who carry two copies of the sickle cell gene suffer from sickle cell anemia, in which the blood cells become rigid sickle shapes and get stuck in the blood vessels, causing pain. Normal red blood cells are flexible discs that slide easily through vessels. (Designua/Shutterstock.com)

Similarly we wondered whether there was a health benefit to carrying a single copy of this specific CF mutation during exposures to environmentally stressful conditions. Perhaps, we reasoned, that’s why the F508del mutation was common among Caucasian Europeans and Europe-derived populations.

Clues from modern DNA

To figure out the advantage of transmitting a single mutated F508del gene from generation to generation, we first had to determine when and where the mutation arose so that we could uncover the benefit this mutation conferred.

We obtained DNA samples from 190 CF patients bearing F508del and their parents residing in geographically distinct European populations from Ireland to Greece plus a Germany-derived population in the U.S. We then identified a collection of genetic markers—essentially sequences of DNA—within the CF gene and flanking locations on the chromosome. By identifying when these mutations emerged in the populations we studied, we were able to estimate the age of the most recent common ancestor.

Next, by rigorous computer analyses, we estimated the age of the CF mutation in each population residing in the various countries.
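One standard way to date a mutation from modern DNA is to model how recombination gradually erodes the marker alleles that flanked the original mutant chromosome. The sketch below is a simplified moment estimator with invented toy numbers; it illustrates the idea, not the study's actual method or data:

```python
import math

def allele_age_generations(ancestral_fraction, recomb_fraction):
    """If a fraction x of mutation-bearing chromosomes still carries the
    ancestral allele at a linked marker, and c is the per-generation
    recombination fraction between marker and mutation, then the expected
    retention is roughly (1 - c)**g, so g = ln(x) / ln(1 - c)."""
    return math.log(ancestral_fraction) / math.log(1.0 - recomb_fraction)

# Toy inputs (not the study's data): 70% of F508del chromosomes retain the
# ancestral allele at a marker with c = 0.002 (about 0.2 centimorgans away).
g = allele_age_generations(0.70, 0.002)
print(f"~{g:.0f} generations, ~{g * 25:.0f} years at 25 years per generation")
```

With these invented inputs the estimate happens to land in the same few-thousand-year range as the published result, which shows how sensitive such dating is to the assumed retention fraction, recombination distance, and generation time.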

Sickle Cell and Malaria
Two copies of the sickle cell gene cause the disease. But carrying one copy reduces the risk of malaria. The gene is widespread among people who live in regions of the world (red) where malaria is endemic. (ellepigrafica)

We then determined that the most recent common ancestor lived between 4,600 and 4,725 years ago and that the mutation arose in southwestern Europe, probably in settlements along the Atlantic Ocean, perhaps in the region of France or Portugal. We believe that the mutation spread quickly from there to Britain and Ireland, and then later to central and southeastern European populations such as Greece, where F508del was introduced only about 1,000 years ago.

Who spread the CF mutation throughout Europe?

Thus, our newly published data suggest that the F508del mutation arose in the early Bronze Age and spread from west to southeast Europe during ancient migrations.

Moreover, taking the archaeological record into account, our results allow us to introduce a novel concept by suggesting that a population known as the Bell Beaker folk were the probable migrating population responsible for the early dissemination of F508del in prehistoric Europe. They appeared somewhere in Western Europe at the transition from the Late Neolithic period to the Early Bronze Age, during the third millennium B.C. They were distinguished by their ceramic beakers, their pioneering copper and bronze metallurgy north of the Alps, and their great mobility. All studies, in fact, show that they migrated extensively, traveling all over Western Europe.

Bell Beaker Sites
Distribution of Bell Beaker sites throughout Europe. (DieKraft via Wikimedia Commons)

Over approximately 1,000 years, a network of small families and/or elite tribes spread their culture from west to east into regions that correspond closely to the present-day European Union, where the highest incidence of CF is found. Their migrations are linked to the advent of Western and Central European metallurgy, as they manufactured and traded metal goods, especially weapons, while traveling over long distances. It is also speculated that their travels were motivated by establishing marriage networks. Most relevant to our study is evidence that they migrated in a direction and over a time period that fit well with our results. Recent genomic data suggest that both migration and cultural transmission played a major role in diffusion of the “Beaker Complex” and led to a “profound demographic transformation” of Britain and elsewhere after 2400 B.C.

Determining when F508del was first introduced in Europe and discovering where it arose should provide new insights about the high prevalence of carriers—and whether the mutation confers an evolutionary advantage. For instance, Bronze Age Europeans, while migrating extensively, were apparently spared from exposure to endemic infectious diseases or epidemics; thus, protection from an infectious disease, as in the sickle cell mutation, through this genetic mutation seems unlikely.

As more information on Bronze Age people and their practices during migrations become available through archaeological and genomics research, more clues about environmental factors that favored people who had this gene variant should emerge. Then, we may be able to answer questions from patients and parents about why they have a CFTR mutation in their family and what advantage this endows.

Bell Beaker Artifacts
Examples of tools and ceramics created by the Bell Beaker people. (Benutzer:Thomas Ihle via German Wikipedia, CC BY-SA) 

This article was originally published on The Conversation. Matthew E. Baker, Professor of Geography and Environmental Systems, University of Maryland, Baltimore County

God vs. the Gods – The First Known Instance of Monotheism in History


In the long line of pharaohs of the dynasties of ancient Egypt, Akhenaten was unique. Yet until recently, almost nothing was known about him. Akhenaten lived during the 14th century BC, and his reign lasted 17 years. Evidence of his existence was discovered only in the late 19th century.

The future king of Egypt was originally named Amenhotep IV, son of pharaoh Amenhotep III and Queen Tiye. He was not first in line to the throne but his older brother died at a young age. Some scholars believe that the young prince was shunned as a child, as he never appeared in family portraits. He later married the well-known Queen Nefertiti.

Once on the throne, Akhenaten made revolutionary changes to Egyptian life. He banished worship of Egypt’s many gods, including Amun-Ra, popular among the priestly class. Instead, only one deity, the sun disk god Aten, was to be recognized as the Supreme Being. Akhenaten considered himself a direct descendant of Aten.

Worship of Aten may have been the first known movement away from polytheism toward monotheism. Psychoanalyst Sigmund Freud once suggested that Moses may have been a priest to the cult of Aten, who later fled Egypt with his followers to maintain their beliefs after the death of Akhenaten.

After changing his name to Akhenaten, the pharaoh ordered grand monuments built for Aten in the Egyptian capital, Thebes. Temples were reoriented toward the east, where the sun rose each day. Icons for other Egyptian gods were removed.

Akhenaten then had a new city built in honor of his god. Two years later, he moved the royal palace there. The new city was located at modern day Amarna and was filled with up to 10,000 people. The population included priests to the sun god, merchants, builders, and traders. Akhenaten lived here for ten years until his death.

Along with statues, there were a number of sculptures portraying the royal family, as was common for the pharaohs of ancient Egypt. Almost all previous royal portraits depicted the king and queen as rigid and serious, wearing the royal insignia, their bodies perfectly shaped and muscular. They look like gods themselves.

Not Akhenaten though. His face looks stretched. The nose is narrow and the chin is pointy. He has large lips and broad hips. A pot belly oozes over his waist. Why does Akhenaten look so different from other sculptures of the period?

One theory is that the king may have suffered some sort of ailment. One of the possibilities is that he had Marfan’s Syndrome, a genetic disorder that affects the body’s connective tissue. Some of the possible symptoms include a tall and thin body type, long arms, legs, and fingers, as well as curvature of the spine.

Yet Akhenaten and his family look like real people with physical flaws. The effect is timeless: the images reach out to us across the centuries. In one stone relief, the sun god Aten’s light shines down on Akhenaten, Nefertiti, and some of their children.

The pharaoh is holding one child in his arms, giving her a kiss. Nefertiti is holding two younger kids, one child reaching for the queen’s jewelry. It’s a scene that might look like any contemporary family.

It appears Akhenaten’s rule was not popular, both within the kingdom and beyond. Correspondences from foreign rulers allied to Egypt describe frustration with Akhenaten’s lack of military and financial support. Egyptian power and influence declined during the king’s reign.

Akhenaten’s religious reforms did not outlive him. Almost immediately after his death, the priestly elites of Amun-Ra and the other gods regained their influence. Statues and references to Akhenaten and Aten were removed. Akhenaten’s name was erased from official royal lists.

His temples were destroyed and the material was reused for new building projects. The city at Amarna was abandoned — even the mummified body of Akhenaten was removed from his tomb, never to be seen again.

Akhenaten’s successor was one of his sons, King Tutankhamun, also known as King Tut. He is more famous today than his father because his tomb was discovered mostly intact by archaeologists in the early 20th century.

As his name suggests, Tutankhamun embraced the old deity of Amun-Ra and the traditional ways of ancient Egypt. During his short reign, King Tut mostly turned away from his father’s legacy, the heretic pharaoh, Akhenaten.

 Mark Shiffer
