ReportWire


ReportWire publishes the latest breaking U.S. and world news, trending topics, and developing stories from around the globe.

  • What’s the Difference Between Scotch, Whiskey, and Bourbon?


    This might be common knowledge for some, but it’s worth a refresher before you go out and buy a bottle. Let’s start with the basics.

    What makes a whiskey a bourbon? The quick answer is the law. Making bourbon is an exceedingly technical exercise that requires the whiskey to meet rigid criteria. The federal Standards of Identity stipulate what is and what isn’t bourbon. For a whiskey to call itself bourbon, its mash—the mixture of grains from which the product is distilled—must contain at least 51 percent corn. The rest of the mash is usually filled out with malted barley and either rye or wheat.

    Then, the mash must be distilled at 160 proof or less and put into the barrel at 125 proof or less. It also can’t contain any additives. The distillate needs to be aged in a new charred oak barrel. If you distill a whiskey in your kitchen that meets all of these standards, congrats—you’ve made bourbon. Also, you’ve broken the law. The ATF will be on its way shortly.
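    If it helps to see those rules in one place, here is a minimal illustrative sketch in Python that encodes the criteria above as a simple checklist. The function and field names are invented for this example, and the thresholds are taken straight from the paragraphs above; this is trivia, not legal guidance.

    # Minimal sketch: the bourbon criteria described above as a checklist.
    # All names are illustrative; thresholds come from the article text.
    def is_bourbon(mash_corn_pct, distilled_proof, barrel_entry_proof,
                   new_charred_oak_barrel, has_additives, made_in_usa):
        return (made_in_usa                    # bourbon is American whiskey
                and mash_corn_pct >= 51        # mash at least 51 percent corn
                and distilled_proof <= 160     # distilled at 160 proof or less
                and barrel_entry_proof <= 125  # barreled at 125 proof or less
                and new_charred_oak_barrel     # aged in a new charred oak barrel
                and not has_additives)         # no additives allowed

    # Example: a classic corn-heavy mash bill passes every check.
    print(is_bourbon(70, 125, 110, True, False, True))  # True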

    The main difference between scotch and whiskey is geographic, but it also involves ingredients and spellings. Scotch is whisky—no e—made in Scotland, while bourbon is whiskey made in the U.S., generally in Kentucky. Scotch is made mostly from malted barley, while bourbon is distilled from corn. If you’re in England and ask for a whisky, you’ll get scotch. But in Ireland, you’ll get Irish whiskey (yep, they spell it differently).

    Is Tennessee whiskey a bourbon, then? Yes and no. The difference between Tennessee whiskey, like Jack Daniel’s, and bourbon is that after the spirit is distilled, Tennessee whiskey is filtered through sugar maple charcoal. This filtering, known as the Lincoln County Process, is what distinguishes Tennessee whiskey from your average bourbon, like Jim Beam. The name bourbon actually comes from an area known as Old Bourbon, around what is now Bourbon County, Kentucky.

    On top of these types of whiskey, we also have rye whiskey, which can refer either to American rye whiskey, which must be distilled from at least 51 percent rye, or Canadian whisky, which may or may not actually include any rye in its production process.

    David K. Israel

  • Why Can’t Some People See Magic Eye Pictures?


    In the 1990s, you couldn’t escape the visually chaotic art known as Magic Eye pictures, which promised to reveal hidden images … if you could only figure out how to look at them the right way. But for many, no 3D image ever revealed itself, no matter how hard they stared. In July 2022, Eye on Design magazine went so far as to call Magic Eye pictures “the world’s most famous—and infamously frustrating—optical illusion,” noting that “the fact that it was so difficult to see the 3D shape hiding behind the hypercolored patterns was a major part of its appeal.” So what gives? If people can’t see the illusion, is there something wrong with their eyes? Are there really no hidden pictures? Is this all a hoax?

    Most Magic Eye problems have to do with the way the eyes work with each other and the brain. To view 3D stereo images, your peepers have to work together as a coordinated team. If they’re not pulling together, you’re going to have some glitches in your binocular (two-eyed) vision or stereo vision (where the two slightly different views from your eyes are combined in the brain).

    A number of things can cause binocular and stereo vision impairment—most commonly, deviations or misalignments of one or both eyes (“crossed eyes” or “wall eyes”); situations where one eye is dominant because visual stimulation either transmits poorly or not at all from the other; astigmatism; or cataracts. If you think you have an eye problem, go see an eye doctor who can test and treat your stereo vision.

    If your eyes are fine, then your Magic Eye problems could just be a matter of technique. Plenty have offered advice, including crossing your eyes, squinting, and practicing with your index finger and a picture on the wall. WikiHow also offers step-by-step instructions for several Magic Eye-viewing methods.

    Should those strategies fail, the makers of Magic Eye offer these instructions:

    “Hold the center of the printed image right up to your nose. It should be blurry. Focus as though you are looking through the image into the distance. Very slowly move the image away from your face until the two squares above the image turn into three squares. If you see four squares, move the image farther away from your face until you see three squares. If you see one or two squares, start over!

    “When you clearly see three squares, hold the page still, and the hidden image will magically appear. Once you perceive the hidden image and depth, you can look around the entire 3D image. The longer you look, the clearer the illusion becomes. The farther away you hold the page, the deeper it becomes. Good luck!”

    A version of this story ran in 2012; it has been updated for 2023.

    Matt Soniak

  • Why Are Unidentified People Called John or Jane Doe?


    From the courts to the morgue, if the government doesn’t know a person’s name, or wants to withhold it for some reason, they generally use the name John Doe or Jane Doe as a placeholder. But why?

    The John Doe custom was born out of a strange and long-since-vanished British legal process called an action of ejectment. Under old English common law, the actions landowners could take against squatters or defaulting tenants in court were often too technical and difficult to be of any use. So landlords would instead bring an action of ejectment on behalf of a fictitious tenant against another fictitious person who had allegedly evicted or ousted him. In order to figure out what rights to the property the made-up persons had, the courts first had to establish that the landlord really was the owner of the property, which settled the real reason for the action without the landlord having to jump through too many legal hoops.

    Frequently, landlords named the fictitious parties in their actions John Doe (the plaintiff) and Richard Roe (the defendant), though no one has been able to find the case where these names were first used or figure out why they were picked. The names don’t appear to have any particular relevance, and it might be that the first names were chosen because they were among the most common at the time. The surnames, meanwhile, both reference deer—a doe being a female deer and roe being a specific deer species (Capreolus capreolus) common in Britain. They might also have been the actual names of real people that a particular landlord knew and decided to use. Unfortunately, we just don’t know.

    Whatever their ultimate origin, the names eventually became standard placeholders for unidentified, anonymous, or hypothetical parties to a court case. Most U.S. jurisdictions continue to use John Doe and his female counterpart, Jane Doe, as placeholder names, and will bring in Roe if two anonymous or unknown parties are involved in the same case. The Feds use these placeholders, too, perhaps most famously in Roe v. Wade. The Jane Roe in that case was actually Norma Leah McCorvey, who revealed herself soon after the Supreme Court decision.

    Have you got a Big Question you’d like us to answer? If so, let us know by emailing us at bigquestions@mentalfloss.com.

    A version of this story ran in 2012; it has been updated for 2023.

    Matt Soniak

  • Where Did Groundhog Day Come From?


    Groundhog Day is celebrated on February 2 because it’s close to the midpoint of winter, halfway between the winter solstice and the spring equinox. Whether the groundhog (or any hibernating mammal) sees his shadow or not, winter will still last another six-and-a-half weeks. Yet Groundhog Day marks a turning point of the season, and for agricultural communities, midwinter was a time to take stock and determine whether you had enough food and firewood to last until late March.

    Pre-industrial societies commemorated this important time with holidays—which may have evolved into our current rodent-based weather forecasting.

    A Marmot With A Branch Of Plums: Groundhogs are cuter in real life. / Heritage Images/GettyImages

    Ancient pagans marked the solstices and the equinoxes as a way of measuring the cycle of a year. Important dates, considered the real beginnings of the seasons, fell at the midpoint between the solstices and equinoxes. These “cross-quarter days” were Beltaine, Lughnasadh, Samhain, and Imbolc. The old pagan holiday of Imbolc falls on February 1 and marks the beginning of spring. An early Gaelic verse tells of the date’s importance in the cycle of the year:

    The serpent will come from the hole
    On the brown Day of Bride,
    Though there should be three feet of snow
    On the flat surface of the ground.

    Imbolc is also sometimes celebrated as the festival of the Celtic goddess Bríd or Brigid, who is often conflated with the Catholic saint Brigit. Saint Brigit of Ireland’s feast day is February 1. Catholic sources say Brigit was the daughter of a Celtic king and an enslaved woman, and that her innate purity was evident from her early life. In one story, she gave all of her family’s butter to the poor, but it was replenished as she prayed.

    Some Christians today celebrate a feast day called Candlemas on February 2. The holiday occurs 40 days after Christmas, symbolizing the end of the Virgin Mary’s 40-day post-childbirth purification period required by Jewish law and her presentation of the infant Jesus at the Temple in Jerusalem. In Orthodox communities that still use the Julian calendar, Candlemas is celebrated on the Gregorian calendar’s February 14. Worshipers traditionally bring their candles to church for blessing on Candlemas.

    Candlemas is likely linked to midwinter celebrations pre-dating Christianity and associated with weather prognostication. According to an old English folk song:

    If Candlemas be fair and bright,
    Come, Winter, have another flight;
    If Candlemas bring clouds and rain,
    Go, Winter, and come not again.

    A Scottish folk song echoes the sentiment: “If Candlemas Day is bright and clear, there’ll be twa winters in the year.”

    A badger pictured in an early 20th-century German spelling book. / brandstaetter images/GettyImages

    Cultures around the world, and particularly those in wintry regions of Europe, looked to the emergence of hibernating animals as harbingers of spring. Though the snake mentioned in the old Celtic verses was rarely ever seen (especially in Ireland), mammals were believed to be fairly reliable prognosticators.

    In some parts of Eastern Europe, Candlemas is also known as the day of the bear. Good weather on this day will cause bears to stay outside their dens, meaning spring will come soon. In other places that observe the folk ritual, if the weather is nice, the bear will see his shadow and be frightened back into his den for more winter weather.

    In Germany, Candlemas is associated with the weather forecasting prowess of badgers (Dachs in German). “Dachstag, or Badger Day, is a German folk expression for Candlemas,” writes folklorist Don Yoder in his book Groundhog Day. “The belief was […] if the badger encountered sunshine on Candlemas and therefore saw his shadow, he crawled back into his hole to stay for four more weeks, which would be a continuation of winter weather.”

    In France, farmers kept their eyes on the Alpine marmot (a cousin of the North American groundhog) for their take on the arrival of spring, while English villagers waited on the emergence of hedgehogs.

    Staten Island Chuck gets his annual 15 minutes of fame. / Shahar Azran/GettyImages

    Immigrants from Germany known as the Pennsylvania Dutch brought their Dachstag customs to America in the 18th and 19th centuries. There are no badgers endemic to the eastern U.S., but Marmota monax, popularly known as a whistle-pig, woodchuck, or groundhog, hibernated in the winter and filled the requirements of the old tradition. The first reference to Groundhog Day in the U.S. was made in 1841, in the diary of James Morris of Morgantown, Pennsylvania:

    “Last Tuesday, the 2nd, was Candlemas day, the day on which, according to the Germans, the Groundhog peeps out of his winter quarters and if he sees his shadow he pops back for another six weeks nap, but if the day be cloudy he remains out, as the weather is to be moderate.”

    The tradition spread across the U.S., but the most famous groundhog still lives in Pennsylvania. Punxsutawney Phil is trotted out every February 2 to check for his shadow while reporters and photographers look on. Other cities have their own groundhogs, and people in rural areas may look for evidence of spring from anonymous groundhogs.

    Groundhogs, strangely enough, have never been particularly accurate when it comes to predicting the weather. The National Weather Service gives Punxsutawney Phil a 40 percent accuracy rate since 2013. After all, how smart can an animal be if he’s spooked by his own shadow?

    A version of this story was published in 2012; it has been updated for 2024.

    Miss Cellania

  • When Did Americans Lose Their British Accents?


    There are many, many evolving regional British and American accents, so the terms “British accent” and “American accent” are gross oversimplifications. What a lot of Americans think of as the typical “British accent” is what’s called standardized Received Pronunciation (RP), also known as Public School English or BBC English. What most people think of as an “American accent,” or most Americans think of as “no accent,” is the General American (GenAm) accent, sometimes called a “newscaster accent” or “Network English.” We’ll focus on these two general sounds for now and leave the regional accents for another time.

    English colonists established their first permanent settlement in North America at Jamestown, Virginia, in 1607, sounding very much like their countrymen back home. By the time we had recordings of both Americans and Brits some three centuries later (the first audio recording of a human voice was made in 1860), the sounds of English as spoken in the Old World and New World were very different. We’re looking at a silent gap of some 300 years, so we can’t say exactly when Americans first started to sound noticeably different from the British.

    As for the “why,” though, one big factor in the divergence of the accents is rhoticity. The General American accent is rhotic, and speakers pronounce the r in words such as hard. The BBC-type British accent is non-rhotic, and speakers don’t pronounce the r, leaving hard sounding more like hahd. Before and during the American Revolution, English people, both in England and in the colonies, mostly spoke with a rhotic accent. We don’t know much more about said accent, though. Various claims about the accents of Appalachia, the Outer Banks, the Tidewater region, and Smith and Tangier islands in the Chesapeake Bay sounding like an uncorrupted Elizabethan-era English accent have been busted as myths by linguists.

    Around the turn of the 18th to 19th century, not long after the Revolution, non-rhotic speech took off in southern England, especially among the upper and upper-middle classes. It was a signifier of class and status. This posh accent was standardized as Received Pronunciation and taught widely by pronunciation tutors to people who wanted to learn to speak fashionably. Because the Received Pronunciation accent was regionally “neutral” and easy to understand, it spread across England and the empire through the armed forces, the civil service, and, later, the BBC.

    Across the pond, many former colonists also adopted and imitated Received Pronunciation to show off their status. This happened especially in the port cities that still had close trading ties with England—Boston, Richmond, Charleston, and Savannah. From the southeastern coast, the RP sound spread through much of the South along with plantation culture and wealth.

    After industrialization and the Civil War and well into the 20th century, political and economic power largely passed from the port cities and cotton regions to the manufacturing hubs of the Mid-Atlantic and Midwest—New York, Philadelphia, Pittsburgh, Cleveland, Chicago, and Detroit. The British elite had much less cultural and linguistic influence in these places, which were mostly populated by the Scots-Irish and other settlers from Northern Britain, and rhotic English was still spoken there. As industrialists in these cities became the self-made economic and political elites of the industrial era, Received Pronunciation lost its status and fizzled out in the U.S. The prevalent accent in the Rust Belt, though, got dubbed General American and spread across the states just as RP had in Britain.

    Of course, with the speed that language changes, a General American accent is now hard to find in much of this region, with New York, Philadelphia, Pittsburgh, and Chicago developing their own unique accents, and GenAm now considered generally confined to a small section of the Midwest.

    As mentioned above, there are regional exceptions to both these general American and British sounds. Some of the accents of southeastern England, plus the accents of Scotland and Ireland, are rhotic. Some areas of the American Southeast, plus Boston, are non-rhotic.

    A version of this story was published in 2012; it has been updated for 2023.

    Matt Soniak

  • What’s the Difference Between Ketchup and Catsup?


    Ketchup and catsup: You’ve heard both words, and probably even dipped a plate of French fries in a pile of each one. You didn’t notice a difference in taste, so what gives?

    Ketchup and catsup are simply two different spellings for the same thing: a modern, Westernized version of a condiment that European traders were introduced to while visiting the Far East in the late 17th century. What exactly that condiment was, and where they found it, is a matter of a much wider debate.

    It could have been ke-chiap, from China’s southern coastal Fujian region. Or it could have been kicap, a Malay word borrowed from Cantonese, also spelled kecap and ketjap in Indonesia—all names for sauces based on brined or pickled fish or shellfish, herbs, and spices. Whatever it was, the Europeans liked it, and as early as 1690, they brought it back home with them and began calling it catchup.

    The early Western versions of the sauce—which, beginning in 1711, was sometimes called ketchup, another Anglicization of the Malay name popularized in the book An Account of Trade in India—were pretty faithful to the original Eastern ones, with one of the earliest recipes published in England (1727) calling for anchovies, shallots, vinegar, white wine, cloves, ginger, mace, nutmeg, pepper, and lemon peel. It wasn’t until almost a century later that tomatoes found their way into the sauce, in a recipe in an American cookbook published in 1801. In the meantime, another alternative spelling popped up, which was mentioned in a 1730 Jonathan Swift poem: “And, for our home-bred British cheer, Botargo [a fish roe-based relish], catsup, and caveer [caviar].”

    The tomato-based version of ketchup quickly caught on in the U.S. during the first few decades of the 19th century. At first, it was made and locally sold by farmers, but by 1837 at least one company was producing and distributing it on a national scale. The H. J. Heinz Company, a name that’s synonymous with ketchup for most people today, was a relative latecomer to the game and didn’t produce a tomato-based ketchup until 1876. They originally referred to their product as catsup, but switched to ketchup in the 1880s to stand out. Eventually, ketchup became the standard spelling in the industry and among consumers, though you can still find catsup strongholds sprinkled across the U.S.

    Have you got a Big Question you’d like us to answer? If so, let us know by emailing us at bigquestions@mentalfloss.com.

    A version of this story ran in 2012; it has been updated for 2023.

    Matt Soniak

  • Relive MTV’s First Two Hours on the Air in Real Time


    On August 1, 1981, MTV launched at 12:01 a.m. Thanks to YouTuber Max Speedster, you can relive the first two hours of the network’s existence.

    The nostalgic trip back in time is fun both for anyone who remembers MTV’s early days and for anyone who wasn’t old enough to witness them. It’s also historically interesting: the footage preserves the era’s commercials, shows a surprisingly seat-of-the-pants approach to cable TV—complete with lots of dead air and bad audio cuts—and features a surprising mix of videos. Oh, and they won’t shut up about how they’re broadcasting in stereo.

    Stick around until after The Buggles’ video for “Video Killed the Radio Star” for a bit of dead air, then a weird micro-documentary about MTV, followed by a Pat Benatar video. Then we meet some of the original VJs, including Martha Quinn and J.J. Jackson.

    The first MTV commercial starts just over 10 minutes into the hour. It’s for “The Bulk,” a three-ring binder. The second ad is for Superman II, “the most exciting movie event of our time.” The third ad is for Dolby noise reduction. Then you’ll be treated to a vintage Rod Stewart video in which he wears some very unfortunate trousers. (The much-better “You Better You Bet” video should erase your memory of Rod’s pants.)

    While the offer to receive an “MTV Dial Sticker” to put on your TV so you’ll never forget which channel it’s on is probably no longer valid, we hope you’ll enjoy it nonetheless.

    Still looking for more? Read all about the launch of MTV here.

    A version of this story ran in 2011; it has been updated for 2023.

    Chris Higgins

  • 8 Decidedly Unromantic Facts About Mistletoe


    Before someone asks to kiss you under the mistletoe this holiday season, arm yourself with these facts about the legendary plant—some of which aren’t so romantic.

    The plant grows on trees, sucking up water and minerals from its host through a sinister-sounding bump called a haustorium. It might make you feel better to know that mistletoe is only partially parasitic: The plant is capable of photosynthesis, unlike true parasites that take all of their nutrients from their hosts.

    Candle companies love to peddle holiday scents labeled “mistletoe.” You can even buy mistletoe-scented air fresheners for your car. But the actual plant, says mistletoe expert Jonathan Briggs, has no scent at all. Briggs, who hails from Gloucestershire, England, debunks all manner of mistletoe misinformation on his Mistletoe Diary blog.

    Mistletoe is sold at markets during the Christmas season. / SOPA Images/GettyImages

    Over many centuries, mistletoe has been used to treat a battery of ailments, from leprosy, worms, and labor pains to high blood pressure. In Europe, injections of mistletoe extract are often prescribed as a complementary treatment to reduce the side effects of chemotherapy in cancer patients.

    Mistletoe typically grows in the highest branches of tall trees. Harvesting the bunches with a shotgun, to be sold as decorations during the holiday season, is a time-honored winter activity in the southern U.S. Let’s hope no one’s kissing under it at the time.

    In the medieval era, mistletoe wasn’t just a Christmas decoration, but one perhaps better suited to Halloween. Hung over doors to homes and stables, it was thought to prevent witches and ghosts from entering.

    Bunches of mistletoe grow on a host tree in the UK. / Tim Graham/GettyImages

    The name mistletoe may be a nod to the seeds’ ability to stick to tree branches when pooped out by birds. The unusual word might come from the Old English missel, which could refer to the mistle thrush, a bird thought to spread the seeds through its droppings, plus the Proto-Germanic word for “twig.” The viscous middle layer of the mistletoe fruit is so sticky that the seeds get glued wherever they land post-digestion, which starts a new mistletoe plant.

    The Roman historian Pliny the Elder told how druids viewed mistletoe as sacred, recounting a ceremony where they gathered it with a golden sickle, then sacrificed two white bulls. The ceremony still takes place each year, minus the bull-slaying, at the Tenbury Mistletoe Festival in England.

    Balder, the favorite son of chief god Odin and the goddess Frigg, was felled by an arrow made of mistletoe, the only substance that could harm him. Oddly, this may have been the origin of the kissing tradition. Some retellings of the story say that Frigg revived Balder and was so happy, she commanded anyone who stood under the mistletoe to kiss as a reminder of how love conquered death.

    A version of this story was published in 2010; it has been updated for 2022.

    Alisson Clark

  • 10 Fascinating Facts About Cleopatra


    Cleopatra, born in 69 BCE as a descendant of the powerful Ptolemaic dynasty, was pharaoh of Egypt when it was under Greek rule. Known in life for her diplomacy (which involved affairs with Julius Caesar and Mark Antony), Cleopatra has achieved legendary status more than two millennia after her death—which, contrary to rumor, may not have been caused by the bite of an asp. Here are a few facts that add to her mystique.

    She was Macedonian Greek. Her father, Ptolemy XII, was a direct descendant of Ptolemy I Soter, Alexander the Great’s famed general. She was, however, the first person in her family dynasty to speak fluent Egyptian.

    When their father died, Cleopatra and her younger brother Ptolemy XIII became co-rulers of Egypt. The joint reign did not sit well with either one of them and they immediately began battling for supremacy. Julius Caesar and Roman forces sided with Cleopatra, while Ptolemy XIII and his followers raised an army against them. Cleopatra’s younger half-sister Arsinoë IV marshaled Egyptian forces to turn back the Roman advance. Eventually, Caesar and Cleopatra won the war, Ptolemy XIII was killed, and Arsinoë was exiled to Ephesus in present-day Turkey.

    She instructed Mark Antony to have Arsinoë murdered so she couldn’t threaten Cleo’s status and power. That’s pretty cold.

    Romantic descriptions of Cleopatra picture her as an irresistible goddess, but most contemporary accounts say that she had a longish nose and rather masculine features. Her charm, wit, and intelligence more than compensated for her lack of classical features, though. Hollywood films probably went further toward establishing the pharaoh as a jaw-dropping beauty than anything else.

    The story of Cleopatra’s famously extravagant dinner—which is probably just a story—goes like this: Cleopatra playfully bet Antony that she could spend an astronomical amount on a single meal. (For reference, half a sesterce, a type of Roman coin, could buy a loaf of bread.) He couldn’t fathom any food that would cost so much and agreed to the bet. The joke was on him when the second course, a cup of vinegar, was served. Cleo removed one of her pearl earrings, dropped it into the vinegar, where it dissolved, and then drank the whole concoction. (It’s probably a myth because vinegar is usually not strong enough to dissolve whole pearls.)

    She makes Elon Musk look average. Cleopatra had so much money, riches, and assets that when Rome conquered Egypt in 30 BCE, her fortune was enough for Rome to be able to decrease its interest rate from 12 to 4 percent.

    Many modern-day depictions of the pharaoh show a glamorous woman with smooth, straight black hair and bangs. A more historically accurate description of her ‘do would be a wig of long, tight curls and no bangs. The be-banged look can be traced to the 1934 film Cleopatra, starring Claudette Colbert, who had a signature hairstyle that included bangs. Her appearance may have influenced Elizabeth Taylor’s portrayal of the queen in the 1963 epic Cleopatra.

    The budget for her 65 costumes in Cleopatra was nearly $200,000, an unheard-of amount for the time. One dress was even made from 24-carat gold cloth.

    Strabo, a Greek historian who was alive when Cleopatra died by suicide, suggests that her cause of death may have been a toxic ointment, not an asp bite. Other accounts written within 10 years of her death say a pair of asps bit her. But one aspect is almost certain: She didn’t do it because she was heartbroken over the death of Antony, a trope played out in dozens of modern movies and novels. Her reasoning was likely related to Egypt’s fall to Rome and her impending march through Rome as a conquered opponent. One more myth to debunk: It’s doubtful that the asp(s) bit her on her breast. Prior to Shakespeare romanticizing the event, all accounts reported that she was bitten on the arm.

    According to the Greek writer Plutarch, Cleopatra was buried with Mark Antony somewhere in Egypt. There’s also some evidence to suggest she had a tomb built for herself in Alexandria, and that it now sits submerged in the Mediterranean. Archaeologists excavating the Taposiris Magna temple near Alexandria have found evidence that could suggest Cleopatra (and perhaps Antony, too) were buried at the site, but more research is needed.

    A version of this story ran in 2010; it has been updated for 2022.

    Stacy Conradt

  • Kurt Vonnegut’s Letter to His Family About His Imprisonment in Slaughterhouse Five


    Kurt Vonnegut’s most famous novel is Slaughterhouse-Five, which is taught in many high school English classes in the U.S. (though in others it has been banned—so it goes). Slaughterhouse-Five is partially autobiographical; it’s based partly on Vonnegut’s experiences as a prisoner of war in World War II, when he and other POWs were imprisoned in an underground slaughterhouse meat locker in Dresden, Germany, in 1945. By day, they worked in labor camps; at night, they slept in the slaughterhouse. During his imprisonment in the slaughterhouse (which was indeed slaughterhouse number five), the Allies fire-bombed Dresden, largely destroying it and inflicting mass casualties (estimated at 250,000 by Vonnegut). But Vonnegut survived.

    Twenty-five years later, Vonnegut published the novel Slaughterhouse-Five, and the rest is history. But what was his frame of mind during the imprisonment? What happened before he ended up in the slaughterhouse? How did he get out of it? Letters of Note published a letter Vonnegut wrote to his family from a repatriation camp in France, shortly after his POW experience. Below are some excerpts; you can read the rest here.

    “Well, the supermen marched us, without food, water or sleep to Limberg, a distance of about sixty miles, I think, where we were loaded and locked up, sixty men to each small, unventilated, unheated box car. There were no sanitary accommodations—the floors were covered with fresh cow dung. There wasn’t room for all of us to lie down. Half slept while the other half stood.”

    “Under the Geneva Convention, Officers and Non-commissioned Officers are not obliged to work when taken prisoner. I am, as you know, a Private. One-hundred-and-fifty such minor beings were shipped to a Dresden work camp on January 10th. I was their leader by virtue of the little German I spoke. It was our misfortune to have sadistic and fanatical guards. We were refused medical attention and clothing: We were given long hours at extremely hard labor. Our food ration was two-hundred-and-fifty grams of black bread and one pint of unseasoned potato soup each day. After desperately trying to improve our situation for two months and having been met with bland smiles I told the guards just what I was going to do to them when the Russians came. They beat me up a little. I was fired as group leader. Beatings were very small time: —one boy starved to death and the SS Troops shot two for stealing food.”

    “On about February 14th the Americans came over, followed by the R.A.F. their combined labors killed 250,000 people in twenty-four hours and destroyed all of Dresden—possibly the world’s most beautiful city. But not me.”

    “I’ve too damned much to say, the rest will have to wait, I can’t receive mail here so don’t write. May 29, 1945 Love, Kurt – Jr.”

    The letter is a riveting first-person account of being a POW in WWII, and the wry voice of Vonnegut the novelist was already apparent in his letter. In the same way he repeats “so it goes” in Slaughterhouse-Five, he repeats “but not me” in this letter.

    A version of this story ran in 2010; it has been updated for 2022.

    Chris Higgins

  • 4 Utopian Communities That Didn’t Pan Out


    Every once in a while, a proud little community will sprout up just to let the world know how Utopia should be run. With chins raised almost as high as ideals, the community marches forth to be an example of perfection. But in most cases, all that harmonious marching gets tripped up pretty quickly. Here are four “perfect” communities that whizzed and sputtered thanks to human nature.

    Perhaps the best-known utopian community in America, Brook Farm was founded in 1841 in West Roxbury, Massachusetts, by George and Sophia Ripley. The commune was built on a 200-acre farm with four buildings and centered on the ideals of radical social reform and self-reliance. For free tuition in the community school and one year’s worth of room and board, the residents were asked to complete 300 days of labor by either farming, working in the manufacturing shops, performing domestic chores or grounds maintenance, or planning the community’s recreation projects. The community prospered in 1842-1843 and was visited by numerous dignitaries and utopian writers.

    However, Ripley joined the unpopular Fourierism movement, which meant that soon the young people (out of a “sense of honor”) had to do all the dirty work like repairing roads, cleaning stables, and slaughtering the animals. This caused many residents, especially the younger ones, to leave. Things went downhill from there. The community was hit by an outbreak of smallpox followed by fire and finally collapsed in 1847.

    After visiting Brook Farm and finding it almost too worldly by their standards, Bronson Alcott (the father of Louisa May) and Charles Lane founded the Fruitlands Commune in June 1843, in Harvard, Massachusetts.

    Structured around the British reformist model, the commune’s members were against the ownership of property, were political anarchists, believed in free love, and were vegetarians. The group of 11 adults and a small number of children were forbidden to eat meat or use any animal products such as honey, wool, beeswax, or manure. They were also not allowed to use animals for labor and only planted produce that grew up out of the soil so as not to disturb worms and other organisms living in the soil.

    Many in the group of residents saw manual labor as spiritually inhibiting and soon it became evident that the commune could not provide enough food to sustain its members. The strict diet of grains and fruits left many in the group malnourished and sick. Given this situation, many of the members left and the community collapsed in January 1844.

    Officially known as the United Society of Believers in Christ’s Second Appearing, the Shakers were founded in Manchester, England, in 1747. As a group of dissenting Quakers under the charismatic leadership of Mother Ann Lee, the Shakers came to America in 1774.

    Like most reformist movements of the time, the Shakers were agriculturally based and believed in common ownership of all property and the confession of sins. Unlike most of the other groups, the Shakers practiced celibacy, abstaining from sex and thus from procreation. Membership came via converts or by adopting children. Shaker families consisted of “brothers” and “sisters” who lived in gender-segregated communal homes of up to 100 individuals. During the required Sunday community meetings it was not uncommon for members to break into a spontaneous dance, thus giving them the Shaker label.

    As pacifists they were exempted from military service and became the United States’ first conscientious objectors during the Civil War. Currently, however, there isn’t a whole lot of Shaking going on. As the younger members left the community, converts quit coming, and the older ones died off, many of the communities were forced to close. Of the original 19 communities, most had closed by the early 1900s.

    Located 15 miles south of Chicago, the town of Pullman was founded in the 1880s by George Pullman (of luxury railway car fame) as a utopian community based on the notion that capitalism was the best way to meet all material and spiritual needs. According to Pullman’s creed, the community was built to provide Pullman’s employees with a place where they could exercise proper moral values and where each resident had to adhere to the strict tenets of capitalism under the direction and leadership of Pullman.

    The community was run on a for-profit basis: The town had to return a profit of 7 percent annually. This was done by giving the employees two paychecks, one for rent, which was automatically turned back in to Pullman, and one for everything else. Interestingly, the utopian community had very rigid social class barriers, with the management and skilled workers living in stately homes and the unskilled laborers living in tenements. The experiment lasted longer than many of the other settlements, but ultimately failed: Pullman began demanding more and more rent to offset company losses, while union sentiment grew among the employee residents, tensions that culminated in the famous Pullman Strike of 1894.

    This article originally appeared in the Mental Floss book Forbidden Knowledge.

    Floss books

  • 4 Famous Cases of Plagiarism


    Norway’s minister for research and higher education resigned in January 2024 after a student discovered that parts of her master’s thesis had been taken from another author’s work without attribution—and she’s far from the only public figure who has faced accusations of plagiarism. Let’s revisit a few famous cases of word borrowing.

    Martin Luther King Jr.

    MLK at the March on Washington—where parts of his speech were inspired by another. / CNP/GettyImages

    In 1955, Martin Luther King, Jr. received a doctorate in systematic theology from Boston University on the strength of his dissertation comparing the theologians Paul Tillich and Henry Nelson Wieman. In a review long after King’s assassination, though, the university discovered that King had plagiarized about a third of his thesis from another student’s dissertation.

    King’s iconic “I Have a Dream” speech, delivered at the 1963 March on Washington for Jobs and Freedom, also echoed the work of a colleague. A leading Chicago minister and lawmaker named Archibald Carey, Jr. had given a speech at the 1952 Republican National Convention that ended on an inspiring note:

    “From every mountain side, let freedom ring. Not only from the Green Mountains and the White Mountains of Vermont and New Hampshire; not only from the Catskills of New York; but from the Ozarks in Arkansas, from the Stone Mountain in Georgia, from the Great Smokies of Tennessee and from the Blue Ridge Mountains of Virginia.”

    King’s rousing finale in Washington—which was partly improvised on the spot—was noticeably similar, leading some to believe that he was inspired by Carey’s speech:

    “And so let freedom ring from the prodigious hilltops of New Hampshire. Let freedom ring from the mighty mountains of New York. Let freedom ring from the heightening Alleghenies of Pennsylvania. Let freedom ring from the snowcapped Rockies of Colorado. Let freedom ring from the curvaceous slopes of California. But not only that, let freedom ring from Stone Mountain of Georgia. Let freedom ring from Lookout Mountain of Tennessee. Let freedom ring from every hill and molehill of Mississippi. From every mountainside, let freedom ring.

    “And when this happens, and when we allow freedom ring, when we let it ring from every village and every hamlet, from every state and every city, we will be able to speed up that day when all of God’s children, Black men and white men, Jews and Gentiles, Protestants and Catholics, will be able to join hands and sing in the words of the old Negro spiritual: Free at last. Free at last. Thank God almighty, we are free at last.”

    John Milton

    John Milton: Not a plagiarist (despite William Lauder’s efforts) / Heritage Images/GettyImages

    Was the poet behind Paradise Lost a plagiarist? Well, no, but William Lauder, a Scottish scholar and noted forger, sure wanted you to think so. In 1747, embittered by his professional failures, Lauder published several essays in The Gentleman’s Magazine claiming to prove that Milton had stolen almost all of his 1667 epic poem from other authors. Lauder accused Milton—who was by then deceased—of lifting text from now-obscure works like Hugo Grotius’s Adamus Exul (1601) and Andrew Ramsay’s Poemata Sacra (1633).

    There was just one problem: Lauder had forged the “evidence” by inserting lines from Paradise Lost into the other authors’ works. For a while, many scholars (including the great Samuel Johnson) supported Lauder. But skeptics studied extant copies of the older poems and it soon became obvious that Lauder, not Milton, was the cheat. And cheating, at least in this case, didn’t pay. Lauder fled to Barbados and died in obscurity.

    Alex Haley

    Alex Haley admitted inadvertently lifting material from another writer. / Mickey Adair/GettyImages

    Journalist Alex Haley initially gained prominence for being the “as told to” co-author behind The Autobiography of Malcolm X, published less than a year after the civil rights leader’s assassination in 1965. Haley then went on to publish the epic Roots: The Saga of an American Family in 1976, supposedly a true story in which he traced his own ancestry back to an African man, Kunta Kinte, who was enslaved and forcibly brought to the U.S. in the 18th century. Haley won a Pulitzer Prize the next year, and the book was made into a wildly popular miniseries.

    After the book’s publication, however, several historians and authors challenged the truthfulness of the story. In one case, an author named Harold Courlander sued Haley for plagiarizing his 1967 novel, The African. Haley eventually admitted that three paragraphs in the earlier novel had found their way into Roots.

    Courlander’s lawyer mentioned an example in court. In The African, enslaved people called to each other in the fields by saying: “well, yooo‐hooo‐ahhooo, don’t you hear me calling you?”

    In Roots, the lawyer alleged, the phrase appears almost exactly: “the field hands heard a rising, lingering singsong. Yooo‐hooo‐ah‐hooo, don’t you hear me calling you?”

    Haley and Courlander settled the dispute out of court.

    Stendhal

    Stendhal: Guilty as charged. / Heritage Images/GettyImages

    During his life, French writer Stendhal (whose real name was Marie-Henri Beyle) was most famous not for his novels, but for his books about art and travel. Yet, in his published debut, The Lives of Haydn, Mozart, and Metastasio (1814), he plagiarized extensively from at least one previous biography. In a review of a reissued edition in the journal Modern Language Review, a critic described Stendhal’s literary lift:

    “[Stendhal] made up his mind to write … a life of Haydn, about whose music and life he himself knew virtually nothing. This perilous, even ludicrous problem he solved by downright plagiarism … in a tearing hurry he concocted (or rather brazenly translated) his amazing work, borrowing practically all of it (without a single word of acknowledgment) from a well-known if not remarkably discerning Italian biography of Haydn by Giuseppe Carpani, then a relatively prominent musicologist.”

    When Stendhal was confronted with overwhelming evidence of the theft, he took it even further by manufacturing evidence to exonerate himself, the critic continued:

    “The author had no qualms at all; he proceeded to invent a facetious brother with a similarly provocative pseudonym, merely to cock snooks at poor old Carpani, beside himself with righteous anger … [Stendhal] was uncommonly lucky to live in a very easy-going century; otherwise, he might speedily have found himself in some court of bankruptcy.”

    At the very least, he could have added forgery to his list of literary crimes.

    This article was excerpted from the Mental Floss book Forbidden Knowledge. A version of this story was published in 2012; it has been updated for 2024.

    Floss books

  • Why Can’t I Use My Cell Phone on an Airplane?


    The Federal Aviation Administration (FAA) bars the use of all transmitting devices on the off chance that transmissions could interfere with a plane’s navigation and communications equipment and cause system malfunctions [PDF]. It’s true that these concerns are a bit overblown, but the FAA likes to err on the side of caution. (Can you blame them?)

    Initially, the reason authorities didn’t want you calling your mom or dialing into a work meeting had less to do with crashing your plane and more to do with crashing the cell phone network. The Federal Communications Commission (FCC) had determined that mid-flight calls have a direct impact on cell phone service on the ground. That’s because cell phones are primarily designed for callers who are firmly planted on land, communicating with a single, nearby tower.

    If you’re speeding through the sky at 550 mph, your phone will connect with multiple towers and eat up valuable space on their circuits, wreaking havoc on service. A 2007 plan to lift the ban was strongly opposed by cell carriers for this reason.

    In 2013, passengers gained the ability to use their smartphones and other electronic devices as long as they remain on airplane mode, which prevents them from connecting to the cellular network. People could connect to in-flight Wi-Fi and theoretically make voice or video calls that way, but even that remains prohibited for reasons involving more than just safety.

    The FCC scuttled a plan in 2017 to consider letting passengers make calls once the plane gained at least 10,000 feet of altitude after it faced strong opposition. Pilots, flight attendants, and various members of the general public were against it—partially because many don’t want to endure an hours-long flight filled with competing phone calls. As Ajit Pai, then the FCC chairman, put it, “taking it off the table permanently will be a victory for Americans who, like me, value a moment of quiet at 30,000 feet.”

    A version of this story originally ran in 2009; it has been updated for 2023.

    Ethan Trex

  • Why Are St. Bernards Always Depicted With Barrels Around Their Necks?


    The props you’d typically expect to see in a portrait of a dog include a collar and perhaps a toy or two. But if you’re looking at pictures of St. Bernards, don’t be surprised if you see images of them in the mountains with a barrel strapped around their neck. The big dogs have long been used in alpine rescue missions, so a background of snow-capped peaks makes sense. But that barrel (which is said to be filled with brandy) isn’t historically accurate.

    St. Bernards on an alpine rescue. / George Pickow/GettyImages

    High in the Alps near the border between Italy and Switzerland is the Great St. Bernard Pass, used by humans to cross the mountain range since the Bronze Age. The Romans erected a temple to Jupiter there as they headed north to conquer somebody or other. In 1049, Bernard of Menthon (canonized St. Bernard in 1681 and confirmed as patron saint of the Alps in 1923) built a hospice on top of the temple ruins as a shelter for travelers.

    A group of monks maintained the hospice, took care of guests, acted as guides through the pass, and served as search and rescue teams for travelers who had gotten lost or injured. At some point, the monks began to train their dogs, who were brought from the villages in the valleys below to work as watchdogs and companions and as rescue animals. The dogs, with their strength, weather-resistant coats, and superior sense of smell, were well-equipped to guide and rescue travelers.

    It’s not clear when dogs were first brought to the hospice or when they were trained for rescue purposes—the hospice was destroyed by a fire in the late 16th century and its archives were lost. Based on info from outside sources, historians estimate that dogs first arrived at the monastery between the 1550s and 1660. The oldest surviving written reference to the dogs, which is the monastery prior’s account of the cook harnessing a dog to an exercise wheel of his own invention to turn a cooking spit, is from 1707.

    The Saint Bernard we know today is the result of centuries of breeding at the hospice and the surrounding areas. The family tree likely starts with the mastiff-type dogs—brought to Switzerland by the Roman armies—that bred with the native dogs of the region. By 1800, the monks had their own kennel and breeding program, a melting pot that combined Great Pyrenees, Great Danes, bulldogs, Newfoundlands, and others. The dogs of the hospice were well known in the region and were variously referred to as Barryhunds (in tribute to Barry, a dog that reportedly saved 40 lives), sacred dogs, Alpine mastiffs, Alpendogs, and hospice dogs until 1880, when the name “St. Bernard” was officially designated.

    'Alpine Mastiffs Reanimating a Distressed Traveler' by Edwin Henry Landseer.

    Those alpine dogs definitely kept busy. But if you happened to find one while lost in the Alps, you probably wouldn’t see a barrel strung around its neck.

    The barrels we see around the dogs’ necks in paintings and cartoons are the invention of an artist named Edwin Henry Landseer. In 1820, Landseer, a 17-year-old painter from England, produced a work titled Alpine Mastiffs Reanimating a Distressed Traveler. The painting portrays two Saint Bernards standing over a fallen traveler, one dog barking in alarm, the other attempting to revive the traveler by licking his hand. The dog doing the licking has a barrel strapped around its neck, which Landseer claimed contained brandy.

    Despite the fact that brandy wouldn’t be something you’d want if you were trapped in a blizzard—alcohol causes blood vessels to dilate, resulting in blood rushing to your skin and your body temperature decreasing rapidly—and that the dogs never carried such barrels, the collar keg stuck in the public’s imagination and the image has endured.

    A version of this story originally ran in 2009; it has been updated for 2023.

    Matt Soniak

  • 11 Extinct, Dead, and Dormant U.S. Languages


    Word to the wise: Not all languages stick around forever. Communication systems from a few cultures in the U.S. (often Native American tribes) have already hit the dead or extinct list, and many more are on their way out. In fact, according to National Geographic, “one language dies every 14 days.”

    But a dead language isn’t necessarily what you think: Per the language website Babbel, a language that is dead is “no longer the native language of a community of people”; an extinct language, on the other hand, is a language that is no longer spoken at all. Another classification, according to Ethnologue, is dormant, for languages that, while “not used for daily life … [have] an ethnic community that associates itself with a dormant language and views the language as a symbol of that community’s identity.”

    Here are 11 tongues, some extinct, some dead or dormant, and some that are finding new life.

    In January 2008, Alaska resident Marie Smith Jones, who was believed to be the last full-blooded Eyak and the only remaining person known to be fluent in the Eyak language, died at age 89. Jones tried to help preserve Eyak by helping with a dictionary and compiling the language’s grammar rules; she also gave two speeches at the United Nations about the importance of preserving Indigenous languages. Unfortunately, the language didn’t carry on among a large group of people—not even her nine children learned Eyak, because when they were young, it was considered improper to speak anything but English.

    Today, there’s no one who learned Eyak as a first language, but there’s work to change that. An online project called the dAXunhyuuga’ eLearning Place (“The Words of the People”) seeks to “help Eyak descendants, wherever they live, find ways to use the Eyak language and culture in a way that has meaning for them,” according to their website. And in 2016, the Cordova Times reported that 100 people were using it, including 40 Eyak Alaska Natives.

    The Yana language consisted of several dialects spoken by the Yana people of north-central California, whose numbers were devastated by illness and massacres brought on by the influx of treasure-seeking settlers during the Gold Rush. Famously, one dialect—called Yahi—was spoken by a man named Ishi (which means “man”), and he was instrumental in helping linguist-anthropologist Edward Sapir [PDF] preserve some of the language. When Ishi succumbed to tuberculosis in 1916, that was the end of Yahi; the last Yana speakers in general died around 1940. Ishi’s story would later be told in several books and movies.

    There is a sad, but all too common, side note to this tale: Following Ishi’s death, his remains were cremated and buried. But his brain, which had been removed during his autopsy, was sent to the Smithsonian Institution in 1917. It would remain there until 2000, when—following passage of legislation like the National Museum of the American Indian Act of 1989 and the Native American Grave Protection and Repatriation Act of 1990—both Ishi’s brain and his ashes were repatriated to tribes determined to be his closest living relatives. To this day, the remains of some 116,000 Native Americans can be found in museums and institutions around the United States.

    The Tunica language could be found in Louisiana until the 1940s. A man named Sesostrie Youchigant of the Tunica tribe was considered the last native speaker of Tunica, but even he didn’t have a full grasp of the language—after his mother died in 1915, he typically spoke French and English. Youchigant worked with linguist Mary Haas—a student of Edward Sapir—to try to write down everything he remembered. (“I often had the feeling,” she would later write, “that the Tunica grooves in Youchigant’s memory might be compared to the grooves in a phonograph record; for he could repeat what he had heard but was unable to make up new expressions of his own accord.”) Haas even made recordings of him speaking the language, but the quality is so poor that little can be understood.

    Bits and pieces of Tunica would survive as phrases, and according to linguist Patricia Anderson, “[i]n the 1990s, Donna Pierite and her family were designated Tunica storytellers and legend keepers and, as such, began performing Tunica stories as dictated to Haas.” Then, in 2010, Brenda Lintinger, a councilwoman in the Tunica-Biloxi tribe, contacted Tulane University for help understanding Haas’s documentation of the language—which was written for an audience of linguists, not the layperson—so that new Tunica could be created. This ultimately led to the Tunica Language Project, as well as the writing of Tunica-language children’s books, prayers, and even language camps. The effort has had success: The language website Ethnologue now classifies Tunica as “Reawakening” with 32 speakers as a second language as of 2017.

    The Tillamook language, spoken by an Oregon-based tribe of the same name, is part of the Salishan languages family, which was originally made up of 23 languages. Though the last fluent speakers collaborated with scholars to record the language from 1965 to 1970, it didn’t survive: According to some, the last known speaker of the language was Minnie Scovell, who died in 1972.

    Susquehannock has been gone for a long time. It was part of the Iroquoian language family, but almost everything we know about it comes from a short vocabulary guide collected by Swedish missionary Johannes Campanius in the 1640s. Even then, the vocabulary guide consisted of only about 100 words. In 1608, explorer John Smith encountered and described the Susquehannock people, calling them gigantic and writing that their language “may well beseeme their proportions, sounding from them, as a voyce in a vault.”

    From the early 18th century to the mid 20th century, the population of Deaf people in the isolated town of Chilmark on Martha’s Vineyard was so large that The Atlantic estimates it included “one in every 25 people” in the town. The population of residents who communicated with Martha’s Vineyard Sign Language was even larger, consisting of virtually everyone in both the Deaf and hearing communities. For many reasons—most of them related to Martha’s Vineyard’s relative isolation ending in the mid-19th century—the language started to decline. The last Deaf person fluent in Martha’s Vineyard Sign Language died in the 1950s; without any formal records of the regional language, it didn’t get passed down to younger generations.

    After settlers from the Netherlands landed in the Americas in the 1600s, variants of Dutch began to crop up across the Northeast. As William Z. Shetter wrote in a 1958 issue of American Speech, the North American versions of Dutch “survived a remarkably long time, but by the end of the 19th century it was in active use only around Albany, New York, and in the northernmost part of New Jersey.” Dubbed Albany Dutch and Jersey Dutch, respectively, these dialects were going the way of the dodo by the early 1900s, Shetter wrote, “and began attracting attention as collectors’ items.” Around 700 words of Jersey Dutch were preserved in 1910 thanks to linguist and politician J. Dyneley Prince.

    Penobscot, a dialect of the Eastern Abenaki language, was used by the Penobscot tribe in Maine until its last fluent native speaker died in the early 1990s. The language was preserved beginning in the 1930s by Frank Siebert, who, in 1982, hired Carol Dana, a member of the tribe, as his research assistant. They worked together to get down as much of the language as possible, and Dana learned about the language through the materials Siebert had gathered. Though she’s not fluent, Dana knows more about the language than anyone else and is teaching it to the next generation.

    All we have left of the Eastern Atakapa language is 287 words written down in 1802 by a man named Martin Duralde. The people who spoke the language lived near modern-day Franklin, Louisiana. The degree of its separation from the better-attested Western Atakapa is debated; some think the two are distinct enough to count as separate languages, while others consider them dialects of the same one. Either way, no variety of Atakapa survived past the early 20th century.

    The Siuslaw language of the Oregon Pacific coast disappeared in the 1970s, but it’s been preserved quite well for anyone who wants to try to pick it up again. There are dictionaries, plus audio recordings, several hours of fieldwork, and a few books. Despite all of this preservation, no one is known to speak it fluently today.

    A version of this story ran in 2009; it has been updated for 2022.

    [ad_2]

    Stacy Conradt

    Source link

  • The Origins of 6 Ancient Herbs and Spices

    The Origins of 6 Ancient Herbs and Spices

    [ad_1]

    The world’s lust for spices has shaped thousands of years of history. Let’s take a look at the origins of some of our favorite herbs and spices, from the ubiquitous salt to the more obscure horseradish, in this list adapted from the Mental Floss book In the Beginning: The Origins of Everything.

    Dried black peppercorns. / Abhishek Mehta/Moment/Getty Images

    If you eat enough pepper you’ll start to sweat, which explains why ancient peoples thought it made an excellent medical treatment. Chinese physicians employed it as a treatment for malaria, cholera, and dysentery, while Indian monks used it as a sort of PowerBar: they swallowed small amounts of pepper in hopes that it would help them survive their long treks through the rough countryside. Later, pepper became so valuable that it served as a de facto form of currency; it was used for centuries in Europe to pay rent and taxes.

    In one exceptional case, it was also used for ransom. Alaric, king of the Visigoths, is said to have demanded about 3000 pounds of pepper when he besieged Rome in 408 CE; in exchange, he promised to lift the siege.

    Salt being harvested from Salar de Uyuni in Bolivia. / Jami Tarris/Stone/Getty Images

    It’s probably been the most valuable food additive in all of history, mostly because it did such a good job of preserving foods in the centuries before the refrigerator was invented. Salt mines in Chehr Abad, Iran, also testify to salt’s ability to preserve people. Four “salt men” have been discovered there, eerily mummified by what they were digging for, and two of them may date as far back as 650 BCE.

    But the use of salt far predates the Iranian salt men. In China, 4700-year-old writings testify to its value; the Peng-Tzao-Kan-Mu, the earliest known treatise on pharmacology, mentions more than 40 kinds of salt. And a tragic piece of Chinese folklore tells a story of how the mythical phoenix first brought salt to the attention of a lowly peasant, who was accidentally put to death by a temperamental emperor before anyone realized the value of what he had found.

    A stack of rolled-up cinnamon sticks. / Jacob Maentz/Corbis Documentary/Getty Images

    Although it’s originally from Sri Lanka, cinnamon has been a global sensation for millennia. It first appears in Chinese writings from 2800 BCE (they called it kwai). Cinnamon was also used by ancient Egyptian embalmers, perhaps for the same reason that it became a popular cooking spice—its warm aroma and antibacterial properties could hide the stench of meat starting to go bad.

    Romans had attachments to cinnamon that were both medical and sentimental. Pliny the Elder records cinnamon as being worth about 15 times its weight in silver. And the Roman Emperor Nero, known for both his evil tendencies and his extravagance, burned a year’s supply of cinnamon at his wife’s funeral as a show of penance after reportedly killing her himself.

    Fresh nutmegs covered in mace. / Bob Krist/The Image Bank/Getty Images

    Like cinnamon, this one’s been a popular spice since the days of, yep, Pliny the Elder. He wrote about a curious plant that bears two spices: nutmeg (its seed) and mace (the reddish covering around the seed). Nutmeg’s distinctive scent has made it consistently popular through the ages; the Holy Roman Emperor Henry VI reportedly had workers blanket Roman streets with the aroma in celebration of his crowning.

    The majority of the world’s nutmeg now comes from the Caribbean island of Grenada—in fact, the local economy is based almost entirely on tourism and nutmeg exports, and the spice is the centerpiece of the country’s flag. But nutmeg didn’t even exist in Grenada until British sailors brought it there from Indonesia in the early 1800s.

    Fresh ginger on a green mat. / Martin Harvey/The Image Bank/Getty Images

    Marco Polo didn’t bring back pasta from his trip to China, but he did reintroduce ginger to Europe. Hugely popular in the Roman Empire, ginger suffered roughly the same fate as that empire: By Polo’s day, it was barely remembered by European cooks. After Polo and company imported it as a rare luxury, it stayed that way for centuries; Queen Elizabeth I was a noted enthusiast. Some historians think she may have popularized the gingerbread man.

    Whole horseradish next to the grated version. / Westend61/Getty Images

    Anything that tastes as strong as horseradish must have a history of use in medicine. In the 3500 years humans have been eating horseradish, they’ve used it to treat everything from rheumatism and tuberculosis to lower back pain and low libido. Hippocrates wrote about it (along with the 400 other spicy medicines he recommended). The Oracle at Delphi was a big fan, too; it supposedly told Apollo that “the radish is worth its weight in lead, the beet its weight in silver, and the horseradish its weight in gold.”

    Horseradish had a bit of a renaissance during, well, the Renaissance. As a food fad, it spread all over Europe and Scandinavia, and by the late 1600s, it was a British staple, eaten alongside beef and oysters and made into pungent cordials.

    You may be wondering why it’s called horseradish. The answer has very little to do with horses. Germans called the root meerrettich, or “sea radish,” since the herb grows wild in coastal areas. English speakers may have picked up the word as mare-radish, which then became the genderless horseradish. American settlers had an even more evocative name for it: stingnose.

    A version of this story was published in 2009; it has been updated for 2024.

    [ad_2]

    mentalfloss.com

    Source link

  • 12 Star-Powered College Roommates

    12 Star-Powered College Roommates

    [ad_1]

    Plenty of college students fret about their roommates, complaining about how they smell bad or steal Pop Tarts. But be careful—that person on the lower bunk may turn out to be powerful someday. Or maybe you both will. Here’s a look at 12 pairs of famous roommates.

    Actor Tommy Lee Jones and former vice president Al Gore (and eventually actor John Lithgow) shared a room at Harvard and, like plenty of college roomies, chased skirts together, even joining a country music band to get girls. The unlikely duo also served as the inspiration for the character of Oliver in Love Story, written by fellow Harvard alum Erich Segal.

    When rooming together at the University of Minnesota, Tony Dungy and Flip Saunders dreamed of capturing a national championship for the school, Dungy on the football field and Saunders on the basketball court. Both went on to make their mark as coaches. Dungy led the Indianapolis Colts to a Super Bowl victory in 2007, and Saunders took the Detroit Pistons to three straight Eastern Conference Finals in 2006, 2007, and 2008.

    Filmmaker Wes Anderson and actor Owen Wilson have had great success collaborating on movies from Rushmore to The Royal Tenenbaums. And it all started when they were roommates at the University of Texas, where they co-wrote Anderson’s directorial debut Bottle Rocket. That’s not the only writing Anderson did with Wilson, though; he wrote a paper about Edgar Allan Poe for him in order to score the better bedroom in their apartment.

    Charlie Weis may have run a pretty no-nonsense football program at Notre Dame, but back when he was a student there, he was fond of playing pranks on roommate (and quarterback) Joe Montana.

    Actor Ving Rhames got an assist for his successful career from roommate Stanley Tucci. While at SUNY Purchase, Tucci convinced Rhames to shorten his name from Irving. And let’s face it, that was probably a good career move; Marsellus Wallace just wouldn’t be the same played by someone named Irving.

    When you’ve got two Rhodes Scholars rooming together at Oxford, you know there’s bound to be some brainpower. But the pairing of future U.S. president Bill Clinton with future TIME editor and U.S. diplomat Strobe Talbott just seems excessive.

    Bill Clinton may have had a high-powered roommate, but his wife’s roomie was no slouch either. At Wellesley, Hillary Clinton roomed with Janet Hill, a future attorney. But you may know Hill more for her son, former NBA star Grant Hill. The ties between the families continued past college: Grant has been a public supporter of the Democratic Party, and on the night he was drafted, he got a congratulatory call from then-president Bill.

    While attending the School of the Museum of Fine Arts in Boston, Peter Wolf, lead vocalist for the J. Geils Band, ended up rooming with surrealist director David Lynch. Ironically, Lynch eventually kicked Wolf out of the apartment, saying that he was “too weird.” As if anything could be too weird for David Lynch.

    If any roommate on this list would be too weird, you might have thought it would be creepy illustrator Edward Gorey. But apparently poet Frank O’Hara didn’t mind when the two roomed together at Harvard.

    At Juilliard, roomies Robin Williams and Christopher Reeve vowed to always be friends and help each other throughout life. Both held true to the promise and they remained close. Williams even covered some of Reeve’s medical expenses after he was paralyzed. Of course, being Robin Williams, he couldn’t stop there; Williams visited Reeve dressed as a doctor, pretending to be his proctologist, reportedly causing Reeve to smile for the first time since his accident.

    When they were friends in Michigan, Steve Mariucci and Tom Izzo would race up the steps of a man-made ski jump with their dream jobs riding on the outcome: If Izzo won, he’d become head basketball coach at Notre Dame, but if Mariucci won, he’d be the school’s football coach. The two roomed together at Northern Michigan University and then went on to live their childhood dreams. Izzo became head basketball coach at Michigan State University, while Mariucci went on to coach the San Francisco 49ers.

    Canadian politicians Bob Rae and Michael Ignatieff roomed together at the University of Toronto. The two were close friends, coming from similar backgrounds, and spent a lot of time together. They remained close, even in 2006 when they ran against each other for leadership of the Liberal Party.

    A version of this story ran in 2009; it has been updated for 2022.

    [ad_2]

    Jason Plautz

    Source link

  • Why Do We Yawn? Science Has Some Theories

    Why Do We Yawn? Science Has Some Theories

    [ad_1]

    The short answer is that no one really knows why we yawn. But people have theories.

    [ad_2]

    Matt Soniak

    Source link

  • 8 Sparkling Facts About Champagne

    8 Sparkling Facts About Champagne

    [ad_1]

    As midnight approaches on December 31, more than a few of us will pop open a bottle or two of champagne to help ring in the New Year. With a few choice facts about the bubbly stuff, you can look knowledgeable rather than just tipsy when you drain your flute. Here are a few little nuggets you can share with fellow revelers.

    A waiter pours champagne into a tower of glasses to celebrate the opening of a new Casino at the Ritz Hotel, London.

    Don’t try this at home (unless you want to clean a lot of sticky champagne off your floor). / Evening Standard/GettyImages

    Strictly speaking, champagne is a sparkling wine that comes from the Champagne region of northeastern France. If it’s a bubbly wine from another region, it’s sparkling wine, not champagne. While many people use the term “champagne” generically for any sparkling wine, the French have maintained the exclusive legal right to call their sparkling wines champagne for over a century. The Treaty of Madrid, signed in 1891, established this rule, and the Treaty of Versailles reaffirmed it.

    The European Union helps protect this exclusivity now, although certain American producers can still generically use “champagne” on their labels if they were using the term before early 2006.

    Sparkling wines can be made in a variety of ways, but traditional champagne comes to life by a process called the méthode champenoise. Champagne starts its life like any normal wine: The grapes are harvested, pressed, and allowed to undergo a primary fermentation. The acidic results of this process are then blended and bottled with a bit of yeast and sugar so the wine can undergo a secondary fermentation in the bottle. (It’s this secondary fermentation that gives champagne its bubbles.) The new yeast starts doing its work on the sugar, then dies and becomes what’s known as lees. The bottles are then stored horizontally so the wine can “age on lees” for 15 months or more.

    After this aging, winemakers turn the bottles upside down so the lees can settle into the neck. Once the dead yeast has settled, producers open the bottles to remove it, add a bit of sugar known as dosage to determine the sweetness of the champagne, and slip a cork into the bottle.

    Several factors make the chardonnay, pinot noir, and pinot meunier grapes grown in the Champagne region particularly well suited for crafting delicious wines. The northern location makes it a bit cooler than France’s other wine-growing regions, which gives the grapes the proper acidity for sparkling wine production. Moreover, the porous, chalky soil of the area—the result of large earthquakes millions of years ago—aids in drainage.

    Bottles of Prosecco Treviso on a shelf

    Prosecco is a popular type of sparkling wine. / SOPA Images/GettyImages

    Although many champagnes are delightful, most of the world’s wine regions make tasty sparkling wines of their own. You can find highly regarded sparkling wines from California, Spain, Italy, Australia, and other areas without shelling out big bucks for Dom Perignon.

    Illustration of Dom Pierre Perignon making champagne. / Stefano Bianchetti/GettyImages

    Contrary to popular misconception, the namesake of the famous brand didn’t invent champagne. But Perignon, a Benedictine monk who worked as cellar master at an abbey near Epernay during the 17th and 18th centuries, did have quite an impact on the champagne industry.

    In Perignon’s day, sparkling wine wasn’t really a sought-after beverage. In fact, the bubbles were considered to be something of a flaw, and early production methods made producing the wine somewhat dangerous. (Imprecise temperature controls could lead to fermentation starting again after the wine was in the bottle. If one bottle in a cellar exploded, the shock could set off a chain reaction.) Perignon helped standardize production methods to avoid these explosions, and he also added two safety features to his wines: thicker glass bottles that better withstood pressure and a rope snare that helped keep corks in place.

    You’ll see these terms on champagne labels to describe how sweet the good stuff in the bottle is. The sugar dosage is added to the bottle right before it’s corked, and these terms describe exactly how much sugar went in. Extra brut has less than six grams of sugar per liter added, while brut contains less than 15 grams of additional sugar per liter. Several other classifications exist, but drier champagnes are more common.

    Dan Gurney, A.J. Foyt, Jo Siffert, Rainer Schlegelmilch after the 24 Hours of Le Mans race

    Dan Gurney and A.J. Foyt spraying people with champagne in 1967. / Bernard Cahier/GettyImages

    Throughout its history, champagne has been a celebratory drink that’s made appearances at coronations of kings and the launching of ships. However, the bubbly-spraying throwdowns that now accompany athletic victories are a much more recent development. When Dan Gurney and A.J. Foyt won the grueling 24 Hours of Le Mans race in 1967, they ascended the winner’s podium with a bottle of champagne in hand. Gurney looked down and saw team owner Carroll Shelby and Ford Motor Company CEO Henry Ford II standing with some journalists and decided to have a bit of fun. He gave the bottle a shake and sprayed the crowd, and a new tradition was born.

    After the French Revolution, members of Napoleon’s cavalry decided that the normal pop-and-foam ritual of opening a bottle of champagne just wasn’t as visually impressive as it could be. They responded by popularizing a way of opening bottles using a sword. The technique, known as sabrage, involved holding a bottle at arm’s length while quickly running a saber down the bottle toward the neck. When the saber’s blade struck the glass lip just beneath the cork, the glass would break, shooting off the cork and neck of the bottle while leaving the rest of the vessel intact.

    Ceremonial “champagne swords” are available for just this purpose, and if you can pull off this trick, you’ll be the toast of your shindig. (Be careful, though. A flying champagne cork is already you’ll-put-your-eye-out dangerous, and adding a ring of ragged broken glass to the equation doesn’t make the whole endeavor any safer.)

    A version of this story originally ran in 2008; it has been updated for 2023.

    [ad_2]

    Ethan Trex

    Source link

  • The Zany History of Mini Golf

    The Zany History of Mini Golf

    [ad_1]

    Whether you call it mini golf, putt putt, or a cheap date, miniature golf has been popular since the 19th century.

    The oldest mini golf course in existence, according to Guinness World Records, can actually be found in Scotland: The St Andrews Ladies Putting Club was formed in 1867 as a members-only green for women golfers. Of course, the club was a result of the conventions of the day, which decreed it improper for ladies to “take the club back past their shoulder.” There may not have been any windmills or loop-the-loop obstacles on this course, but the green was—and remains—one of the most prestigious miniature courses around.

    A vintage postcard of the St Andrews golf club. / Print Collector/GettyImages

    The early miniature golf courses fell under a few broad categories, including the pitch and putt, the regulation par-3, and the executive course. All of them used a short driver along with a putter and kept the same design elements as full-sized courses: sand traps, hills, ponds, and trees.

    In 1916, James Barber designed a miniature golf course in North Carolina called Thistle Dhu. The course was compact and featured a classical design, with fountains, gardens, and geometric walkway patterns. In 1926, a few innovative designers created miniature golf courses on the roof of a New York City skyscraper, and other buildings followed suit; approximately 150 rooftop courses were in existence by the end of the decade in New York City alone.

    Once the Great Depression hit, regulation miniature golf courses were too expensive for most to afford, so “rinkie-dink” courses sprang up. These courses included obstacles scrounged from whatever was around: tires, rain gutters, barrels, and pipes. Eventually, the wild obstacles became so popular that they became a regular feature in courses all over America.

    As for the first miniature golf franchise, you have 1929’s Tom Thumb Golf to thank for that. In the early 1930s, it was estimated that approximately 25 percent of the miniature golf courses in the U.S. were Tom Thumb-patented designs. Building on the popularity of the rinkie-dink courses, the Tom Thumbs featured similar hazards, built by workers in their “fantasy factory.” By the end of the 1930s, some 4 million Americans were playing miniature golf.

    In 1953, however, a mini golf revolution occurred. Don Clayton, the founder of Putt Putt Golf and Games, was fed up with the “trick shots” in the Tom Thumb style courses, and became an advocate for miniature golf as a serious sport. He designed a back-to-basics course of only straight putts, with none of the gimmicky hazards players had come to love.

    A woman celebrates while playing mini golf. / Walter B. McKenzie/Stockbyte via Getty Images

    Unfortunately for Clayton, his vision didn’t hold out. In 1955, Al Lomma and Lomma Enterprises, Inc. ushered in a new era of mechanically animated hazards like rotating windmill blades, twisting statues, and moving ramps, and the trend remained for decades.

    Toward the end of the 1990s, country-club style miniature golf courses began to make a comeback, thanks in part to the interest of well-known celebrity golfers like Jack Nicklaus. Today, miniature golf competitions are held not only on courses with windmills and castles, but also on miniature replicas of famous greens, with the same sand and water traps courses used back in the early 20th century.

    A version of this story ran in 2008; it has been updated for 2023.

    This article was written by Ransom Riggs and excerpted from the Mental Floss book In the Beginning: The Origins of Everything.

    [ad_2]

    mentalfloss.com

    Source link