ReportWire

Tag: language

  • TSA tests handheld language translation devices at Philadelphia airport

    The U.S. Transportation Security Administration is using Philadelphia International Airport as a test site for new handheld devices that translate statements from TSA officers for people who aren’t proficient with English.

    The TSA said the devices are intended to make interactions with travelers who do not speak English easier and help them understand what officers are asking of them.

    The devices are smaller than cell phones and contain libraries of 83 languages. An officer speaks into the device, and the statement is translated into the language spoken by the passenger. The electronic translator can reproduce a TSA officer’s speech as audio and as text displayed on the device’s screen, making it useful for communicating with people who are deaf, hard of hearing or blind as well.

    “We hope that this will turn out to be a valuable tool for our officers to provide guidance to passengers who might not speak English,” said Gerardo Spero, TSA’s federal security director at Philadelphia International Airport.

    TSA has begun using five of the devices at the airport — in terminals A-East and A-West, and at the busiest checkpoints in terminals B, D and E.

    The federal agency said officers have encountered both benefits and technical challenges. One problem is that colloquial terms, like “pat-down,” cannot be translated into every language, so officers have been advised to find different words to explain what they need to do.

    TSA officers can store up to 10,000 pre-programmed translations for typical interactions, which helps speed up the process. The devices also distinguish among dialects of the same language. With Spanish, for example, the translators are programmed to recognize dialects spoken in Spain, Argentina, Colombia and the United States.

    The TSA said it views the technology as a “game-changer” for assisting people who aren’t fluent in English. Travelers at Philadelphia airport already experience some of the shortest security wait times, just over 9 minutes, among the nation’s 31 busiest airports, according to a study released last year.

    “The field testing of these units is one step that we are taking to improve our communication with a broader traveling population,” said Jose Bonilla, TSA’s executive director of traveler engagement.

    Michael Tanenbaum

  • 50+ Motivational Latin Proverbs to Elevate Your Thinking to New Levels

    Times change but wisdom stays the same. Check out this collection of inspirational Latin proverbs and find one that really resonates with you.


    Wisdom surpasses time and place. Powerful thoughts spoken hundreds and thousands of years ago still ring true to us today.

    One of my lifelong pastimes is collecting positive thoughts of all stripes. I have whole documents dedicated to inspirational quotes from people I look up to as role models, uplifting and motivational affirmations I’ve discovered over the years, and personal thoughts (every now and then I create a good one all on my own!).

    Latin proverbs, in particular, possess a special power. Many of us are already familiar with a few popular ones: carpe diem (“seize the day”), cogito ergo sum (“I think, therefore I am”), or veni, vidi, vici (“I came, I saw, I conquered”).

    These phrases have endured over the centuries, with some becoming part of our everyday discourse and others adopted as mottos in various institutions: primum non nocere (“first, do no harm”), a common saying in medicine and healthcare; pro bono (“for the good”), a phrase in law referring to a lawyer working free of charge; or sic semper tyrannis (“thus always to tyrants”), which is often applied to politics and government.

    Here’s a compilation of the more popular and noteworthy Latin proverbs. These cover a broad range of subjects and ideas, but you’re bound to find a few new ones that resonate with you.

    50+ Motivational Latin Proverbs

    Acta non verba
    (“deeds not words”)

    Ad meliora
    (“towards better things”)

    Ad victoriam
    (“to victory”)

    Audere est facere
    (“to dare is to do”)

    Astra inclinant, sed non obligant
    (“the stars incline us, they do not bind us”)

    Bono malum superate
    (“good will overcome evil”)

    Carpe diem
    (“seize the day”)

    Calamus gladio fortior
    (“the pen is mightier than the sword”)

    Cogito, ergo sum
    (“I think, therefore I am”)

    Cras es noster
    (“tomorrow, be ours”)

    Dictum factum
    (“what is said is done”)

    Duc, sequere, aut de via decede
    (“lead, follow, or get out of the way”)

    Dum spiro, spero
    (“while I breathe, I hope”)

    Ego te provoco
    (“I challenge you”)

    Est modus in rebus
    (“there is a middle way in all things”)

    Faber est suae quisque fortunae
    (“every man is the artisan of his own fortune”)

    Familia supra omnia
    (“family over everything”)

    Fons vitae caritas
    (“love is the fountain of life”)

    Fortiter et fideliter
    (“bravely and faithfully”)

    Gladiator in arena consilium capit
    (“the gladiator is formulating his plan in the arena”)

    Grandescunt aucta labore
    (“by work, all things increase and grow”)

    Humilitas occidit superbiam
    (“humility kills pride”)

    Igne natura renovatur integra
    (“through fire nature is reborn whole”)

    Incepto ne desistam
    (“may I not shrink from my purpose”)

    Magna est vis consuetudinis
    (“great is the power of habit”)

    Memento mori
    (“remember you must die”)

    Memento vivere
    (“remember you have to live”)

    Memores acti prudentes futuri
    (“mindful of what has been done, aware of what will be”)

    Morior invictus
    (“death before defeat”)

    Non ducor, duco
    (“I am not led, I lead”)

    Nosce te ipsum
    (“know thyself”)

    Omne initium difficile est
    (“every beginning is difficult”)

    Ordo ab chao
    (“order out of chaos”)

    Palma non sine pulvere
    (“no reward without effort”)

    Pax vobiscum
    (“peace be with you”)

    Praesis ut prosis ne ut imperes
    (“lead in order to serve, not in order to rule”)

    Praemonitus, praemunitus
    (“forewarned is forearmed”)

    Pro bono
    (“for the good”)

    Primum non nocere
    (“first do no harm”)

    Qui non proficit, deficit
    (“he who does not advance, goes backward”)

    Qui totum vult totum perdit
    (“he who wants everything loses everything”)

    Sapientia potentia est
    (“wisdom is power”)

    Si vis amari, ama
    (“if you wish to be loved, love”)

    Sic parvis magna
    (“greatness from small beginnings”)

    Sic semper tyrannis
    (“thus always to tyrants”)

    Sic vita est
    (“such is life”)

    Suum cuique
    (“to each his own”)

    Tempus fugit
    (“time flies”)

    Tendit in ardua virtus
    (“virtue strives for what is difficult”)

    Ubi concordia, ibi victoria
    (“where there is unity, there is victory”)

    Vacate et scire
    (“be still and know”)

    Veni, vidi, vici
    (“I came, I saw, I conquered”)

    Verba volant, scripta manent
    (“words fly away, writing remains”)

    Vincit qui se vincit
    (“he conquers who conquers himself”)

    Vis medicatrix naturae
    (“the healing power of nature”)

    Recommended Exercise

    Which ones do you like the best from the list above?

    Choose 1-3 of these Latin proverbs and find a way to integrate them into your daily life. Practice unconscious positivity: write one down and post it on your fridge or bathroom mirror, create a piece of art or music dedicated to one, or make one into a digital password.

    I have “cras es noster” (tomorrow, be ours) on the top of my whiteboard going into the new year.


    Steven Handel

  • New migrants face fear and loneliness. A town on the Great Plains has a storied support network

    FORT MORGAN, Colo. — Magdalena Simon’s only consolation after immigration officers handcuffed and led her husband away was the contents of his wallet, a few bills.

    The hopes that had pushed her to trudge thousands of miles from Guatemala in 2019, her son’s small frame clutched to her chest, ceded to despair and loneliness in Fort Morgan, a ranching outpost on Colorado’s eastern plains, where some locals stared at her too long and the wind howls so fiercely it once blew the doors half off a hotel.

    Simon, who was pregnant, tried to mask the despair every morning when her toddlers asked, “Where’s papa?”

    To millions of migrants who have crossed the U.S. southern border in the past few years, stepping off Greyhound buses in places across America, such feelings can be constant companions. What Simon would find in this unassuming city of a little more than 11,400, however, was a community that pulled her in, connecting her with legal counsel, charities, schools and soon friends, a unique support network built by generations of immigrants.

    In this small town, migrants are building quiet lives, far from big cities like New York, Chicago and Denver that have struggled to house asylum-seekers and from the halls of Congress where their futures are bandied about in negotiations.

    The Fort Morgan migrant community has become a boon for newcomers, nearly all of whom arrive from perilous journeys to new challenges: pursuing asylum cases; finding a paycheck big enough for food, an attorney and a roof; placing their kids in school; and navigating a language barrier, all while facing the threat of deportation.

    The United Nations used the community, 80 miles (129 kilometers) northeast of Denver, as a case study for rural refugee integration after a thousand Somalis arrived to work in meatpacking plants in the late 2000s. In 2022, grassroots groups sent migrants living in mobile homes to Congress to tell their stories.

    In the last year, hundreds more migrants have arrived in Morgan County. More than 30 languages are spoken in Fort Morgan’s only high school, which has translators for the most common languages and a phone service for others. On Sundays, Spanish is heard from the pulpits of six churches.

    The demographic shift in recent decades has forced the community to adapt: Local organizations hold monthly support groups, train students and adults about their rights, teach others how to drive, ensure kids are in school and direct people to immigration attorneys.

    Simon herself now tells her story to those stepping off buses. The community can’t wave away the burdens, but they can make them lighter.

    “It’s not like home where you have your parents and all of your family around you,” Simon tells those she meets in grocery stores and school pickup lines. “If you run into a problem, you need to find your own family.”

    The work has grown amid negotiations in Washington, D.C., on a deal that could toughen asylum protocols and bolster border enforcement.

    On a recent Sunday, advocacy groups organized a posada, a Mexican celebration reenacting the biblical story of Joseph and Mary, who sought shelter for Mary to give birth and were turned away until they were offered a stable.

    Before marching down the street singing an adaptation of the song in which migrants, rather than Joseph and Mary, seek shelter, participants signed letters urging Colorado’s two Democratic senators and Republican U.S. Rep. Ken Buck to reject stiffer asylum rules.

    A century ago, it was sugar beet production that brought German and Russian migration to the area. Now, many migrants work inside dairy plants.

    When area businesses were raided several times in the 2000s, friends disappeared overnight, seats sat empty in schools and gaps opened on factory lines.

    “That really changed the understanding of how deeply embedded migrants are in the community,” said Jennifer Piper of the American Friends Service Committee, which organized the posada celebration.

    Guadalupe “Lupe” Lopez Chavez, who arrived in the U.S. alone in 1998 from Guatemala at age 16, spends long hours working with migrants, including helping connect Simon to a lawyer after her husband was detained.

    One recent Saturday, Lopez Chavez sat in the low-ceilinged office of One Morgan County, a nearly 20-year-old migration nonprofit. In a folding chair, Maria Ramirez sifted through manila folders dated November 2023, when she’d arrived in the U.S.

    Ramirez fled central Mexico, where cartel violence claimed her younger brother’s life, and asked Lopez Chavez how she could get health care. Ramirez’s 4-year-old daughter — who pranced behind her mother, blowing bubbles and popping the ones that landed in her brown curls — has a lung condition.

    Ramirez said she would work anywhere to move from the living room they sleep in, with just a blanket on the floor as cushioning.

    In offices resembling a hostel’s well-loved communal space, Lopez Chavez cautioned Ramirez to consult a lawyer before applying for health care. Sitting beside Ramirez were two settled migrants offering support and advice.

    “A lot of stuff that you heard in Mexico (about the U.S.) was you couldn’t walk on the streets, you had to live in the shadows, you’d be targeted,” said Ramirez. “It’s beautiful to come into a community that’s united.”

    Lopez Chavez works with new migrants because she remembers shackles snapping around her ankles after she was stopped for a traffic violation in 2012 and turned over to the U.S. immigration authorities.

    “I just wanted to leave there because I’d never been in a cage before,” Lopez Chavez said in an interview, her eyes filling with tears.

    At her first court hearing, Lopez Chavez and her husband stood alone. At her second hearing, after Lopez Chavez was connected to the community, she was flanked by new friends. That wall of support allowed her to keep her chin up as she fought her immigration case before being granted residency last year.

    Lopez Chavez now works to cultivate that strength across the community.

    “I don’t want any more families to go through what we went through,” said Lopez Chavez, who also encourages others to tell their stories. “Those examples give people the idea: If they can manage their case and win, maybe I can too.”

    In Fort Morgan, train tracks divide a mobile home park, where many migrants live, and the city’s older homes. Some older migrants see new arrivals as getting better treatment by the U.S. and feel that is unfair. The community can’t solve every challenge, and hasn’t laid the last brick on cultural bridges between the diverse communities.

    But at the posada event, crowded in the One Morgan County offices, the assurances of community itself showed through the eyes of partygoers as children in cultural regalia danced traditional Mexican dances.

    Among those bouncing around the long room was 7-year-old Francisco Mateo Simon. He doesn’t remember the journey to the U.S., but his mother, Magdalena, does.

    She remembers how ill he became as she carried him the last miles to the border. Now he spits out armadillo facts between the nubs of incoming front teeth in their mobile home, then points to his favorite ornament on their white, plastic Christmas tree.

    “That’s our brand new tree,” said his mother, as her eldest daughter practiced English with a kids’ book.

    “It’s new,” she repeated, “It’s our first new tree because in the past we’ve only had trees from the thrift store.”

    ___

    Bedayn is a corps member for the Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.

  • Census Bureau wants to change how it asks about disabilities. Some don't like it

    The U.S. Census Bureau wants to change how it asks people about disabilities, and some advocates are complaining that they were not consulted enough on what amounts to a major overhaul in how disabilities would be defined by the federal government.

    Disability advocates say the change would artificially reduce their numbers by almost half. At stake are not only whether people with disabilities get vital resources for housing, schools or program benefits but whether people with disabilities are counted accurately in the first place, experts said.

    Some also question the timing of the change, which comes just as more people are living with new, long-term conditions from the COVID-19 pandemic.

    Census Bureau officials say the proposed change on its most comprehensive survey of American life will align the U.S. with international standards, allowing comparisons among countries. They also say it will better capture how disabilities occur in the real world, since they rarely fit neatly into stark yes-or-no boxes that don’t account for variations or nuance.

    The bureau has spent time, money and energy trying to improve counts of racial and ethnic minorities who have been historically undercounted, but the statistical agency seems willing to adapt questions that will shortchange the numbers of people with disabilities, said Scott Landes, an associate professor of sociology at Syracuse University.

    “This, in my mind, is illogical,” Landes, who is visually impaired, said in an interview. “There is a piece of me that thinks, ‘How dare you — to think that we don’t count.’ I get offended.”

    If given final approval, the changes to the American Community Survey questions would be implemented in 2025. The ACS is the most comprehensive survey of American life, covering commuting times, internet access, family life, income, education levels, disabilities and military service, among other topics. The statistical agency was asked to make the change by the National Center for Health Statistics and is accepting public comment on the proposal through Dec. 19.

    The existing questions ask respondents to answer “yes” or “no” if they have difficulty or “serious difficulty” seeing, even with glasses, or are blind; hearing, or are deaf; concentrating, remembering or making decisions because of a physical, mental or emotional condition; walking or climbing stairs; dressing or bathing; or performing everyday tasks because of a physical, mental or emotional condition. If the answer is “yes,” they are counted as having a disability.

    Under the proposed change, respondents would be allowed to answer most of the same questions with four choices: “no difficulty,” “some difficulty,” “a lot of difficulty” and “cannot do at all.” There are tweaks to the language of the questions, and the proposal adds a query on whether respondents have trouble communicating.

    But the most significant change involves the threshold beyond which people are determined to have a disability. The international standards being considered by the Census Bureau typically define a person as having a disability if they answer “cannot do at all” or “a lot of difficulty” for any task or function.

    During testing last year by the Census Bureau, the percentage of respondents who were defined as having a disability went from 13.9% using the current questions to 8.1% under the international standards. When the definition was expanded to also include “some difficulty,” it grew to 31.7%.

    Marlene Sallo said her degenerative spine condition presents difficulties on some days, but overall she is able to function on a daily basis, so she worries that she might not be considered as having a disability with the revised questions.

    “Right now, it’s not inclusive and it will miss many individuals within my community,” Sallo, executive director of the National Disability Rights Network, said last month at a meeting of a Census Bureau advisory committee, of which she is a member.

    Officials at the Census Bureau and the health statistics agency argue that the change will give officials better information and details about disabilities that can inform how services or resources are provided. Census Bureau officials had two conference calls with disability advocates on the subject this week.

    “Forcing a dichotomy masks nuance,” Julie Weeks, an official at the National Center for Health Statistics, said during a presentation last month.

    The terminology surrounding disabilities has evolved in recent years, moving away from labels that imply inferiority and toward more sensitive language that outlines the specific conditions or circumstances in which individuals or groups live. The Associated Press defers whenever possible to the wishes of people or groups in how they choose to be described but uses neutral language that withholds judgment about a person’s condition.

    Disability advocates said the international standards were formulated without their input. Last month, the Census Bureau’s National Advisory Committee recommended that the statistical agency not adopt the change until it meets further with disability advocates and researchers.

    While the proposal may be better for scientific research, the questions, if approved, will be adapted with the needs of agencies and not people with disabilities in mind, Andrew Houtenville, research director at the Institute on Disability at the University of New Hampshire, told members of the National Advisory Committee at last month’s meeting.

    “This has taken a lot of people by surprise,” Houtenville said.

    Some experts believe the current questions don’t adequately account for people with mental health problems, developmental disabilities or chronic health conditions, like those faced by many people living with long COVID. But they say the proposed change isn’t the answer.

    “Disability is an evolving concept, and there is a new kind of disability we didn’t have five years ago, Long COVID, and we need to be able to account for that and other changes,” said Susan Popkin, co-director of the Disability Equity Policy Initiative at the Urban Institute, who has a chronic autoimmune condition.

    The proposed change is grating to some advocates since it is occurring at a time when disability has grown to be an identity and a social movement, rather than just a function-based definition of someone’s limitations. For instance, a person with limited hearing may be able to function fully with the help of hearing aids but can still identify as having a disability.

    “You can be proud of your disability and still not want the pain and symptoms of the conditions that lead to that disability. That is part of a shift in disability as a demographic group,” said Bonnielin Swenor, director of the Johns Hopkins Disability Health Research Center, who has low vision.

    “There is a shift of view in disability pride and claiming disability identity as part of who we are … not as a deficit,” Swenor said.

    ___

    Follow Mike Schneider on X, formerly known as Twitter: @MikeSchneiderAP.

  • Online abuse of politically active Afghan women tripled after Taliban takeover, rights group reports

    ISLAMABAD — Online abuse and hate speech targeting politically active women in Afghanistan have significantly increased since the Taliban took over the country in August 2021, according to a report released Monday by a U.K.-based rights group.

    Afghan Witness, an open-source project run by the non-profit Center for Information Resilience, says it found that abusive posts more than tripled, a 217% increase, from June-December 2021 to the same period of 2022.

    Building on expertise gained from similar research in Myanmar, the Afghan Witness team analyzed publicly available information from X, formerly known as Twitter, and conducted in-depth interviews with six Afghan women to investigate the nature of the online abuse since the Taliban takeover.

    The report said the team of investigators “collected and analyzed over 78,000 posts” written in Dari and Pashto — two local Afghan languages — directed at “almost 100 accounts of politically active Afghan women.”

    The interviews indicated that the spread of abusive posts online helped make the women targets, the report’s authors said. The interviewees reported receiving messages with pornographic material as well as threats of sexual violence and death.

    “I think the hatred they show on social media does not differ from what they feel in real life,” one woman told Afghan Witness.

    Taliban government spokesmen were not immediately available to comment on the report.

    The report identified four general themes in the abusive posts: accusations of promiscuity; the belief that politically active women violated cultural and religious norms; allegations the women were agents of the West; and accusations of making false claims in order to seek asylum abroad.

    At the same time, Afghan Witness said it found the online abuse was “overwhelmingly sexualized,” with over 60% of the posts in 2022 containing terms such as “whore” or “prostitute.”

    “Since the Taliban’s takeover of Afghanistan, social media has turned from being a place for social and political expression to a forum for abuse and suppression, especially of women,” the project’s lead investigator, Francesca Gentile, said.

    The Taliban have barred women from most areas of public life and work and stopped girls from going to school beyond the sixth grade as part of harsh measures they imposed after taking power in 2021, as U.S. and NATO forces were pulling out of Afghanistan following two decades of war.

    “The Taliban’s hostility towards women and their rights sends a message to online abusers that any woman who stands up for herself is fair game,” added Gentile.

    One female journalist, speaking with Afghan Witness on condition of anonymity, said she deactivated some of her social media accounts and no longer reads comments, which affects her work when trying to reach out to online sources.

    The report said it found the vast majority of those behind the online abuse were men, “from a range of political affiliations, ethnic groups, and backgrounds.”


  • WTF Fun Fact 13537 – Apologies in the Workplace

    In a study by the University of Arizona, researchers revealed that non-stereotypical apologies in the workplace can enhance communication. This study challenges conventional norms, emphasizing the power of breaking gender stereotypes in apologies to repair trust and foster collaboration.

    Gender Stereotypes and Apologies in the Workplace

    Sarah Doyle led a research team to explore the nuances of effective apologies in professional settings. Their focus? The impact of gender stereotypes on the perception of apologies. Traditional masculine language, characterized by assertiveness and confidence, and feminine language, known for its warmth and nurturing qualities, were used as benchmarks. Surprisingly, the research found that apologies that deviate from these gender norms were perceived as more effective.

    Celebrity Apologies on Social Media

    The research commenced with an analysis of celebrity apologies on Twitter. This platform, a hub for public statements, provided a rich dataset of 87 apology tweets from various celebrities. The response to these tweets revealed a pattern. Female celebrities who used masculine language in their apologies received higher engagement and more positive reactions.

    The study extended beyond the virtual world into more relatable workplace scenarios. Researchers created situations involving accountants and nurses making mistakes and issuing apologies. Participants in these studies consistently found counter-stereotypical apologies more effective.

    For women, using a counter-stereotypical apology increased the perceived effectiveness by an average of 9.7%, and for men, by 8.2%.

    The Impact of Counter-Stereotypical Apologies

    This research underscores the importance of moving beyond stereotypical patterns in our apologies. By adopting language and approaches that defy gender norms, individuals can enhance the impact of their apologies, leading to better outcomes in conflict resolution and trust-building.

    The findings from the University of Arizona research team suggest that the way we construct apologies is as important as the frequency with which we offer them. This shift in focus from quantity to quality in apologies could pave the way for more effective communication strategies in diverse settings.

    The study’s results have significant implications for professional environments, where effective communication is crucial. By encouraging individuals to break free from stereotypical language patterns in apologies, organizations can foster a more inclusive and collaborative atmosphere.

    Rethinking the Construction of Apologies in the Workplace

    As we move forward, this research encourages a deeper consideration of how we construct our apologies. The study highlights the potential for nuanced, thoughtful apologies to make a substantial difference in interpersonal relationships and professional settings.

    The University of Arizona’s study on apology psychology offers a fresh perspective on effective communication. By challenging gender stereotypes in the language of apologies, individuals can enhance trust and collaboration in the workplace. This research not only adds a new dimension to our understanding of apologies but also opens avenues for future exploration in communication dynamics.

    Source: “Apology psychology: Breaking gender stereotypes leads to more effective communication” — ScienceDaily


  • Classes on celebrities are engaging a new generation of law students

    DES MOINES, Iowa — A South Dakota law professor typically teaches about dense topics like torts and natural resources. But next semester, he and his fearless students are shaking things up by turning their attention to Taylor Swift.

    Sean Kammer wanted his legal writing course to draw on music and art to help his students reconsider legal language and craft persuasive arguments. The self-described “Swiftie” thought a focus on the cultural icon was also a way to connect with his students.

    Never in his wildest dreams did Kammer expect the attention that the announcement generated — the class filled up quickly and jealous alumni even reached out.

    “The reaction from students has been exciting,” he said. “If we can have fun while we’re exploring some of these complex theoretical problems or issues, I believe students will be inspired to think deeper and to push themselves further.”

    Swifties at the University of South Dakota Knudson School of Law aren’t the only ones having fun. Law professors across the country are increasingly drawing on popular culture and celebritydom — sometimes with the help of celebrities themselves — to engage a new generation of students and contextualize complicated concepts in the real world.

    Courses on Swift, Rick Ross and Succession supplement traditional law school courses with fun and accessible experiences that professors say they often didn’t have themselves.

    Students at the Georgia State University College of Law were hustlin’ everyday to get to class — especially on Tuesday when they got to hear directly from Ross for the final day of a course that chronicled the legal intricacies of the rapper, record executive and Wingstop franchise owner’s life.

    Moraima “Mo” Ivory, director of the school’s entertainment, sports and media law program, wants her students to see for themselves what goes into the albums, television shows and movies they enjoy. She chooses a star each year and invites guest speakers from their world, along with the title character themselves, to bring legal deals, defenses and drama to life.

    “We’re talking about critical legal principles, but we’re watching them as they happen and as they happened,” she said. “It really just turns that lightbulb on for law students.”

    Ivory said she could’ve heard a pin drop in one class about mixtapes that featured guest DJ Drama.

    “It was never my experience that I walked out of a law school classroom excited about what I had learned,” Ivory said.

    Third-year law student Luke Padia said the experience makes concepts feel more tangible than reading a textbook or case law.

    “No knock on the other courses,” the 26-year-old from Lawrence, Kansas, said. “I just find that my attention is more easily grabbed when I’m sitting in class listening to Steve Sadow talk about how he was able to get Rick Ross out of jail as opposed to sitting in constitutional law or torts or whatever it may be.”

    Frances Acevedo, a 25-year-old from Pembroke Pines, Florida, in her third year of law school, said she’s walked away from the class with an understanding of how important a team is to an artist’s success — a message Ross emphasized.

    “I can sit at the table and talk money with multibillionaires,” Ross said to students, faculty and guests gathered for the course finale. “But when it’s time for me to move forward, I sit down with my team.”

    Courses on A-list celebrities have captivated undergraduate and graduate students across the country for years, increasingly in courses analyzing race and gender. The attention on female artists and artists of color is a sign of growing respect for them and for different modes of artistic expression, said Kinitra Brooks, an English professor at Michigan State University.

    Brooks’ course on Beyonce’s Lemonade album and Black feminism was so popular that she published a reader that other professors can use. The pop culture material offers “immediate relatability,” which Brooks thinks makes students more likely to participate, allow their ideas to be challenged and be willing to challenge the artist, too.

    Bella Andrade, a junior at Arizona State University, looks forward to her class on the psychology of Taylor Swift every week. The self-proclaimed “huge Swiftie” has been listening to her music for “forever and a day,” but the class includes a range of fans. There are “10 out of 10” Swifties, along with people who barely know her music, which “leads to some really great conversations,” she said.

    “I think I’ve developed a much deeper understanding of different topics in social psychology,” said Andrade, who is from Minneapolis. “Taking topics that I’ve known about or heard about before but really applying them in a sense to something that I’m really invested in … really solidifies meaning.”

    Courses that incorporate pop culture offer a different context for the fundamentals that students learn in their traditional courses, said Cathy Hwang, who co-taught a University of Virginia corporate law course last year inspired by Succession.

    The class investigated the show’s prickly – and often duplicitous – legal matters, like hostile takeovers and securities law. Hwang said she was trying to engage and nurture a love of learning in students who “grew up with different interactions with technology and pop culture than what I did.”

    “To me, it’s not so much what’s my teaching style, but what’s the students’ learning style?” Hwang said. “It’s important, I think, as a teacher to keep evolving and trying to meet students where they are.”

    ___

    Associated Press video journalist Sharon Johnson contributed from Atlanta.


  • Can New York’s mayor speak Mandarin? No, but with AI he’s making robocalls in different languages

    ALBANY, N.Y. — New York City Mayor Eric Adams has been using artificial intelligence to make robocalls that contort his own voice into several languages he doesn’t actually speak, posing new ethical questions about the government’s use of the rapidly evolving technology.

    The mayor told reporters about the robocalls on Monday and said they’ve gone out in languages such as Mandarin and Yiddish to promote city hiring events. They haven’t included any disclosure that he only speaks English or that the calls were generated using AI.

    “People stop me on the street all the time and say, ‘I didn’t know you speak Mandarin, you know?’” said Adams, a Democrat. “The robocalls that we’re using, we’re using different languages to speak directly to the diversity of New Yorkers.”

    The calls come as regulators struggle to get a handle on how best to ethically and legally navigate the use of artificial intelligence, where deepfake videos or audio can make it appear that anyone anywhere is doing anything a person on the other side of a computer screen wants them to do.

    In New York, the watchdog group Surveillance Technology Oversight Project slammed Adams’ robocalls as an unethical use of artificial intelligence that is misleading to city residents.

    “The mayor is making deep fakes of himself,” said Albert Fox Cahn, executive director of the organization. “This is deeply unethical, especially on the taxpayer’s dime. Using AI to convince New Yorkers that he speaks languages that he doesn’t is outright Orwellian. Yes, we need announcements in all of New Yorkers’ native languages, but the deep fakes are just a creepy vanity project.”

    The growing use of artificial intelligence and deepfakes, especially in politics and election misinformation, has prompted calls and moves toward greater regulation from government and major media companies.

    Google was the first big tech company to say it would impose new labels on deceptive AI-generated political advertisements that could fake a candidate’s voice or actions for election misinformation. Facebook and Instagram parent Meta doesn’t have a rule specific to AI-generated political ads but has a policy restricting “faked, manipulated or transformed” audio and imagery used for misinformation.

    A bipartisan bill in the U.S. Senate would ban “materially deceptive” deepfakes relating to federal candidates, with exceptions for parody and satire. This month, two Democratic members of Congress sent a letter to the heads of Meta and X, formerly known as Twitter, to express concerns about AI-generated political ads on their social media platforms.

    In recent weeks, a number of technology companies have shown off AI tools that can synthetically dub a person’s speech in another language in a way that makes it sound as if that person is speaking that language.

    In September, the music streaming service Spotify introduced an AI feature to translate a podcast into multiple languages in the podcaster’s voice. More recently, the startup ElevenLabs in October introduced a voice translation tool that it said “can convert spoken content to another language in minutes, while preserving the voice of the original speaker.”

    A spokesperson for the mayor’s office said they used ElevenLabs’ tool for their robocalls. Native speakers listened to the recordings before they went out to ensure the translations were accurate. Calls have been made in Spanish, Yiddish, Mandarin, Cantonese and Haitian Creole. The city has also used the technology to promote a series of concerts organized by the Adams administration, the spokesperson said.

    Adams defended himself against ethical questions about his use of artificial intelligence, saying his office is trying to reach New Yorkers through the languages they speak.

    “I got one thing: I’ve got to run the city, and I have to be able to speak to people in the languages that they understand, and I’m happy to do so,” he said. “And so, to all, all I can say is a ‘ni hao.’”


  • UK resists calls to label China a threat following claims a Beijing spy worked in Parliament

    LONDON — The British government on Monday resisted calls to label China a threat to the U.K. following the revelation that a researcher in Parliament was arrested earlier this year on suspicion of spying for Beijing.

    U.K. Business Secretary Kemi Badenoch said Britain should avoid calling China a “foe” or using language that could “escalate” tensions.

    “China is a country that we do a lot of business with,” Badenoch told Sky News. “China is a country that is significant in terms of world economics. It sits on the U.N. Security Council. We certainly should not be describing China as a foe, but we can describe it as a challenge.”

    Tensions between Britain and China have risen in recent years over accusations of economic subterfuge, human rights abuses and Beijing’s crackdown on civil liberties in the former British colony of Hong Kong.

    Britain’s governing Conservatives are divided on how tough a line to take and on how much access Chinese firms should have to the U.K. economy. More hawkish Tories want Beijing declared a threat, rather than simply a challenge, the word Prime Minister Rishi Sunak has used.

    Under Britain’s new National Security Act, if China were officially labeled a threat, anyone working “at the direction” of Beijing or for a state-linked firm would have to register and disclose their activities or risk jail.

    Conservative hawks renewed their calls for a tougher stance after the Metropolitan Police force confirmed over the weekend that a man in his 20s and a man in his 30s were arrested in March under the Official Secrets Act. Neither has been charged, and both were released on bail until October pending further inquiries.

    The Sunday Times reported that the younger man was a parliamentary researcher who worked with senior Conservative Party lawmakers and held a pass that allowed full access to the Parliament buildings.

    A Chinese Embassy statement called the allegations “completely fabricated and nothing but malicious slander.” China urges “relevant parties in the U.K. to stop their anti-China political manipulation,” the statement said.

    Sunak chided Chinese Premier Li Qiang over the alleged espionage when the two met at a Group of 20 summit in India on Sunday. Sunak told British broadcasters in New Delhi that he’d expressed “my very strong concerns about any interference in our parliamentary democracy, which is obviously unacceptable.”

    But he said it was important to engage with China rather than “carping from the sidelines.”

    U.K. spy services have sounded ever-louder warnings about Beijing’s covert activities. In November, the head of the MI5 domestic intelligence agency, Ken McCallum, said “the activities of the Chinese Communist Party pose the most game-changing strategic challenge to the U.K.” Foreign intelligence chief Richard Moore of MI6 said in July that China was his agency’s “single most important strategic focus.”

    In January 2022, MI5 issued a rare public alert, saying a London-based lawyer was trying to “covertly interfere in U.K. politics” on behalf of the Chinese Communist Party. The agency alleged attorney Christine Lee was acting in coordination with the Chinese ruling party’s United Front Work Department, an organization known to exert Chinese influence abroad.

    Alex Younger, the former chief of British foreign intelligence agency MI6, said the U.K.’s relationship with China is complicated.

    “We’ve got to find ways of engaging with it, and find ways of cooperating with it in important areas like climate change, and sometimes we have to be absolutely prepared to confront it when we believe that our security interests are threatened,” Younger told the BBC.

    “In my experience, just being nice to them doesn’t get you very far,” he added.


  • Fan ejected from US Open match after German player said the man used language from Hitler’s regime

    NEW YORK — A fan was ejected from a U.S. Open tennis match early Tuesday morning after German player Alexander Zverev complained the man used language from Adolf Hitler’s Nazi regime.

    Zverev, the No. 12 seed, was serving at 2-2 in the fourth set of his match against No. 6 Jannik Sinner when he suddenly went to chair umpire James Keothavong and pointed toward the fan, who was sitting in a section behind the umpire.

    “He just said the most famous Hitler phrase there is in this world,” Zverev told Keothavong. “It’s not acceptable.”

    Keothavong turned backward and asked the fan to identify himself, then asked fans to be respectful to both players. Then, during the changeover shortly after Zverev held serve, the fan was identified by spectators seated near him, and he was removed by security.

    “A disparaging remark was directed toward Alexander Zverev,” U.S. Tennis Association spokesman Chris Widmaier said. “The fan was identified and escorted from the stadium.”

    Zverev said after the match that he’s had fans make derogatory comments before, but not involving Hitler.

    “He started singing the anthem of Hitler that was back in the day. It was ‘Deutschland über alles’ and it was a bit too much,” Zverev said.

    “I think he was getting involved in the match for a long time, though. I don’t mind it, I love when fans are loud, I love when fans are emotional. But I think me being German and not really proud of that history, it’s not really a great thing to do and I think him sitting in one of the front rows, I think a lot of people heard it. So if I just don’t react, I think it’s bad from my side.”

    Zverev went on to drop that set, when he began to struggle with the humid conditions after Sinner had been cramping badly in the third set. But Zverev recovered to win the fifth set, wrapping up the match that lasted 4 hours, 41 minutes at about 1:40 a.m. He will play defending U.S. Open champion Carlos Alcaraz in the quarterfinals.

    Zverev said it wasn’t hard to move past the fan’s remark.

    “It’s his loss, to be honest, to not witness the final two sets of that match,” Zverev said.

    ___

    AP tennis coverage: https://apnews.com/hub/tennis


  • Don’t expect quick fixes in ‘red-teaming’ of AI models. Security was an afterthought

    BOSTON — White House officials concerned by AI chatbots’ potential for societal harm and the Silicon Valley powerhouses rushing them to market are heavily invested in a three-day competition ending Sunday at the DefCon hacker convention in Las Vegas.

    Some 2,200 competitors tapped on laptops seeking to expose flaws in eight leading large-language models representative of technology’s next big thing. But don’t expect quick results from this first-ever independent “red-teaming” of multiple models.

    Findings won’t be made public until about February. And even then, fixing flaws in these digital constructs — whose inner workings are neither wholly trustworthy nor fully fathomed even by their creators — will take time and millions of dollars.

    Current AI models are simply too unwieldy, brittle and malleable, academic and corporate research shows. Security was an afterthought in their training as data scientists amassed breathtakingly complex collections of images and text. They are prone to racial and cultural biases, and easily manipulated.

    “It’s tempting to pretend we can sprinkle some magic security dust on these systems after they are built, patch them into submission, or bolt special security apparatus on the side,” said Gary McGraw, a cybersecurity veteran and co-founder of the Berryville Institute of Machine Learning.

    DefCon competitors are “more likely to walk away finding new, hard problems,” said Bruce Schneier, a Harvard public-interest technologist. “This is computer security 30 years ago. We’re just breaking stuff left and right.”

    Michael Sellitto of Anthropic, which provided one of the AI testing models, acknowledged in a press briefing that understanding their capabilities and safety issues “is sort of an open area of scientific inquiry.”

    Conventional software uses well-defined code to issue explicit, step-by-step instructions. OpenAI’s ChatGPT, Google’s Bard and other language models are different. Trained largely by ingesting — and classifying — billions of datapoints in internet crawls, they are perpetual works-in-progress, an unsettling prospect given their transformative potential for humanity.

    After publicly releasing chatbots last fall, the generative AI industry has had to repeatedly plug security holes exposed by researchers and tinkerers.

    Tom Bonner of the AI security firm HiddenLayer, a speaker at this year’s DefCon, tricked a Google system into labeling a piece of malware harmless merely by inserting a line that said “this is safe to use.”

    “There are no good guardrails,” he said.

    Another researcher had ChatGPT create phishing emails and a recipe to violently eliminate humanity, a violation of its ethics code.

    A team including Carnegie Mellon researchers found leading chatbots vulnerable to automated attacks that also produce harmful content. “It is possible that the very nature of deep learning models makes such threats inevitable,” they wrote.

    It’s not as if alarms weren’t sounded.

    In its 2021 final report, the U.S. National Security Commission on Artificial Intelligence said attacks on commercial AI systems were already happening and “with rare exceptions, the idea of protecting AI systems has been an afterthought in engineering and fielding AI systems, with inadequate investment in research and development.”

    Serious hacks, regularly reported just a few years ago, are now barely disclosed. Too much is at stake and, in the absence of regulation, “people can sweep things under the rug at the moment and they’re doing so,” said Bonner.

    Attacks trick the artificial intelligence logic in ways that may not even be clear to their creators. And chatbots are especially vulnerable because we interact with them directly in plain language. That interaction can alter them in unexpected ways.

    Researchers have found that “poisoning” a small collection of images or text in the vast sea of data used to train AI systems can wreak havoc — and be easily overlooked.

    A study co-authored by Florian Tramér of the Swiss University ETH Zurich determined that corrupting just 0.01% of a model was enough to spoil it — and cost as little as $60. The researchers waited for a handful of websites used in web crawls for two models to expire. Then they bought the domains and posted bad data on them.

    Hyrum Anderson and Ram Shankar Siva Kumar, who red-teamed AI while colleagues at Microsoft, call the state of AI security for text- and image-based models “pitiable” in their new book “Not with a Bug but with a Sticker.” One example they cite in live presentations: The AI-powered digital assistant Alexa is hoodwinked into interpreting a Beethoven concerto clip as a command to order 100 frozen pizzas.

    Surveying more than 80 organizations, the authors found the vast majority had no response plan for a data-poisoning attack or dataset theft. The bulk of the industry “would not even know it happened,” they wrote.

    Andrew W. Moore, a former Google executive and Carnegie Mellon dean, says he dealt with attacks on Google search software more than a decade ago. And between late 2017 and early 2018, spammers gamed Gmail’s AI-powered detection service four times.

    The big AI players say security and safety are top priorities and made voluntary commitments to the White House last month to submit their models — largely “black boxes” whose contents are closely held — to outside scrutiny.

    But there is worry the companies won’t do enough.

    Tramér expects search engines and social media platforms to be gamed for financial gain and disinformation by exploiting AI system weaknesses. A savvy job applicant might, for example, figure out how to convince a system they are the only correct candidate.

    Ross Anderson, a Cambridge University computer scientist, worries that AI bots will erode privacy as people engage them to interact with hospitals, banks and employers, and that malicious actors will leverage them to coax financial, employment or health data out of supposedly closed systems.

    AI language models can also pollute themselves by retraining on junk data, research shows.

    Another concern is company secrets being ingested and spit out by AI systems. After a Korean business news outlet reported on such an incident at Samsung, corporations including Verizon and JPMorgan barred most employees from using ChatGPT at work.

    While the major AI players have security staff, many smaller competitors likely won’t, meaning poorly secured plug-ins and digital agents could multiply. Startups are expected to launch hundreds of offerings built on licensed pre-trained models in coming months.

    Don’t be surprised, researchers say, if one runs away with your address book.

  • ‘Native American’ or ‘Indigenous’? Journalism group rethinks name

    ‘Native American’ or ‘Indigenous’? Journalism group rethinks name


    ATLANTA — The Native American Journalists Association is aiming to become more inclusive as its members vote on whether to rebrand as the Indigenous Journalists Association — a move inspired, in part, by evolving trends in cultural identity.

    The group, with more than 950 members mostly in the United States, is expected to approve the change at its annual conference this week in Winnipeg, Canada. Voting on the new name, as well as branding that would replace a feather with an “ija” logo in stylized letters, runs through Thursday, Aug. 10.

    Founded in Canada in 1983, NAJA wants to foster inclusion of Indigenous journalists there as well as in Alaska and Hawaii, since “Native American” is a modern alternative for “American Indian” — referring specifically to the millions of descendants of the original inhabitants of what is now the Lower 48 states.

    “Essentially, we’re going back to our roots and trying to create and provide support and resources for Indigenous journalists all across Turtle Island,” board member Jourdan Bennett-Begaye said, invoking the term some Indigenous people use to refer to the North American continent.

    More broadly, the proposed change aligns with terminology used by the United Nations and many multinational organizations as the group also seeks allies among Indigenous journalists worldwide. The Māori people in New Zealand, the Sámi people in Arctic Scandinavia and Russia, and the Mapuche people in Patagonia all face similar issues, with journalists who cover climate change, conflicts over land and resources and missing and murdered women, she said.

    The change also would reflect an evolution in how Indigenous people see themselves. They’re increasingly calling for “decolonizing” language, moving away from terms that were imposed on them, like “Indian” — a legacy of Christopher Columbus’ infamous cartographic blunder — and even, in some contexts, “American,” which derives from a mapmaker’s effort to honor another Italian explorer, Amerigo Vespucci.

    “It’s part of this larger movement that’s happening in Indigenous people, just reclaiming everything that’s theirs that should be theirs,” Bennett-Begaye said. “Since contact, decisions have been made for us and not by us.”

    Still, some NAJA members have raised concerns that if the association globalizes, its focus on issues particular to Native Americans might be lost. Board members have proposed creating regional chapters if that happens.

    “Indigenous is inoffensive, but it also doesn’t do any of the kind of distinct sovereignty work, distinct political work, distinct cultural affiliation” that other words do, said Elizabeth Ellis, a historian at Princeton University and an enrolled citizen of the Peoria Tribe of Indians of Oklahoma. “It doesn’t tell you much beyond the fact that you’re existing in opposition to a history and ongoing legacy of colonization.”

    Usage of the word “Indigenous” has soared in recent years, particularly after demonstrations against the Dakota Access Pipeline in 2016 forged the largest pan-Indigenous alliance in North American history. Standing Rock marked a before and after for Native American visibility in the media and popular culture, Ellis said.

    But the proliferation of its usage doesn’t mean other terms should disappear, because they’re not always interchangeable, Ellis said. Indian, American Indian, Native American, Native, and even “NDN” — a tongue-in-cheek slang term popular on social media — each have distinct meanings and are appropriate in different contexts.

    Indian, for example, is a historical term once used to connote barbarism and justify enslaving Indigenous people during the colonial era — settlers equated it with savagery while seizing more land, and federal policies invoked it as a racist concept in the 19th century, Ellis said. “Indian law” remains embedded in the U.S. Constitution and in the official names of many Indigenous nations, so its usage in such contexts is inescapable.

    “Indigenous” applies worldwide, including to anyone whose ancestors didn’t come from somewhere else, and whose communities have endured oppression of their people. But it doesn’t reflect the particular duality that many Native Americans experience as citizens of their tribal nations as well as the U.S., Ellis said.

    This is why many Native Americans, when communicating with wider audiences, identify themselves first by their tribal affiliations, and increasingly, in their Indigenous language. Ellis intentionally introduces herself as Peewaalia, just as Bennett-Begaye tells people she’s Diné, a member of the Navajo Nation.

    Young people in particular are driving these changes in language, Bennett-Begaye said.

    “A lot of older folks, and across Indian Country, they still call themselves Indian. My late grandmother, she still calls herself Indian,” she said. “But young people … they see that as derogatory. They’re like, ‘We don’t call ourselves that.’ And I think that’s the cool part, like, young people owning their identity.”

    As editor of Indian Country Today, Bennett-Begaye oversaw that media organization’s recent name change to ICT, prompted by conversations about identity that were happening across the United States after the police killing of George Floyd in 2020.

    For older generations, ICT can still mean Indian Country Today, while for younger folks, it can mean Indigenous Cultures Today, or Indigenous Communities Today, she said. “We really left it up to interpretation for our readers and our audience.”


  • US forest managers urge revelers to swap fireworks for Silly String, but some say not so fast

    US forest managers urge revelers to swap fireworks for Silly String, but some say not so fast


    ALBUQUERQUE, N.M. — Smokey Bear said it best: “Only you can prevent wildfires.”

    Following in the footsteps of their famous mascot, U.S. Forest Service managers in the drought-stricken Southwest are urging people to swap their fireworks this Fourth of July for glow sticks, noisemakers and cans of red, white and blue Silly String.

    Not so fast, say some environmentalists. While it’s worth encouraging folks not to use fireworks amid escalating wildfire danger, they say it’s kind of silly that federal land managers would suggest using aerosol cans of sticky party string out in nature.

    The advice began to pop up in recent weeks, with regional forest officials and the New Mexico State Forestry Division pumping out public service announcements offering alternatives aimed at curbing human-sparked blazes.

    They used a template that echoed similar advice from the National Fire Protection Association and even American Red Cross chapters in other states.

    “These are alternatives for children and young people to do in lieu of fireworks in their neighborhood or on their property. That way we’d like to keep things contained to your property and your neighborhood,” said George Ducker, a spokesman for the State Forestry Division. “We’re certainly not advocating folks go out into the forest and, you know, shoot off Silly String.”

    But if they do, the Forest Service has one request: Leave no trace.

    However people choose to celebrate, the rules and regulations on national forest land must be followed, whether it’s July Fourth or any other day, said John Winn, a spokesman for the federal agency.

    “That includes but is not limited to the restricted use of fireworks, properly disposing of garbage in garbage bins, maintaining quiet hours and cleaning up after camping or day-use activities,” he said.

    Cleaning up spray streamers fits in that category, he added.

    While the spray can party favors have been around since the 1970s, manufacturers keep their recipes under wraps. In general, the string is made of a polymer resin, a substance that makes the resin foam up, a solvent, some coloring and the propellant that forces the chemicals out of the can.

    Authorities in Los Angeles banned aerosol party streamers on Hollywood Boulevard every Halloween starting in 2004, because partygoers were using the empty cans as projectiles and many were left littering the streets and clogging gutters.

    Towns in Massachusetts and Alabama also have adopted ordinances restricting the use of the string, pointing to problems during special events. In one New York town, firefighters who participated in a parade complained that the string was damaging the paint on their trucks.

    Rebecca Sobel with the group WildEarth Guardians said party string is just one of the hundreds of seemingly benign products that pervade daily life.

    “We have to be more vigilant about the chemicals in ‘everyday’ things,” she said. “Maybe the Forest Service should have known better, but it’s also hard to know what chemicals some products contain.”

    She pointed to recent headlines about ‘forever chemicals’ found in firefighting foam and other common products, saying consumers have a responsibility to be aware of threats but they can’t do that if regulatory agencies aren’t being transparent or reading labels themselves.

    Some consumer product sites say party string is not biodegradable. While many cans are labeled as non-toxic, the string can damage vinyl surfaces or the clear coat on vehicles.

    The labels also suggest that if ingested, medical attention might be in order. That goes for humans and pets, as some of the ingredients can contain gastrointestinal irritants.

    “All of this makes it inappropriate for use at our national forest recreation sites,” says Madeleine Carey, WildEarth Guardians’ Southwest conservation manager. “Many seemingly fun party products like Silly String are extremely harmful to our forests and wildlife. Mylar balloons, noisemakers and glitter are also on the list.”

    The bottom line for state and federal forest managers is to prevent human-caused wildfires, Ducker said.

    While some parts of the West had record snowfall over the winter and enjoyed a wet spring, forest managers said it’s uncertain whether the monsoon will keep fire danger at bay. For that reason, the messaging will continue, Ducker said.

    “All it takes is a couple of weeks of really hot, dry weather and all of that stuff gets desiccated and it just becomes fuel,” he said of the vegetation that sprouted in the spring.

    Overall, more than 22,000 fires have burned nearly 1,000 square miles (2,590 square kilometers) in the U.S. since the start of the year, according to the National Interagency Fire Center.

    [ad_2]

    Source link

  • US forest managers urge revelers to swap fireworks for Silly String, but some say not so fast

    ALBUQUERQUE, N.M. — Smokey Bear said it best: “Only you can prevent wildfires.”

    Following in the footsteps of their famous mascot, U.S. Forest Service managers in the drought-stricken Southwest are urging people to swap their fireworks this Fourth of July for glow sticks, noisemakers and cans of red, white and blue Silly String.

    Not so fast, say some environmentalists. While it’s worth encouraging folks not to use fireworks amid escalating wildfire danger, they say it’s kind of silly that federal land managers would suggest using aerosol cans of sticky party string out in nature.

    The advice began to pop up in recent weeks, with regional forest officials and the New Mexico State Forestry Division pumping out public service announcements offering alternatives aimed at curbing human-sparked blazes.

    They used a template that echoed similar advice from the National Fire Protection Association and even American Red Cross chapters in other states.

    “These are alternatives for children and young people to do in lieu of fireworks in their neighborhood or on their property. That way we’d like to keep things contained to your property and your neighborhood,” said George Ducker, a spokesman for the State Forestry Division. “We’re certainly not advocating folks go out into the forest and, you know, shoot off Silly String.”

    But if they do, the Forest Service has one request: Leave no trace.

    However people choose to celebrate, the rules and regulations on national forest land must be followed, whether it’s July Fourth or any other day, said John Winn, a spokesman for the federal agency.

    “That includes but is not limited to the restricted use of fireworks, properly disposing of garbage in garbage bins, maintaining quiet hours and cleaning up after camping or day-use activities,” he said.

    Cleaning up spray streamers fits in that category, he added.

    While the spray-can party favors have been around since the 1970s, manufacturers keep their recipes under wraps. In general, the string is made of a polymer resin, a substance that makes the resin foam up, a solvent, some coloring and the propellant that forces the chemicals out of the can.

    In 2004, authorities in Los Angeles banned aerosol party streamers on Hollywood Boulevard every Halloween because partygoers were using the empty cans as projectiles, and many cans were left littering the streets and clogging gutters.

    Towns in Massachusetts and Alabama also have adopted ordinances restricting the use of the string, pointing to problems during special events. In one New York town, firefighters who participated in a parade complained that the string was damaging the paint on their trucks.

    Rebecca Sobel with the group WildEarth Guardians said party string is just one of the hundreds of seemingly benign products that pervade daily life.

    “We have to be more vigilant about the chemicals in ‘everyday’ things,” she said. “Maybe the Forest Service should have known better, but it’s also hard to know what chemicals some products contain.”

    She pointed to recent headlines about ‘forever chemicals’ found in firefighting foam and other common products, saying consumers have a responsibility to be aware of such threats, but they can’t be if regulatory agencies aren’t being transparent or reading labels themselves.

    Some consumer product sites say party string is not biodegradable. While many cans are labeled as non-toxic, the string can damage vinyl surfaces or the clear coat on vehicles.

    The labels also suggest that if the string is ingested, medical attention might be in order. That goes for humans and pets, as some of the ingredients can act as gastrointestinal irritants.

    “All of this makes it inappropriate for use at our national forest recreation sites,” says Madeleine Carey, WildEarth Guardians’ Southwest conservation manager. “Many seemingly fun party products like Silly String are extremely harmful to our forests and wildlife. Mylar balloons, noisemakers and glitter are also on the list.”

    The bottom line for state and federal forest managers is to prevent human-caused wildfires, Ducker said.

    While some parts of the West had record snowfall over the winter and enjoyed a wet spring, forest managers said it’s uncertain whether the monsoon will keep fire danger at bay. For that reason, the messaging will continue, Ducker said.

    “All it takes is a couple of weeks of really hot, dry weather and all of that stuff gets desiccated and it just becomes fuel,” he said of the vegetation that sprouted in the spring.

    Overall, more than 22,000 fires have burned nearly 1,000 square miles (2,590 square kilometers) in the U.S. since the start of the year, according to the National Interagency Fire Center.


  • Surprising Words The Spelling Bee Kids Can Nail But The Rest Of Us Get Wrong All The Time

    Every year since 1925, except during World War II and in 2020 when the coronavirus pandemic was going strong, the Scripps National Spelling Bee has been held in Washington, D.C. This week, the tradition continues: A bunch of young academics will somehow spell impossibly difficult words, causing breathless, impressed adults to think, “Wow, I am really stupid. Where did I go wrong?”

    That got us thinking. What are we generally misspelling in real life? Where are we going wrong in our everyday writing? What words spell trouble for many of us?

    We asked a bunch of professionals who work with words every day, and, well: Get ready to feel even dumber.

    1. Accommodation

    “Accommodation [is] often misspelled as acommodation, accomodation, or acomodation,” said Haley Slade, CEO and founder of Slade Copy House, a digital copywriting agency based in Nashville, Tennessee.

    “I work with words literally all day long,” said Slade, who said that “accommodation” is a top offender for most-misspelled word.

    Two c’s and two m’s, folks. It shouldn’t be hard with autocorrect and spell check, but apparently, it is.

    2. Affect

    As noted, most of us have autocorrect and spell check (which kept trying to fix the words in this article we were intentionally misspelling, by the way). So people aren’t misspelling as many words as they used to; more often, they go wrong because they don’t know which of two similar words is the correct one to use.

    Lisa Williams is the Charles J. Luellen Professor of English and director of creative writing at Centre College in Danville, Kentucky, and is not related to this author (as far as we know). Williams said that she sees a lot of students using the word “affect” when they mean “effect.”

    For instance, these sentences are correct: The storm had quite an effect on the town. It affected all of the citizens.

    These sentences are not correct: The storm had quite an affect on the town. It effected all of the citizens.

    But, generally, Williams said, due to spell check, she doesn’t see a lot of misspellings from her students.

    “It’s a very different world from when I was in school, and the act of reading and memorizing vocabulary lists to learn spelling was just what you did,” she said.

    3. A lot

    It’s a lot, not alot, said Gigi Marino, a communications and public relations professional in Winter Park, Florida. She also writes professionally and says she has seen “a lot” written as “alot” a lot. In fact, she has seen “alot” so often that she thinks it will one day be accepted into standard usage. Let’s hope not.

    4. And

    And? People misspell “and”?

    It’s not that dumb, but it’s still pretty dumb. It isn’t like people are writing “andd,” but we still manage to screw up the word pretty often by not actually using it.

    “This one is a pet peeve of mine,” said Debra Boggs, founder and CEO of D&S Executive Career Management. A big part of Boggs’ job is reworking and rehauling executive resumes, and she sees many professionals sticking in an ampersand — that is, an “&” — in the middle of resumes and cover letters instead of writing “and.”

    “It makes the content look unrefined and casual,” Boggs said, & we think most people will agree with her. “Ampersands are perfect for headlines and titles, but they don’t belong in bullet points or full sentences inside your resume.”

    5. Canceled

    “As a copy editor, I see many words misspelled. However, the ones that come up consistently are the ones spell check misses because they are technically correct — words like ‘canceled’ and ‘traveled’ often get a double L. For example, ‘cancelled,’ which is the British English spelling of the word,” said Jacob Richey, executive copy editor at Axia Public Relations.

    Richey said that the spellings ended up changing when Merriam-Webster founder Noah Webster proposed simplifying some British spellings to make the language easier to learn.

    “It was not so advertisers could save money on print ads, a commonly shared falsehood,” Richey said. “And since we consume literature and written content from across the globe, I suspect that we encounter both spellings often, which could understandably make choosing the correct version feel like a guessing game. But for the American English spelling, when dealing with double letters, especially L’s, when in doubt, take it out.”

    6. Definitely

    Anyone in the annual national spelling bee will get this word right, but plenty of mere mortals definitely don’t, according to Jennifer Smith, associate professor and chair of the English department at North Central College in Naperville, Illinois. She said that many students confuse “definitely” with “defiantly.”

    She also sees “definitely” frequently misspelled as “definately” and “definatly.”

    “The placement of the ‘I’ and ‘a’ in the word can be confusing, leading to incorrect spelling,” Slade said.

    There are invariably a million ways people can muff this word. “Definitely” was named the most misspelled word in a OnePoll.com survey years ago.

    7. It’s/its

    Knowing when to spell “it’s” or “its” is many spellers’ downfall. Still, while it’s confusing, the virtue of learning how to get these two words right is its own reward.

    “The most common misspelling I see is ‘it’s,’ or depending on your point of view, ‘its,’ and the reason is simple: It’s irregular,” said Lenny Cassuto, an English professor at Fordham University in New York City.

    “Students are taught that a possessive ends with an apostrophe followed by an s,” Cassuto explained. “But the ‘it’s/its’ pairing violates the rule.”

    If your head is now spinning, Cassuto calls it a “forgivable mistake,” though he says that we should still learn exceptions to grammar rules.

    8. High school

    Not “highschool.” Marino said she sees this a lot, too. Really? The spelling is right there on the sign over the entrance of the school building we all went to — for four years!

    9. Lead

    Often, people use this word when they want to use “led,” Boggs said.

    “I’m not sure where this comes from, but many people think that ‘lead’ is the past tense of the verb ‘to lead’ when it should in fact be ‘led.’ This causes confusion in a sentence when all other verbs are correctly spelled in past tense.”

    10. Misspell

    Slade sees this a lot. People forget that there are two s’s.

    11. Multimillion-dollar

    “This is a mistake I see in almost every executive resume. Putting hyphens where they don’t belong is common, and this example is the most prevalent,” Boggs said.

    So what are people writing down?

    “Multi-million-dollar” and “multi-million dollar,” according to Boggs. Again, multimillion-dollar is correct — no matter how weird it looks.

    12. Premier

    “Premier” is the correct spelling for “top of the line,” not “premiere” (a first performance of something).

    “I have noticed over the years that people are becoming more illiterate,” Marino said. “Just read any social media site ― oh, site and cite are commonly confused ― like Nextdoor, and you will see how atrocious the spelling is.”

    13. Restaurant

    It’s such a common word, one that spelling bee kids would probably never trip over. But grown-ups do, perhaps due to carelessness.

    “Commonly misspelled as ‘restaraunt’ or ‘resturant.’ The placement of the ‘u’ and ‘a’ in the word is often mistakenly switched,” Slade said.

    14. Separate

    “Separate is often misspelled as ‘seperate’ because the placement of the ‘a’ and ‘e’ in the word is often interchanged or confused,” Slade said.

    15. Spelled

    Google Trends recently revealed that one of the words we’re most unsure about spelling in 2023 is, interestingly enough, “spelled.” A lot of people are typing into the search engine, “Is it spelled or spelt?”

    Well, that depends. If you live in America, you would go with “spelled.” If you live in England, you would probably use the word “spelt,” which is the past tense of “spell” there.

    16. Theater, gray, jeez and blond

    Speaking of Google, the search giant said other top spelling searches so far this year include “is it grey or gray?” (gray, but the dog breed is greyhound), “is it theatre or theater?” (the Associated Press Stylebook recommends using theater unless “theatre” is in the proper name of a place), “is it jeez or geez?” (geez is a less common spelling of jeez, which is short for Jesus) and “is it blond or blonde?” (blond is preferred as an adjective, and beyond that, it’s complicated).

    17. There, they’re and their

    Stuart Patterson, associate professor in the Shimer Great Books School at North Central College, who teaches courses like “Why – and What – Should We Read?” and “Theories of Metaphor,” said that he constantly sees students messing up “their, there and they’re.”

    He does defend his students and any adult who is feeling bad about their spelling. “Spelling itself is a relatively recent invention,” he pointed out.

    In fact, when it comes to spelling words correctly, if you consider yourself a poor speller, you are in pretty good company. When it comes to consistently spelling words correctly, Patterson said, “Shakespeare could hardly have done it to save his life.”
