ReportWire

Tag: computer science and information technology

  • I’m a parent with an active social media brand: Here’s what you need to check on your child’s social media right now | CNN



    Editor’s Note: Sign up for CNN’s Stress, But Less newsletter. Our six-part mindfulness guide will inform and inspire you to reduce stress while learning how to harness it.



CNN —

    If you follow me on Twitter or Instagram, you’ll know I wear a lot of hats: romance author, parent of funny tweenagers, part-time teacher, amateur homesteader, grumbling celiac and the wife of a seriously outdoorsy guy.

    Because I’m an author with a major publisher in today’s competitive market, I’ve been tasked with stepping up my social media brand: participation, creation and all. The more transparent and likable I am online, the better my books sell. Therefore, to social media I go.

    It’s rare to find someone with no social media presence these days, but there’s a marked difference between posting a few pictures for family and friends and actively creating social media content as part of your daily life.

    With a whopping 95% of teens polled having access to smartphones (and 98% of teens over 15), according to an August Pew Research Center survey on teens, social media and technology, it doesn’t look like social media platforms are going away anytime soon.

    Not only are they key social tools, but they also allow teens to feel more a part of things in their communities. Many teens like being online, according to a November Pew Research Center survey on teen life on social media. Eighty percent of the teens surveyed felt more connected to what is happening in their friends’ lives, while 71% felt social media allows them to showcase their creativity.

So, while posting online is work for me, it’s a way of life for the tweens and teens I see creating and publishing content online. As a parent of two middle schoolers, I know how important social media is to them, and I also know what’s out there. I see the good, the bad and the viral, and based on what I’ve seen, I’ve put together some guidelines for my fellow parents.

    Here are eight questions to ask yourself as you check out your children’s social media accounts.

If you don’t, it’s time to start. As when I had to look up the term “situationship,” ignorance is not bliss in this case, or really in any case when it comes to your children. Both of my children have smartphones, but even if yours don’t, any device — phone, tablet, school laptop — makes it likely they have some sort of social media account out there. Every app our children wish to add to their devices comes through my husband’s and my phone notifications for approval. Before I approve any app, I’ll read the reviews, run an internet search and text my mom friends about their experiences.


    If I’m still uncertain about an app, I’ll hold off on approving it until I can sit down with my children and ask them why they want it. Sometimes just waiting and forcing a short discussion is enough to convince them they no longer want it. In our household, I avoid any apps that run social surveys, allow anonymous feedback or require the individual to use location services.

If you don’t have your family phone plan all hooked together with parental controls, I’d advise setting that up ASAP. Because different devices and apps have different ways to monitor and set up parental controls, it’s impossible to link all the options here. However, a quick search will surface exactly the coverage you are comfortable with, from apps that track your child’s text messages to settings that lock your child’s phone at a certain time every night.

    The top social media platforms teens use today are YouTube (95% of teens polled), TikTok (67%), Instagram (62%) and Snapchat (59%), according to the Pew Research Center survey on teens and social media tech. Other social media platforms teens use less frequently are Twitter, Reddit, WhatsApp and Facebook. Most notably, Facebook is seeing a significant downturn in teen users. This list isn’t exhaustive, however. I would check out your children’s devices for group chat apps (such as Slack or Discord) and also scroll through their sport or activity apps where group chat capabilities exist.

I’ve seen preteens and teens using their real names, birthdates, home addresses, pets’ names, locker numbers or school baseball teams. Any of that information could be used to identify your child and their location in real life, or through a quick Google search. All of that is an absolute “no” in our house.

    I also tell my kids not to answer the fun surveys and quizzes that invite children to share their unique information and repost it for others to see. These can be useful tools for predators and people trying to steal your children’s identity.

What I do: I made the choice long ago to withhold the names of my children and my partner. It’s not an exact science, and I know some clever digging could find them. For my husband, it’s for the sake of his privacy and to protect his professional reputation. Just because he’s married to a romance author doesn’t mean he should have to answer for my online antics, whatever they may be. For my children, I want to avoid anything embarrassing that could be traced back to them during their college application season.

    Even if your children keep their social media profiles private (more on that later), their biographical information, screen name and avatar or profile picture are public information.

    Do an internet search of your child’s name to see what’s out there and scroll through images to make sure there isn’t anything you wouldn’t want to be made public. In our household, I’ve asked my children to use generic items or illustrated avatars in their social media bios.

    What I do: Parents who do have active social media accounts may want to do a search of their own names. When my first book was published in 2019, I did a search of my name and images and found many photos of my children that came directly from my social media pages. I hadn’t posted pictures of them, but I did use a family photo as my profile photo and those are public record. Once I deleted them, the photos disappeared.

    Another “no” in our household is posting videos or photos of our home or bedrooms. Something that feels innocent and innocuous to your middle schooler may not feel that way to an adult seeking out inappropriate content.

    I learned this from one of my children’s Pinterest accounts. My kid loves to create themed videos using her own photos and stock pictures, and she’s gained over 500 followers in a short period of time. She has completely followed our rules and I know, because I check and follow her myself — but it hasn’t stopped the influx of adult men following her content.

    What we do: Over the holidays, I sat with her and went through each follower one by one and blocked anyone we decided was there for the wrong reasons. In the end, we blocked close to 30 adult men on her account. (I also know that some predators cleverly disguise themselves as children or teens, and we may not catch them all, but this is still a worthy exercise.)

    We also talk to our children about how to protect themselves. They wouldn’t want those strangers standing in their bedroom; therefore, they don’t want to post videos of their bedroom or bathroom or classroom for strangers to view.

    This is a tricky one for lots of reasons. For content creators to build their following, they need to remain public on social media. If your child is an entrepreneur or artist hoping to grab attention, locking down their account will prevent that from happening.

    That said, a way around this is to have two accounts. First, a private one, locked down and only used for family and close friends, and second, a public one that lacks identifiers but showcases whatever branding the child is hoping to grow. I’ve come across some well-managed public accounts for children who have giant followings and noticed they are usually run by parents, who state that right in the profile. I like this. If your children want public profiles because they are hoping to catch the attention of a talent scout, having the accounts monitored by a responsible adult who has their best interest in mind is a healthy compromise.

    This is the exception, however. Most tweens and teens today use their social media for socializing with local friends. The benefit of keeping their account as private (or as private as can be) is threefold. It allows them to screen who follows their content, thus preventing our Pinterest fiasco. It prevents strangers from accessing their content and making it viral without their permission. And it protects them from unsolicited contact with strangers.

    Not all social media platforms have the option to make your account “private.” For example, YouTube has parental controls that can be adjusted at any time. TikTok and Instagram can be made private (which means users must approve followers) by making the change in the account settings. Once the account is private, a little padlock will show next to the username.

    Snapchat allows users to approve followers on a case-by-case basis as well as turn off features that disclose a user’s location. Notably, Snapchat also informs users when another user takes a screenshot of their story, which is a feature other social media platforms don’t have yet.

    Most group chat apps don’t have the ability to go private so much as they ask users to approve of follower requests. Take time to discuss with your children who they allow to follow them and what personal information they allow those followers to know. It’s also a great time to teach them the art of “blocking” those individuals who are unsafe or unkind.

    My suggestion is to log in, scroll around and even ask your children to teach you about the platforms they use. Then, when they roll their eyes at you, go ahead and tell them about your first Hotmail email address and the way you picked the perfect emo playlist on your Myspace page … and when they’re bent over laughing, sneak a peek at their follower list. Trust me, it’ll be worth it.


  • Record $3.8 billion stolen in crypto hacks last year, report says | CNN Business




New York (CNN) —

    A record $3.8 billion worth of cryptocurrency was stolen from various services last year, with much of those thefts driven by North Korean-linked hackers, according to a report Wednesday from blockchain analytics firm Chainalysis.

    The increase in crypto heists, from $3.3 billion in 2021, came as the overall market for cryptocurrencies suffered significant declines. The value of Bitcoin, for example, fell by more than 60% last year.

North Korea was a key driver for the surge in thefts, according to the report. Hackers linked to the country stole an estimated $1.7 billion worth of cryptocurrency through various hacks in 2022, up from $429 million the prior year, Chainalysis said.

Some of the biggest crypto hacks of the year have since been attributed to North Korea. The FBI has blamed hackers linked to the North Korean government for the more than $600 million hack of video game Axie Infinity’s Ronin network in March and a $100 million hack of Harmony, a cryptocurrency firm, in June.

    “North Korea’s total exports in 2020 totalled $142 million worth of goods, so it isn’t a stretch to say that cryptocurrency hacking is a sizable chunk of the nation’s economy,” Chainalysis noted in the report.

    US officials worry Pyongyang will use money stolen from crypto hacks to fund its illicit nuclear and ballistic weapons program. North Korean hackers have stolen the equivalent of billions of dollars in recent years by raiding cryptocurrency exchanges, according to the United Nations.

    In addition to hacking cryptocurrency firms, suspected North Koreans have posed as other nationalities to apply for work at such firms and send money back to Pyongyang, US agencies have publicly warned.

    In general, decentralized finance (DeFi) protocols were the main target of hackers, accounting for more than 80% of all cryptocurrency stolen for the year, according to the report. These protocols are used to replace traditional financial institutions with software that allows users to transact directly with each other via the blockchain, the digital ledger that underpins cryptocurrencies.

    Of the attacks on DeFi systems, 64% targeted cross-chain bridge protocols, which allow users to exchange assets between different blockchains. Bridge services typically hold large reserves of various coins, making them targets for hackers. (The thefts on Axie Infinity and Harmony were both bridge hacks.)

    While crypto hacks continued to rise last year, there is some cause for hope. Law enforcement and national security agencies are expanding their abilities to combat digital criminals, such as the FBI’s recovery of $30 million worth of cryptocurrency stolen in the Axie Infinity hack.

    Those efforts, combined with other agencies cracking down on money laundering techniques, “means that these hacks will get harder and less fruitful with each passing year,” according to Chainalysis.


  • ChatGPT creator rolls out ‘imperfect’ tool to help teachers spot potential cheating | CNN Business





CNN —

    Two months after OpenAI unnerved some educators with the public release of ChatGPT, an AI chatbot that can help students and professionals generate shockingly convincing essays, the company is unveiling a new tool to help teachers adapt.

    OpenAI on Tuesday announced a new feature, called an “AI text classifier,” that allows users to check if an essay was written by a human or AI. But even OpenAI admits it’s “imperfect.”

The tool, which works on English AI-generated text, is powered by a machine learning system that takes an input and assigns it to one of several categories. In this case, after a body of text such as a school essay is pasted into the new tool, it gives one of five possible verdicts, ranging from “likely generated by AI” to “very unlikely.”
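As a rough illustration only (this is not OpenAI’s actual method, and the cutoff values here are invented), a classifier of this kind can be thought of as mapping the model’s estimated probability that a text is AI-generated onto one of five verdict labels:

```python
# Hypothetical sketch of a five-way verdict bucketing step.
# The threshold values are invented for illustration; a real system
# would calibrate them against labeled human- and AI-written text.

def label_text(ai_probability: float) -> str:
    """Map a 0.0-1.0 'AI-generated' score to one of five verdicts."""
    buckets = [
        (0.98, "likely AI-generated"),
        (0.90, "possibly AI-generated"),
        (0.45, "unclear if it is AI-generated"),
        (0.10, "unlikely AI-generated"),
        (0.00, "very unlikely AI-generated"),
    ]
    for cutoff, verdict in buckets:
        if ai_probability >= cutoff:
            return verdict
    return buckets[-1][1]  # scores below all cutoffs fall in the last bucket

print(label_text(0.99))  # likely AI-generated
print(label_text(0.05))  # very unlikely AI-generated
```

The point of the bucketing, rather than a raw score, is to force hedged output: most essays land in the middle categories, which is consistent with OpenAI’s warning that the tool should be one data point among many.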

    Lama Ahmad, policy research director at OpenAI, told CNN that educators have been asking for a ChatGPT feature like this, but warns it should be “taken with a grain of salt.”

    “We really don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times – much like using AI for any kind of assessment purposes,” Ahmad said. “We are emphasizing how important it is to keep a human in the loop … and that it’s just one data point among many others.”

    Ahmad notes that some teachers have referenced past examples of student work and writing style to gauge whether it was written by the student. While the new tool might provide another reference point, Ahmad said “teachers need to be really careful in how they include it in academic dishonesty decisions.”

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists. It even recently passed law exams in four courses at the University of Minnesota, another exam at University of Pennsylvania’s Wharton School of Business and a US medical licensing exam.

In the process, it has raised alarms among some educators. Public schools in New York City and Seattle have already banned students and teachers from using ChatGPT on district networks and devices. Some educators are now moving with remarkable speed to rethink their assignments in response to ChatGPT, even as it remains unclear how widespread use of the tool is among students and how harmful it could really be to learning.

OpenAI now joins a small but growing list of efforts to help educators detect when a written work is generated by ChatGPT. Some companies such as Turnitin are actively working on ChatGPT plagiarism detection tools that could help teachers identify when assignments are written by the tool. Meanwhile, Princeton student Edward Tian told CNN more than 95,000 people have already tried the beta version of his own ChatGPT detection tool, called GPTZero, noting there has been “incredible demand among teachers” so far.

    Jan Leike – a lead on the OpenAI alignment team, which works to make sure the AI tool is aligned with human values – listed several reasons for why detecting plagiarism via ChatGPT may be a challenge. People can edit text to avoid being identified by the tool, for example. It will also “be best at identifying text that is very similar to the kind of text that we’ve trained it on.”

    In addition, the company said it’s impossible to determine if predictable text – such as a list of the first 1,000 prime numbers – was written by AI or a human because the correct answer is always the same, according to a company blog post. The classifier is also “very unreliable” on short texts below 1,000 characters.
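The prime-number example is worth unpacking: because such text is fully determined, any author, human or machine, produces the identical string, leaving a classifier no signal to work with. A short sketch (the helper function here is illustrative, not from OpenAI):

```python
# Sketch: "predictable" text such as a list of primes is byte-identical
# no matter who produced it, so a detector has nothing to distinguish.

def first_primes(n: int) -> list[int]:
    """Return the first n prime numbers by simple trial division."""
    primes: list[int] = []
    candidate = 2
    while len(primes) < n:
        # A candidate is prime if no earlier prime divides it.
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

# Two independent "authors" generating the same deterministic text:
text_a = ", ".join(map(str, first_primes(1000)))
text_b = ", ".join(map(str, first_primes(1000)))
assert text_a == text_b  # identical output; no statistical fingerprint
print(first_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Detection tools rely on statistical quirks of how a model chooses words; when the correct output is unique, those quirks vanish, which is why OpenAI calls such cases impossible to classify.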

During a demo with CNN ahead of Tuesday’s launch, the classifier successfully labeled several bodies of work. An excerpt from the book “Peter Pan,” for example, was deemed “unlikely” to be AI-generated. In the company blog post, however, OpenAI said the tool incorrectly labeled human-written text as AI-written 5% of the time.

    Despite the possibility of false positives, Leike said the company aims to use the tool to spark conversations around AI literacy and possibly deter people from claiming that AI-written text was created by a human. He said the decision to release the new feature also stems from the debate around whether humans have a right to know if they’re interacting with AI.

    “This question is much bigger than what we are doing here; society as a whole has to grapple with that question,” he said.

    OpenAI said it encourages the general public to share their feedback on the AI check feature. Ahmad said the company continues to talk with K-12 educators and those at the collegiate level and beyond, such as Harvard University and the Stanford Design School.

    The company sees its role as “an educator to the educators,” according to Ahmad, in the sense that OpenAI wants to make them more “aware about the technologies and what they can be used for and what they should not be used for.”

    “We’re not educators ourselves – we’re very aware of that – and so our goals are really to help equip teachers to deploy these models effectively in and out of the classroom,” Ahmad said. “That means giving them the language to speak about it, help them understand the capabilities and the limitations, and then secondarily through them, equip students to navigate the complexities that AI is already introducing in the world.”


  • Apparent cyberattack forces Florida hospital system to divert some emergency patients to other facilities | CNN Politics





CNN —

    An apparent cyberattack has forced a network of Florida health care organizations to send some emergency patients to other facilities and to cancel some non-emergency surgeries, the health care network said Friday.

    Tallahassee Memorial HealthCare, which operates a 772-bed hospital and multiple specialty care centers, said an “IT security issue” late Thursday night forced it to take down its computer system.

    “We are also diverting EMS [emergency medical services] patients and will only be accepting Level 1 traumas from our immediate service area,” the hospital system said in a statement. Level 1 trauma refers to the most acute injuries and illnesses.

    Tallahassee Memorial HealthCare spokesperson Tori Lynn Schneider told CNN “some” emergency patients were being diverted to facilities outside of the organization’s network, but declined to say how many patients. All non-emergency and elective procedures scheduled for Monday were canceled because of the hacking incident, Schneider said.

    It’s the latest in a series of cyberattacks that have continued to hit resource-strapped US health care providers in the nearly three years of the Covid-19 pandemic. In another case, hackers accessed the personal data of nearly 270,000 patients in an attempted ransomware attack on a Louisiana health care system in October.

    The FBI last month shut down the computer infrastructure used by a notorious ransomware gang to attack multiple US hospitals, according to the bureau. But the threat remains as multiple ransomware groups are known to target the health sector.

    It’s unclear who was responsible for the apparent hack of Tallahassee Memorial. Tallahassee Memorial did not specify whether it had suffered a ransomware attack, but the organization’s statement described activity, including the need to shut down computer networks, consistent with a ransomware attack.

    Staff have been unable to access digital patient records and lab results because of the shutdown, a hospital source told CNN.

    Mark O’Bryant, Tallahassee Memorial’s CEO, notified staff in person Friday morning that the system had suffered a “cyberattack,” according to the source.

    “To help us contain the issue, please completely turn off all PCs connected to TMH’s network immediately and leave them off until notified otherwise,” Tallahassee Memorial leadership said in a memo sent to employees Friday morning and obtained by CNN.

Max Henderson, a Tallahassee native and cybersecurity specialist who focuses on health care, said the effects of shutting down a hospital’s computer network can last for weeks or months.

    “Immediate, unplanned shutdowns can lead to a loss of recently gathered data regarding diagnosis, clinical notes, shift handovers and other various setbacks for the medical staff,” Henderson, who is senior manager for incident response at security firm Pondurance, told CNN.

    “Nearly all hospitals rely on the internet for connectivity with vendors and remote offices for processing information in critical departments such as radiology, pharmacy, medical device maintenance, patient document scanning and payment processing,” Henderson added.


  • Democratic senator urges Apple and Google to ban TikTok from their app stores | CNN Business




Washington (CNN) —

    A member of the Senate Intelligence Committee is calling on Apple and Google to remove TikTok from their app stores over concerns about national security, in the latest indication of mounting scrutiny on the short-form video app from members of Congress.

    In a letter sent to the two tech giants on Thursday, Colorado Democratic Sen. Michael Bennet calls TikTok “an unacceptable threat to the national security of the United States” and cites the same concerns that have prompted the federal government and more than half of US states to restrict TikTok from official devices and networks.

    Writing to Apple CEO Tim Cook and Google CEO Sundar Pichai, Bennet highlighted fears that China could use its national security laws to force TikTok or its parent, ByteDance, to hand over the personal information of the app’s US users. The laws in question, Bennet wrote, require organizations in the country to “cooperate with state intelligence work” and to allow the government to access company resources. ByteDance’s founder is Chinese and the company has offices in China. TikTok has also disclosed to European users that their data may be accessed by employees based in China.

    China could potentially try to shape what US users see on the app, Bennet warned, with possible implications for foreign policy and democracy.

    “We should accept the very real possibility that [China] could compel TikTok, via ByteDance, to use its influence to advance Chinese government interests,” Bennet wrote, “for example, by tweaking its algorithm to present Americans content to undermine U.S. democratic institutions or muffle criticisms” of China’s handling of Hong Kong, Taiwan or ethnic minorities.

    Apple, Google and TikTok didn’t immediately respond to a request for comment. TikTok CEO Shou Zi Chew is expected to testify before a House committee in March to discuss the company’s data security practices.

    There is no evidence that the type of spying or manipulation US officials fear has actually occurred, but security experts have warned that it is a possibility.

    TikTok has denied that it would ever hand over US user data to the Chinese government. It has increasingly moved to wall off its US operations from the rest of its business, technologically and organizationally — part of what the company has described as a good-faith effort to address the national security concerns.

TikTok has also spent years negotiating a potential national security deal with the US government that would seek to resolve some of the concerns, but the talks have been mired in delays, leading to frustration among some members of Congress. In recent months, multiple US lawmakers have introduced bills that would ban TikTok from all US devices, including personal ones.

    Some other US officials have also called on Apple and Google to voluntarily remove TikTok from their app stores.

    Last year, Brendan Carr, a commissioner at the Federal Communications Commission, wrote a letter to the companies urging them to de-list TikTok. The FCC does not regulate app stores, but Carr has said that his agency’s experience dealing with Chinese telecom companies has informed his views on the matter. The FCC has moved to block Chinese firms including Huawei and ZTE from the US market, over fears that their wireless networking equipment could be used to collect information on US communications.

    Although the leading members of the Senate Intelligence Committee, Virginia Democrat Mark Warner and Florida Republican Marco Rubio, have also been outspoken critics of TikTok, the two lawmakers had not been invited to co-sign Bennet’s letter before it was sent, according to a spokesperson for Bennet. Rubio is an author of one of the bills seeking to ban TikTok from the United States, while Warner has said he would prefer to see a bill that targets a broader category of worrisome apps, rather than a single app such as TikTok.


  • Apple and Google’s app stores wield ‘gatekeeper’ power and should be reined in, Commerce Department says | CNN Business




Washington (CNN) —

    The Biden administration on Wednesday took its biggest swipe yet at app stores run by Apple and Google, with a new report accusing the two tech giants of exercising “gatekeeper” power that has led to “suboptimal” levels of competition in digital markets.

The report published by the Commerce Department finds that Apple (AAPL) and Google (GOOG) “play a significant gatekeeping role by controlling (and restricting) how apps are distributed,” and that the various fees and rules they impose on app developers have created an uneven playing field.

    “All of these factors translate to potential losses for consumers: prices that are inflated due to the fees collected by gatekeepers, innovation that is hampered by policy decisions to limit access to smartphone capabilities, and the loss of choice of apps that are not featured or even accessible for smartphone users,” the report said.


    The 48-page report throws the White House’s weight behind mounting public criticism of dominant app stores, which in recent years has led to multiple private lawsuits against Apple and Google as well as investigations by antitrust regulators in Europe and reports of a probe by the Justice Department.

    In a statement, Apple said its app store has benefited developers and supports hundreds of thousands of jobs. In the past, Apple has argued that its control over iOS app distribution helps promote users’ privacy and security.

    “We respectfully disagree with a number of conclusions reached in the report, which ignore the investments we make in innovation, privacy and security,” an Apple spokesperson said, “all of which contribute to why users love iPhone and create a level playing field for small developers to compete on a safe and trusted platform.”

Google has said its Android operating system, unlike Apple’s iOS, allows for competing app stores.

    “We disagree with how this report characterizes Android, which enables more choice and competition than any other mobile operating system,” a Google spokesperson said. “[The report] recognizes the importance of interoperability, multiple app stores and sideloading, which Android’s open system already supports – all while ensuring privacy and security.”

    Wednesday’s report, published by a Commerce Department office charged with advising the president on technology issues, does not launch a regulatory process. Instead, it provides policy recommendations, such as limits on the apps Apple and Google can pre-install or set as defaults on their respective operating systems, or giving users the right to install apps from any source.

    The report also called for boosting budgets for US antitrust enforcers; a ban on some app store restrictions surrounding in-app payments; and a federal privacy law establishing clear standards for data privacy.

    Many of the report’s recommendations echo provisions in federal legislation that received bipartisan support last Congress, but that failed to become law.

The findings were informed by public comments submitted to the department in the months leading up to the report.


  • Ransomware attack closes schools in Nantucket | CNN Politics





CNN —

    A ransomware attack forced the closure Tuesday of four public schools serving 1,700 students on the island of Nantucket, Massachusetts, the school district’s superintendent said in an email to parents.

    The hacking incident shut down all student and staff devices, as well as safety and security systems at Nantucket Public Schools, forcing an early dismissal at noon on Tuesday, Superintendent Elizabeth Hallett said in the email, which she shared with CNN.

    The news came as Tucson Unified School District (TUSD), which calls itself the largest pre-K-12 school district in southern Arizona, also suffered a ransomware attack in recent days, according to local news reports. Representatives of TUSD did not respond to emails seeking comment. There was no evidence that the two incidents were related.

    Ransomware – malicious software that locks computers and holds them for ransom – has for years plagued US schools and other organizations that can be short on money and personnel to defend themselves from hacks.

The hacks often force schools to temporarily close, further disrupting learning during the coronavirus pandemic. The lack of cybersecurity budgeting at primary schools is a “major constraint to implementing effective cybersecurity programs across all K–12 entities,” the US Cybersecurity and Infrastructure Security Agency warned in a report this month.

    Nantucket Public Schools includes an elementary, middle and high school, and serves Nantucket, which is about 30 miles south of Cape Cod, Massachusetts.

    Athletic events at the school were still scheduled to proceed. “No school issued devices should be used at home until further notice, as it could compromise home networks,” Hallett said in her email to parents.

    “We do not have any updates yet on when we will return,” Hallett told CNN in a separate email.

    There have already been five ransomware attacks on US school districts in January, according to a tally from Brett Callow, a threat analyst at cybersecurity firm Emsisoft. Forty-five US school districts operating 1,981 schools were hit by ransomware in 2022, according to Emsisoft.

    A year ago, New Mexico’s largest public school district had to close temporarily after a cyberattack hit computer systems that could affect learning and student safety.

    “The ransomware attacks on school districts across the country are a stark reminder that as a country we need to ensure our citizens are cyber literate,” Kevin Nolten, vice president of Cyber Innovation Center, a not-for-profit supported by federal grant money that promotes cybersecurity curricula in K-12 schools, told CNN.

    “Cybersecurity education is a national security issue and we must educate our country on protecting our most critical infrastructure from malicious attacks,” Nolten said in an email pointing to the high demand for cybersecurity skills in the workforce.


  • New US ransomware strategy prioritizes victims but could make it harder to catch cybercriminals | CNN Politics



    Washington
    CNN
     — 

    US and European law enforcement’s disruption last week of a $100-million ransomware gang is the clearest public example yet of a new high-stakes strategy from the Biden administration to prioritize protecting victims of cybercrime – even if it means tipping off suspects and potentially making it harder to arrest them.

    The extent to which the FBI and Justice Department can carry out similar operations on other ransomware groups – and get the balance right between when to collect intelligence on hackers’ operations and when to shut down computer networks – could affect how acute the threat of ransomware attacks is to US critical infrastructure for years to come.

    In the case revealed last week, the FBI says it had extraordinary access for six months to the computer infrastructure of a Russian-speaking ransomware group known as Hive, which had extorted more than $100 million from victims worldwide, including hospitals. That covert access, officials said, allowed the FBI to pass “keys” to victims so that they could decrypt their systems and thwart $130 million in ransom payments.
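    How such “keys” let victims recover their files can be illustrated with a toy example: ransomware encrypts files with a cipher, and anyone holding the key can reverse the transformation. The sketch below uses a deliberately simplified XOR stream purely for illustration – real ransomware uses strong ciphers such as AES, and the data and key here are made up:

    ```python
    from itertools import cycle

    def xor_transform(data: bytes, key: bytes) -> bytes:
        """XOR each byte with the repeating key. XOR is its own
        inverse, so the same function encrypts and decrypts."""
        return bytes(b ^ k for b, k in zip(data, cycle(key)))

    # A victim's file is encrypted with a key only the attacker holds...
    key = b"secret-key"
    plaintext = b"quarterly hospital records"
    ciphertext = xor_transform(plaintext, key)
    assert ciphertext != plaintext

    # ...but once investigators recover and share the key,
    # the victim can decrypt without paying the ransom.
    recovered = xor_transform(ciphertext, key)
    assert recovered == plaintext
    ```

    Because the transformation is reversible with the key alone, handing victims the key is all it takes to undo the encryption – which is essentially what the FBI did, at far greater scale and with far stronger cryptography.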

    Justice officials are still trying to arrest the people behind Hive and know where some of them are located, a senior Justice Department official told CNN. But sometimes waiting for an arrest before seizing hacking infrastructure “may mean waiting for a very long time – perhaps an unacceptably long time,” the official said in an interview granted on the condition of anonymity to discuss the case.

    The decision to go public with a splashy news conference, fronted by FBI Director Christopher Wray and Attorney General Merrick Garland, before making any arrests is evidence of a new approach to ransomware attacks, which cost the US hundreds of millions of dollars, if not billions, annually.

    The strategy shift toward doing more to help victims of cybercrime – announced a year ago – is loosely based on the US government’s approach to counterterrorism, which centers around disrupting plots and thwarting attacks.

    “I was preparing for this to be public long, long ago and was kind of surprised that we were able to do this for this long,” the senior Justice Department official said of US officials’ covert access to Hive computer servers.

    After multiple ransomware attacks hobbled US critical infrastructure firms in 2021, pressure grew on US law enforcement from Congress, the White House and the public to do more to disrupt the hackers’ operations.

    Still, the FBI announcement raised questions about why the bureau decided to go public with the action now rather than continuing to lurk in the Hive hackers’ networks and collect intelligence. And it is possible or even likely, US officials concede, that Hive’s operators will set up new infrastructure to try to resume their extortion attempts.

    One law enforcement source told CNN the timing made sense because US officials may have exhausted the intelligence they were going to glean from Hive’s servers.

    The senior Justice Department official explained the decision this way: “We saw significant value in the reputational damage we were going to incur against Hive by announcing this.”

    As in other businesses, customers of ransomware gangs can choose whom they buy hacking tools from. One goal of the operation, the senior Justice official said, was to “discredit” Hive in the eyes of other ransomware criminals and have a psychological effect on their operations.

    “Other [ransomware] groups will watch this and have to spend more time and money securing their infrastructure,” said Bill Siegel, CEO of Coveware, a cybersecurity firm that works closely with victims and the FBI.

    The spate of significant ransomware attacks in the US in 2021 brought more scrutiny to how quickly the FBI and its partners can mitigate the impact of the attacks.

    After a July 2021 ransomware attack on a Florida-based software firm compromised up to 1,500 businesses, multiple US government agencies, including the FBI, deliberated about how and when to get the decryptor to victims. At least one victim organization, a Maryland tech firm, complained that they could have used the decryption key earlier to save on recovery costs, the Washington Post reported.

    US officials weigh a number of factors when considering law enforcement operations to disrupt cybercriminal groups, a senior FBI official told CNN, including how the disruption will impact the broader cybercriminal ecosystem, how the FBI can help victims of the hackers recover, and the long-term “pursuit of justice” for the victims.

    “Each case is different as far as what access [to the hackers’ infrastructure] looks like … what can be done quietly versus noisily,” the senior FBI official said. “Those all go into it.”

    John Riggi, a former senior FBI official who is now national adviser for cybersecurity and risk at the American Hospital Association, applauded the disruption of Hive and hoped the crackdown on ransomware groups would continue. But ransomware attacks on health care organizations will likely continue as long as the hackers are getting paid off and are willing to tolerate the risk of carrying out the attacks, Riggi said.

    Some cybercriminals “still view their attacks on hospitals as primarily data and financially motivated,” he told CNN.

    One lingering problem for the FBI: Not enough victims are reporting ransomware attacks, leaving the bureau in the dark about the scope of the threat. Just 20% of Hive’s victims reported an incident to the FBI, Wray said last week.

    “I still think that people have concerns that when they call the FBI that we’re going to come in with coats and we’re going to take their servers and they’re going to lose control of their business,” the senior FBI official told CNN. “And that’s so far from the truth, but most people are not interacting with the FBI on a daily basis.”


  • ASML says ‘rules are being finalized’ on chip export controls to China | CNN Business



    Hong Kong
    CNN
     — 

    ASML, a Dutch maker of semiconductor equipment, says “rules are being finalized” on export controls, amid reports that the Netherlands and Japan have joined the United States in restricting sales of some computer chip machinery to China.

    “It is our understanding that steps have been made towards an agreement between governments which, to our understanding, will be focused on advanced chip manufacturing technology, including but not limited to advanced lithography tools,” the company told CNN late Friday in response to questions about export controls to China.

    “Before it will come into effect it has to be detailed out and implemented into legislation which will take time.”

    ASML is known for its prowess in making lithography machines, which use light to print patterns on silicon. The firm says that step is crucial in the mass production of microchips.

    The company’s response came as Bloomberg, the Wall Street Journal and the Financial Times reported over the weekend that the United States had persuaded the Netherlands and Japan to agree to curb exports of certain chipmaking equipment to China, citing anonymous sources.

    A deal was reached at the White House on Friday, though it was not officially announced, partly due to “concerns by Japan and Netherlands about potential retaliation by China,” according to the Journal, which cited a person familiar with the matter.

    Bloomberg reported that the deal “would extend some export controls the US adopted in October” to Dutch and Japanese companies, including ASML (ASML), Nikon (NINOY) and Tokyo Electron.

    The Biden administration had banned Chinese companies from buying advanced chips and chipmaking equipment without a license. It also restricted the ability of American citizens to provide support for the development or production of chips at certain manufacturing facilities in China.

    The White House did not immediately respond to a request for comment outside US business hours. Nikon and Tokyo Electron declined to comment.

    On Saturday, Japan’s Economy and Trade Minister Yasutoshi Nishimura told reporters that he would “refrain from commenting on diplomatic negotiations.”

    Asked about the three-way talks in Washington, Nishimura said “we would like to respond appropriately while taking into consideration the regulatory trends in each country.”

    Because of its dominance in the market, ASML has been cited by experts as a bellwether of the growing rift between China and the West over access to advanced technology.

    In recent months, the Dutch government has faced pressure from the United States to limit chip-related exports to China, particularly from ASML, according to Xiaomeng Lu, director of geo-technology at the Eurasia Group.

    In its Friday statement, the company said that based on what has been said by government officials and current market conditions, it did not expect any material impact on its financial projections for 2023.

    But ASML said its knowledge of the new rules was still limited, making it difficult to map out “the medium and long-term financial, organizational and global industry-wide impact of new export control rules.”

    “While these rules are being finalized, ASML will continue to engage with the authorities to inform them about the potential impact of any proposed rule in order to assess the impact on the global semiconductor supply chain,” it said.

    It noted that it mainly sold “mature” products to China, and its most advanced lithography technology had already been restricted since 2019.

    Those machines had been prohibited from being sent to China because the Dutch government had “refused to grant it a license under US pressure,” Lu previously told CNN.

    — CNN’s Emiko Jozuka contributed to this report.


  • Manhunt continues for ‘extremely dangerous’ kidnapping suspect who may be using dating apps to evade capture, police say | CNN




    CNN
     — 

    A sweeping multi-day manhunt continues for a suspect accused of brutally beating and kidnapping a woman in Oregon who remains in critical condition, according to police.

    While Benjamin Obadiah Foster, 36, has evaded capture since Tuesday, police say he is still active on dating apps. The Grants Pass Police Department warns he may be using the apps to find potential new victims or manipulate them into helping him escape.

    State and local investigators have been working “around the clock” to find Foster, who is wanted on suspicion of attempted murder, kidnapping and assault, Grants Pass Police Chief Warren Hensman has said.

    Investigators have been searching for Foster since Tuesday after they found a woman bound and beaten into unconsciousness in a residence in Grants Pass, police said. The suspect, identified by investigators as Foster, had already fled by the time police arrived, the department said.

    Prosecutors accuse Foster of trying to kill the victim while “intentionally torturing” her, according to charging documents obtained by CNN affiliate KDRV. Hensman said Thursday that the victim had been enduring the alleged abuses for a “protracted amount of time.”

    “I’m disgusted by what I know happened. This was an evil act,” Hensman said Thursday.

    The victim was brought to a local hospital where she remains in critical condition, police said Sunday. As of Thursday, police were providing security for the victim, according to Hensman.

    Police said Foster “likely received assistance in fleeing the area.” A 68-year-old woman has been arrested for “Hindering Prosecution” as authorities searched for Foster, the department has said.

    Police are urging the public to send in tips on the suspect’s whereabouts or any potential sightings. In a statement Sunday, the department said people should pay particular attention to his eyes and facial structure, as they believe he may try to alter his appearance by changing the cut or color of his hair and beard.

    In the statement, police said people should not approach the “extremely dangerous suspect” and should instead call 911 immediately. Authorities have said Foster could be armed.

    The department has set up a tip line and is offering a $2,500 reward for information leading to Foster’s capture and prosecution.

    “This is an all hands on deck operation and we won’t rest until we capture this man,” Hensman said on Thursday.

    During a Thursday press conference, Hensman said he is “troubled” by Foster’s history of domestic violence and assault charges, which are detailed in court records.

    Between 2017 and 2019, Foster was charged in two separate cases in which he was accused of attacking women in Las Vegas, according to Clark County court records.

    In the first case, Foster was charged with felony battery constituting domestic violence, the records show. Foster’s ex-girlfriend testified in a preliminary hearing that he tried to strangle her on Christmas Eve of 2017 after he saw that another man had texted her.

    While that case was still pending, Foster was charged with felony assault, battery and kidnapping for alleged abuses against his then-girlfriend in 2019, according to charging documents.

    The victim told police “Foster strangled (her) to the point of unconsciousness several times” and kept her tied up for most of the next two weeks. She said she was only able to escape after convincing Foster they needed to go shopping for food and water, and ran away when he got out of the car to let their dog use the bathroom, the court records show.

    The woman was able to run through a store and into a nearby apartment complex, where somebody offered to take her to a hospital, according to a Las Vegas police report. There, she was found to have seven broken ribs, two black eyes and abrasions to her wrists and ankles from being tied up, the report said.

    Foster accepted plea deals in both cases. In the first case, he was sentenced to a maximum of 30 months in prison but given credit for 729 days served.


  • Man suspected of kidnapping and beating a woman in Oregon may be using dating apps to evade police | CNN




    CNN
     — 

    Authorities in southwestern Oregon are warning that a man suspected of kidnapping a woman and beating her unconscious may now be using dating apps to evade capture or find potential new victims, according to police.

    The suspect, 36-year-old Benjamin Obadiah Foster, has so far evaded capture but he appears active on online dating services, the Grants Pass Police Department said in a statement Friday.

    “The investigation has revealed that the suspect is actively using online dating applications to contact unsuspecting individuals who may be lured into assisting with the suspect’s escape or potentially as additional victims,” Grants Pass Police said.

    The search for Foster began Tuesday after officers found a woman who had been bound and severely beaten into unconsciousness, Grants Pass Police said. She was taken to a hospital in critical condition and is being guarded while the suspect remains at large, police said.

    The man fled the scene before officers arrived, but investigators identified Foster as the suspect and asked members of the public to call 911 immediately if they see him, warning he “should be considered extremely dangerous.”

    Police said Foster “likely received assistance in fleeing the area.” A 68-year-old woman was arrested “for Hindering Prosecution” as authorities searched for the suspect, according to the department.

    As the search continues, a $2,500 reward has been offered for information leading to Foster’s capture. Police said he is wanted on suspicion of kidnapping, attempted murder and assault.

    Prosecutors accused Foster of attempting to kill the victim “in the course of intentionally torturing” the woman, according to charging documents filed in court and obtained by CNN affiliate KDRV.

    “This is a very serious offense – a brutal assault on one of our residents that we take extremely serious and we will not rest until we capture this individual,” Grants Pass Police Chief Warren Hensman said in a news conference Thursday.

    This is not the first time Foster has been accused by authorities of violence against women.

    Court records in Clark County, Nevada, show that Foster was charged years earlier in two separate cases accusing him of attacking women.

    In the first case, Foster was charged with felony battery constituting domestic violence, court documents show. Foster’s ex-girlfriend testified in a preliminary hearing that he had attempted to strangle her in a rage in 2017 after another man texted her.

    While that case was still pending in court, Foster was charged with felony assault, battery and kidnapping for allegedly attacking another woman – his girlfriend at the time – in 2019, charging documents show.

    The victim told police “Foster strangled (her) to the point of unconsciousness several times” and kept her tied up for most of the next two weeks. She said she was only able to gain her freedom after convincing Foster they needed to go shopping for provisions, and escaped while in a store, according to the court records.

    The woman was left with seven broken ribs, two black eyes and abrasions to her wrists and ankles from being tied up, according to a Las Vegas police report.

    Foster ultimately agreed to plea deals in the cases, the documents read. He was sentenced to a maximum of 30 months in prison but given credit for 729 days served in the first case.

    “Am I troubled by what I know already? The answer is yes,” Hensman said when asked about the previous charges in Nevada.

    “We’re laser focused on capturing this man and bringing him to justice,” Hensman said.


  • Madison Square Garden CEO doubles down on use of facial recognition tech | CNN Business




    CNN
     — 

    The chief executive of the Madison Square Garden Entertainment Corporation has doubled down on using facial recognition at its venues to bar lawyers suing the group from attending events.

    Speaking to Fox 5 on Thursday, MSG Executive Chairman and CEO James Dolan said Madison Square Garden is a private company and therefore entitled to determine who is allowed to enter its venues for events.

    “At Madison Square Garden, if you’re suing us, we’re just asking of you – please don’t come until you’re done with your argument with us,” he said. “And yes, we’re using facial recognition to enforce that.”

    His comments come after New York Attorney General Letitia James on Wednesday sent a letter to MSG Entertainment requesting information regarding its use of facial recognition technology to prohibit legitimate ticketholders from entering venues. The letter said the attorney general’s office has reviewed reports MSG Entertainment has used facial recognition to identify and deny entry to multiple lawyers affiliated with law firms involved in ongoing litigation with the company. The letter indicates thousands of attorneys from around 90 law firms may have been impacted by the policy, and said the ban includes those holding season tickets.

    The attorney general’s letter raised the concern that banning individuals from accessing venues over ongoing litigation may violate local, state, and federal human rights laws, including laws prohibiting retaliation. The letter also questions whether the facial recognition software used by MSG Entertainment is reliable and what safeguards are in place to avoid bias and discrimination.

    In a press release, James said, “MSG Entertainment cannot fight their legal battles in their own arenas. Madison Square Garden and Radio City Music Hall are world-renowned venues and should treat all patrons who purchased tickets with fairness and respect. Anyone with a ticket to an event should not be concerned that they may be wrongfully denied entry based on their appearance, and we’re urging MSG Entertainment to reverse this policy.”

    MSG Entertainment owns and operates several venues in New York, including Madison Square Garden, Radio City Music Hall, the Hulu Theater, and the Beacon Theatre. Madison Square Garden is the home of the New York Knicks, Rangers, professional boxing, and college basketball teams.

    In a statement Thursday, an MSG spokesperson told CNN, “To be clear, our policy does not unlawfully prohibit anyone from entering our venues and it is not our intent to dissuade attorneys from representing plaintiffs in litigation against us. We are merely excluding a small percentage of lawyers only during active litigation.”

    “Most importantly,” the spokesperson added, “to even suggest anyone is being excluded based on the protected classes identified in state and federal civil rights laws is ludicrous. Our policy has never applied to attorneys representing plaintiffs who allege sexual harassment or employment discrimination.”

    In the Fox 5 interview Thursday, Dolan said when the attorneys suing MSG finish their litigation, they will be welcome back to the venues. “If your next door neighbor sues you, if somebody sues you, right, that’s confrontational. It’s adversarial and it’s fine, people are allowed to sue,” he said. “But at the same time, if you’re being sued, right, you don’t have to welcome the person into your home, right?”

    Dolan defended the use of facial recognition technology, saying it’s useful for security and noting that he believes Madison Square Garden to be one of the safest venues in the country. “Basically, anytime that you go out in public, you’re on camera,” he said. “Believe me, you walk down the street, you’re on camera, you’re on 10 cameras. What facial recognition does is looks at, you know, recognizes your face, and says you know, are you someone who’s on this list.”
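    Dolan’s description lines up with how such systems typically work: software reduces a face in a camera frame to a numeric “embedding,” then compares it against embeddings of people on a list. Below is a minimal sketch of that comparison step, with made-up three-number embeddings and an arbitrary threshold standing in for the hundreds of dimensions and tuned cutoffs a real deep-learning system would use:

    ```python
    import math

    def cosine_similarity(a, b):
        """Similarity of two embeddings: 1.0 means identical direction."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    def on_watchlist(face_embedding, watchlist, threshold=0.95):
        """Return the name of the closest watchlist match above the
        threshold, or None if the face matches nobody on the list."""
        best_name, best_score = None, threshold
        for name, listed in watchlist.items():
            score = cosine_similarity(face_embedding, listed)
            if score >= best_score:
                best_name, best_score = name, score
        return best_name

    # Hypothetical embeddings for people on the venue's list.
    watchlist = {"attorney_a": [0.9, 0.1, 0.4], "attorney_b": [0.2, 0.8, 0.5]}
    print(on_watchlist([0.88, 0.12, 0.41], watchlist))  # prints "attorney_a"
    print(on_watchlist([0.0, 0.0, 1.0], watchlist))     # no match: prints "None"
    ```

    The reliability and bias questions in the attorney general’s letter turn largely on how that threshold is set and how well the embedding model works across different faces.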

    Dolan claimed the State Liquor Authority has threatened MSG’s license over its use of facial recognition technology. The New York State Liquor Authority told CNN it issued a “letter of advice” to MSG, after receiving a complaint in mid-November over attorneys engaged in litigation against the company not being allowed to enter its premises.

    “After receiving a complaint, the State Liquor Authority followed standard procedure and issued a Letter of Advice explaining this business’ obligation to keep their premises open to the public, as required by the Alcoholic Beverage Control Law,” Joshua Heller, a State Liquor Authority spokesperson, told CNN.

    The SLA told CNN an investigation into the matter is “ongoing”.

    During the Fox interview, Dolan apparently threatened to shut down sales of liquor during an unspecified upcoming New York Rangers game, and said he would direct any upset patrons to the liquor authority to complain.

    Dolan also pushed back at the suggestion that he’s being “too sensitive.”

    “The Garden has to defend itself,” Dolan said. “If you sue us, right, you know we’re going to tell you not to come.”


  • How Google’s long period of online dominance could end | CNN Business



    Washington
    CNN
     — 

    For the better part of 15 years, Google has seemed like an unstoppable force, powered by the strength of its online search engine and digital advertising business. But both now look increasingly vulnerable.

    This week, the Justice Department accused Google of running an illegal monopoly in its online advertising business and called for parts of it to be broken up. The case comes a couple of years after the Trump administration filed a similar suit going after the tech giant’s dominance in search.

    Google said the Justice Department is “doubling down on a flawed argument” and that the latest suit “attempts to pick winners and losers in the highly competitive advertising technology sector.” If successful, however, both blockbuster cases could upend a business model that’s made Google the most powerful advertising company on the internet. It would be the most consequential antitrust victory against a tech giant since the US government took on Microsoft more than 20 years ago.

    But even though the lawsuits drive at the heart of Google’s revenue machine, they could take years to play out. In the meantime, two other thorny issues are poised to determine Google’s future on a potentially shorter timeframe: the rise of generative artificial intelligence and what appears to be an accelerating decline in Google’s online ad market share.

    Just days before the DOJ suit, Google announced plans to cut 12,000 employees amid a dramatic slowdown in its revenue growth, and as it works to refocus its efforts partly around AI.

    Google has long been synonymous with online searches; it was one of the first modern tech companies whose name would become a verb. But a new threat emerged late last year when OpenAI, an artificial intelligence research company, publicly released a viral new AI chatbot tool called ChatGPT.

    Users of ChatGPT have showcased the bot’s ability to create poetry, draft legal documents, write code and explain complex ideas with little more than a simple prompt. Trained on a vast amount of online data, ChatGPT can generate lengthy responses to open-ended questions (though it is prone to errors) or answer simple questions – “Who was the 25th president of the United States?” – that one might previously have had to scroll through Google search results to find.

    While ChatGPT’s underlying technology has existed for some time, the fact that anyone can create an account and experiment with the tool has generated enormous hype around generative AI and made the technology’s potential instantly understandable to millions in a way that was only abstract before. It has also reportedly prompted Google’s management to declare a “code red” situation for its search business.

    “Google may be only a year or two away from total disruption. AI will eliminate the Search Engine Result Page, which is where they make most of their money,” Paul Buchheit, one of the creators of Gmail, tweeted last year. “Even if they catch up on AI, they can’t fully deploy it without destroying the most valuable part of their business!”

    If more users begin to rely on AI for their information needs, the argument goes, it could undercut Google’s search advertising, which is part of a $149 billion business segment at the company. Media coverage of ChatGPT has amplified this notion, with some outlets pitting ChatGPT against Google in head-to-head tests.

    There are some reasons to doubt this nightmare scenario might play out for Google.

    For one thing, Google operates at a vastly different scale. In November, Google’s website received more than 86 billion visits, compared to less than 300 million for ChatGPT, according to the traffic analysis website SimilarWeb. (ChatGPT was released publicly in late November.) For another, even in a world where Google provides specific, AI-generated responses to user queries, it could still analyze the queries to provide search advertising, just as it does today.

    Google has its own investments in highly sophisticated artificial intelligence. One of its AI-driven chat programs, LaMDA, even became a flashpoint last year after an engineer at the company claimed it had achieved sentience. (Google has disputed the claim and fired the engineer for breaches of company policy.)

    Google CEO Sundar Pichai has reportedly told employees that even though Google has similar capabilities to ChatGPT, the company has yet to commit to giving out AI-generated search responses because of the risk of providing inaccurate information, which could be detrimental to Google in the long run.

    Google’s stance highlights both its incredible influence, as the most trusted search engine on earth, and one of the core problems of generative AI: Due to its black-box design, it is virtually impossible to determine how the model arrived at a specific result. For many people, and for many years to come, being able to evaluate different sources of information for themselves may trump the convenience of receiving a single answer.

    All this has taken place against the backdrop of what seems to be an extended, multi-year decline in Google’s online advertising market share. Google’s position in digital advertising peaked in 2017 with 34.7% of the US market, according to third-party industry estimates, and is on pace to account for 28.8% this year.

    Google isn’t the only advertising giant to experience this trend. One-off factors like the pandemic and the war in Ukraine, as well as fears of a looming recession, have broadly affected the online advertising industry. Others, like Facebook-parent Meta, have been particularly susceptible to systemic changes such as Apple’s app privacy updates restricting the amount of information marketers can access about iOS users.

    But the decline also comes as Google faces new competition in the market. Rivals including Amazon, TikTok and even Apple have been attracting an increasing share of the digital advertising pie.

    Whatever the cause, Google’s advertising business, which is still massive, seems to face growing headwinds. And those headwinds could be exacerbated if some of the predictions about generative AI come to pass, or if the Justice Department’s lawsuits ultimately weaken Google’s grip on digital advertising.

    As part of the case, the US government has asked a federal court to unwind two acquisitions that allegedly helped cement a Google monopoly in advertising. Dismantling Google’s tightly integrated ads machine will restore competition and make it harder for Google to extract monopoly profits, according to the US government.

    This and other antitrust suits — though threatening in their own right — simply add pressure to the broader dilemma facing Google as it stares down a new era of potentially tumultuous technological change.


  • How Microsoft could use ChatGPT to supercharge its products | CNN Business

    CNN —

    Is ChatGPT the new Clippy?

    Shortly after Microsoft confirmed plans this week to invest billions in OpenAI, the company behind the viral new AI chatbot tool ChatGPT, some people began joking on social media that the technology would help supercharge the much-hated, wide-eyed, paperclip-shaped virtual assistant.

    While Clippy may mostly be a thing of the past, the company’s move to double down on AI tools offers the promise of doing what Clippy never quite achieved: transforming how we work.

    “There is a kernel of truth to the Clippy comparison,” said David Lobina, an artificial intelligence analyst at ABI Research. “Clippy was not based on AI – or machine learning – but ChatGPT is a rather sophisticated auto-completion tool, and in that sense it is a much better version of Clippy.”
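    Lobina’s “auto-completion” framing can be illustrated with a toy next-word predictor. The sketch below is a deliberately tiny stand-in of our own devising (the function names are illustrative): it just counts which word most often follows which, whereas models like ChatGPT learn probabilities over tokens at enormous scale.

```python
# Toy auto-completion: predict each next word as the most frequent
# follower seen in the training text.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def complete(followers, word, length=3):
    """Greedily extend `word` by its most common follower, `length` times."""
    out = [word.lower()]
    for _ in range(length):
        candidates = followers.get(out[-1])
        if not candidates:
            break  # this word was never seen followed by anything
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat the cat sat on the log")
print(complete(model, "the"))  # the cat sat on
```

    The greedy “most common follower” rule is what makes even this toy version feel like a (very poor) autocomplete; real language models sample from a learned distribution instead of taking raw counts.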

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists. Some CEOs have even used it to write emails or do accounting work.

    For Microsoft, integrating the chatbot tool could make its core software products more powerful. Some potential use cases include writing lines of text for a PowerPoint presentation, drafting an essay in Word or doing automatic data entry in Excel spreadsheets. For Microsoft’s search engine Bing, ChatGPT could provide more personalized search results and better summarize web pages.

    All of the above suggestions were generated by asking ChatGPT various forms of the question, “How could Microsoft integrate ChatGPT into its products?” Microsoft, for its part, has said little about possible integrations beyond recently announcing plans to add ChatGPT features to its cloud computing service.

    “Microsoft will deploy OpenAI’s models across our consumer and enterprise products and introduce new categories of digital experiences built on OpenAI’s technology,” Microsoft said in a press release this week, announcing the expanded partnership.

    When Microsoft first invested in OpenAI in 2019, CEO Satya Nadella said he believed artificial intelligence would be “one of the most transformative technologies of our time.” But it arguably wasn’t until last year, with multiple new releases from OpenAI, including ChatGPT and the powerful image generator DALL-E, that the significant potential of the partnership became widely apparent.

    Suddenly, Microsoft appears to be in a frontrunner position in Silicon Valley’s high-stakes AI race. It is now working closely with a company, OpenAI, and a product, ChatGPT, that have reportedly caught Google off guard and seemingly sparked some frustration from Meta’s chief AI scientist.

    “Microsoft is not a leader in AI research at present, but with this exclusive deal with OpenAI, they are going to be catapulted into the heart of things,” Lobina said.

    The OpenAI investment was announced days after Microsoft confirmed plans to lay off 10,000 employees as part of broader cost-cutting measures. Nadella said the company will continue to invest in “strategic areas for our future” and pointed to advances in AI as “the next major wave” of computing.

    Jason Wong, an analyst at market research firm Gartner, told CNN it makes sense why Microsoft is aggressively pursuing AI, calling it “the secret sauce for applications built and running on the cloud.”

    But there could be risks for Microsoft in using and being associated with OpenAI’s technology. Both ChatGPT and DALL-E are trained on vast amounts of data in order to generate content. That has raised some concerns about the potential of these tools to perpetuate biases found in that data and to spread misinformation. For Microsoft, that could make integrating the tool into specific products problematic.

    “Systems such as ChatGPT can be rather unreliable, making up stuff as they go and giving different answers to the same questions – not to mention the sexist and racist biases,” Lobina said. Microsoft, he said, will likely want to “wait before letting GPT systems answer online search queries.”

    While ChatGPT has gained traction among users, a growing number of schools and teachers are also concerned about the immediate impact of ChatGPT on students and their ability to cheat on assignments. Integrating ChatGPT too quickly into Microsoft’s products could run the risk of schools rethinking their use of that software.

    Despite issues that could potentially create negative publicity for the companies associated with these tools, Microsoft clearly recognizes its opportunity to become an AI leader.

    “Microsoft continues to spend significant research and development on AI and innovations that require AI behind it, such as computer vision technologies, but [these technologies] are not as apparent to its users,” said Wong from Gartner. “This is the phenomenon of ‘everyday AI’ where AI is just in the background and customers take it for granted.”

    With the unveiling of ChatGPT, he said, OpenAI’s potential has been shown “to the masses.” The same may be true of Microsoft.


  • Southwest Airlines is testing a software fix it developed after the Christmas travel meltdown | CNN Business

    Washington CNN —

    Southwest Airlines said it is testing software fixes that the company developed after its Christmas travel meltdown, as the airline faces multiple federal investigations.

    The software fixes are an “upgrade,” rather than a replacement of the crew scheduling system, Southwest executives said on a conference call with reporters Thursday. The airline and its employees have said the scheduling software left the company unable to recover from winter storms on some of the busiest travel days of the year and caused it to cancel more than 16,700 flights between December 21 and 29, roughly half its schedule during that period.

    The company decided to keep the underlying software system because it “generally worked as designed” even during the meltdown, CEO Bob Jordan said. The software’s shortcoming, he said, is “solving past problems.”

    The company is currently testing the software and expects to begin using it “in a few weeks’ time.”

    Southwest’s cancellations dwarfed those of other airlines during the Christmas storm because crew members had to call the airline by phone, rather than notify the company electronically, to report their availability.

    “That was a problem,” Andrew Watterson, Southwest’s chief operating officer, said Thursday. “It wasn’t the problem for the situation. It was a symptom of the problem.”

    Switching to electronic notification would require a change in the labor contracts with pilots and flight attendants, said Jordan. Negotiations are now taking place on replacing the existing contracts covering all issues, including pay and benefits.

    Other changes stemming from the company’s review of its winter meltdown include a new team in its command center, telephone system improvements, and better preparedness for bitterly cold weather.

    “We’re looking at de-icing procedures top to bottom, we’re buying more engine covers for extremely cold weather, we’re looking at fuel mixes for ground equipment when you have sub-zero temperatures,” Jordan said.

    The company said it doesn’t have a cost estimate for the fix.

    “We haven’t even talked cost, so I don’t know if it’s going to cost us anything or not,” Watterson said.

    The airline’s executives also pushed back on the Department of Transportation’s announcement late Wednesday that it is investigating whether Southwest “engaged in unrealistic scheduling of flights” by selling more tickets than it could handle.

    If that were the case, “then you’d expect to see poor on-time performance, poor reliability” even on good weather days, Watterson told reporters on a conference call Thursday.

    “You don’t see the signs of a schedule that is out of whack with the resources’ ability to operate, given our strong operating performance over the last three months,” Watterson said.

    In addition to the DOT investigation, the ongoing reviews include an internal probe, one led by its board of directors, and an external inquiry conducted by a consultancy firm. That external report should be delivered in the coming weeks and “we will attack it with a sense of urgency,” Jordan said.

    – CNN’s Chris Isidore contributed to this report


  • BuzzFeed’s CEO says AI could usher in a ‘new model for digital media,’ but warns against a ‘dystopian’ path | CNN Business

    New York CNN —

    Over the holidays, while most media executives were perhaps looking to get a reprieve from work, Jonah Peretti was online, fully immersed in experimenting with artificial intelligence.

    The BuzzFeed co-founder and chief executive, who has always raced to test out the latest technologies, was familiar with AI and predictions of how it could one day revolutionize the media industry. In fact, BuzzFeed had dabbled in using it over the years.

    A version of this article first appeared in the “Reliable Sources” newsletter. Sign up for the daily digest chronicling the evolving media landscape here.

    But Peretti, sitting in his California home in late December, started probing how the developing robot writing technology could quickly be infused into the very DNA of BuzzFeed.

    In a phone interview Thursday, Peretti said that as he and a handful of colleagues prototyped how the technology could be used to enhance the site’s hallmark quizzes, interactive articles, and other types of content, he found himself genuinely having fun. “It started to feel like we were all playing,” Peretti recalled.

    That “playful work,” as he described it, soon “led to multiple Google docs full of the implications of the technology and how [BuzzFeed] could build this into our platform and how we could extend it to other formats.”

    Those efforts culminated in Peretti’s formal announcement on Thursday that BuzzFeed will work with ChatGPT creator OpenAI to assist in the creation of content for its audience and move artificial intelligence into the “core business.”

    Peretti said that he understood people might read the news and conclude that BuzzFeed was, in short, moving to replace humans with robots. But Peretti insisted that is not his vision for the technology, even as he predicted other companies will likely go down that dark path.

    “I think that there are two paths for AI in digital media,” Peretti said. “One path is the obvious path that a lot of people will do — but it’s a depressing path — using the technology for cost savings and spamming out a bunch of SEO articles that are lower quality than what a journalist could do, but a tenth of the cost. That’s one vision, but to me, that’s a depressing vision and a shortsighted vision because in the long run it’s not going to work.”

    “The other path,” Peretti continued, “which is the one that gets me really excited, is the new model for digital media that is more personalized, more creative, more dynamic — where really talented people who work at our company are able to use AI together and entertain and personalize more than you could ever do without AI.”

    Put more simply, Peretti said he envisions artificial intelligence being used to enhance the work of his employees, not replace them.

    The example the company provided is the BuzzFeed quiz. Typically, a human would write the questions and perhaps a dozen responses that would be delivered to the user based on their inputs. But, with AI, the staffer could write the questions and the software could spit out a highly personalized response for the user. In the supplied example, a user would take a quick quiz and the AI would write a short RomCom using the data provided.

    “We don’t have to train the AI to be as good as the BuzzFeed writers because we have the BuzzFeed writers, so they can inject language, ideas, cultural currency and write them into prompts and the format,” Peretti said. “And then the AI pulls it together and creates a new piece of content.”

    Peretti indicated that he had no interest in utilizing artificial intelligence to replace human journalists for authoring news articles, as the technology outlet CNET recently did with disastrous consequences (dozens of the outlet’s stories written by AI were riddled with errors that required correcting).

    “There’s the CNET path, and then there is the path that BuzzFeed is focused on,” Peretti said. “One is about costs and volume of content, and one is about ability.”

    “Even if there are a lot of bad actors who try to use AI to make content farms, it won’t win in the long run,” Peretti predicted. “I think the content farm model of AI will feel very depressing and dystopian.”


  • Video: How Elon Musk’s Twitter drama impacts Tesla and how ChatGPT can be useful to students on CNN Nightcap | CNN Business


    CNN’s Allison Morrow tells “Nightcap’s” Jon Sarlin that Elon Musk’s Twitter antics are damaging Tesla’s brand. Plus, high school teacher Cherie Shields argues that ChatGPT is an excellent teaching tool and schools are making a mistake if they ban the AI technology. To get the day’s business headlines sent directly to your inbox, sign up for the Nightcap newsletter.


  • ChatGPT passes exams from law and business schools | CNN Business

    CNN —

    ChatGPT is smart enough to pass prestigious graduate-level exams – though not with particularly high marks.

    The powerful new AI chatbot tool recently passed law exams in four courses at the University of Minnesota and another exam at the University of Pennsylvania’s Wharton School of Business, according to professors at the schools.

    To test how well ChatGPT could generate answers on exams for the four courses, professors at the University of Minnesota Law School recently graded the tests blindly. After completing 95 multiple choice questions and 12 essay questions, the bot performed on average at the level of a C+ student, achieving a low but passing grade in all four courses.

    ChatGPT fared better during a business management course exam at Wharton, where it earned a B to B- grade. In a paper detailing the performance, Christian Terwiesch, a Wharton business professor, said ChatGPT did “an amazing job” at answering basic operations management and process-analysis questions but struggled with more advanced prompts and made “surprising mistakes” with basic math.

    “These mistakes can be massive in magnitude,” he wrote.

    The test results come as a growing number of schools and teachers express concerns about the immediate impact of ChatGPT on students and their ability to cheat on assignments. Some educators are now moving with remarkable speed to rethink their assignments in response to ChatGPT, even as it remains unclear how widespread use is of the tool among students and how harmful it could really be to learning.

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists. Some CEOs have even used it to write emails or do accounting work.

    ChatGPT is trained on vast amounts of online data in order to generate responses to user prompts. While it has gained traction among users, it has also raised some concerns, including about inaccuracies and its potential to perpetuate biases and spread misinformation.

    Jon Choi, one of the University of Minnesota law professors, told CNN the goal of the tests was to explore ChatGPT’s potential to assist lawyers in their practice and to help students in exams, whether or not it’s permitted by their professors, because the questions often mimic the writing lawyers do in real life.

    “ChatGPT struggled with the most classic components of law school exams, such as spotting potential legal issues and deep analysis applying legal rules to the facts of a case,” Choi said. “But ChatGPT could be very helpful at producing a first draft that a student could then refine.”

    He argues human-AI collaboration is the most promising use case for ChatGPT and similar technology.

    “My strong hunch is that AI assistants will become standard tools for lawyers in the near future, and law schools should prepare their students for that eventuality,” he said. “Of course, if law professors want to continue to test simple recall of legal rules and doctrines, they’ll need to put restrictions in place like banning the internet during exams to enforce that.”

    Likewise, Wharton’s Terwiesch found the chatbot was “remarkably good” at modifying its answers in response to human hints, such as reworking answers after pointing out an error, suggesting the potential for people to work together with AI.

    In the short term, however, discomfort remains over whether and how students should use ChatGPT. Public schools in New York City and Seattle, for example, have already banned students and teachers from using ChatGPT on district networks and devices.

    Considering ChatGPT performed above average on his exam, Terwiesch told CNN he agrees restrictions should be put in place for students while they’re taking tests.

    “Bans are needed,” he said. “After all, when you give a medical doctor a degree, you want them to know medicine, not how to use a bot. The same holds for other skill certification, including law and business.”

    But Terwiesch believes this technology still ultimately has a place in the classroom. “If all we end up with is the same educational system as before, we have wasted an amazing opportunity that comes with ChatGPT,” he said.


  • One news publication had an AI tool write articles. It didn’t go well | CNN Business

    New York CNN —

    News outlet CNET said Wednesday it has issued corrections on a number of articles, including some that it described as “substantial,” after using an artificial intelligence-powered tool to help write dozens of stories.

    The outlet has since hit pause on using the AI tool to generate stories, CNET’s editor-in-chief Connie Guglielmo said in an editorial on Wednesday.

    The disclosure comes after CNET was previously called out publicly for quietly using AI to write articles and later for errors. While using AI to automate news stories is not new – the Associated Press began doing so nearly a decade ago – the issue has gained new attention amid the rise of ChatGPT, a viral new AI chatbot tool that can quickly generate essays, stories and song lyrics in response to user prompts.

    Guglielmo said CNET used an “internally designed AI engine,” not ChatGPT, to help write 77 published stories since November. She said this amounted to about 1% of the total content published on CNET during the same period, and was done as part of a “test” project for the CNET Money team “to help editors create a set of basic explainers around financial services topics.”

    Some headlines from stories written using the AI tool include, “Does a Home Equity Loan Affect Private Mortgage Insurance?” and “How to Close A Bank Account.”

    “Editors generated the outlines for the stories first, then expanded, added to and edited the AI drafts before publishing,” Guglielmo wrote. “After one of the AI-assisted stories was cited, rightly, for factual errors, the CNET Money editorial team did a full audit.”

    The result of the audit, she said, was that CNET identified additional stories that required correction, “with a small number requiring substantial correction.” CNET also identified several other stories with “minor issues such as incomplete company names, transposed numbers, or language that our senior editors viewed as vague.”

    One correction, which was added to the end of an article titled “What Is Compound Interest?” states that the story initially gave some wildly inaccurate personal finance advice. “An earlier version of this article suggested a saver would earn $10,300 after a year by depositing $10,000 into a savings account that earns 3% interest compounding annually. The article has been corrected to clarify that the saver would earn $300 on top of their $10,000 principal amount,” the correction states.
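    The corrected figure is easy to verify: with annual compounding, the balance after n years is principal × (1 + rate)^n, and the interest earned is the balance minus the principal. A quick sketch (the function name is ours, for illustration):

```python
# Annual compounding: balance = principal * (1 + rate) ** years.
def compound_balance(principal, rate, years):
    return principal * (1 + rate) ** years

principal = 10_000
balance = compound_balance(principal, rate=0.03, years=1)
# $10,300 is the total balance; the interest *earned* is $300.
print(round(balance - principal, 2))  # 300.0
```

    The AI-written article’s mistake was reporting the year-end balance ($10,300) as the amount the saver would “earn,” conflating the total with the interest on top of the principal.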

    Another correction suggests the AI tool plagiarized. “We’ve replaced phrases that were not entirely original,” according to the correction added to an article on how to close a bank account.

    Guglielmo did not state how many of the 77 published stories required corrections, nor did she break down how many required “substantial” fixes versus more “minor issues.” Guglielmo said the stories that have been corrected include an editors’ note explaining what was changed.

    CNET did not immediately respond to CNN’s request for comment.

    Despite the issues, Guglielmo left the door open to resuming use of the AI tool. “We’ve paused and will restart using the AI tool when we feel confident the tool and our editorial processes will prevent both human and AI errors,” she said.

    Guglielmo also said that CNET has more clearly disclosed to readers which stories were compiled using the AI engine. The outlet took some heat from critics on social media for not making it overtly clear to its audience that “By CNET Money Staff” meant it was written using AI tools. The new byline is just: “By CNET Money.”


  • Classic ‘GoldenEye 007’ game is coming to Nintendo Switch and Xbox | CNN Business

    CNN —

    James Bond fans may be waiting on the next actor who will play the British spy onscreen, but a beloved Bond adventure of yore is making its return.

    “GoldenEye 007,” a classic first-person shooter made for Nintendo 64 in 1997, is being revived for Nintendo Switch and Xbox more than 25 years later. For fans who subscribe to additional content on both gaming systems, the game will be available on Friday.

    Based on the 1995 film “GoldenEye,” the game follows a block-like version of Pierce Brosnan’s 007 as he shoots his way through various locales, all while a synthy version of the signature Bond theme plays. The Xbox version has been “faithfully recreated and enhanced,” said one ad for the re-release, while the Switch game features an online multiplayer mode.

    “GoldenEye 007” was a hit upon its release: IGN gave it a 9.7/10 in 1997, praising its graphics as “superb.” Contemporary players used to the lifelike visuals of popular games like “The Last of Us” and “Red Dead Redemption” may beg to differ, but the game still holds a nostalgic appeal for fans who spent their youths lasering their way through surfaces using Bond’s watch. Not to mention, its soundtrack remains iconic.

    To access the game, Switch users will have to subscribe to its Online membership plus its expansion pack, which includes some Nintendo 64 games and downloadable content for popular games like “Mario Kart 8 Deluxe” and “Animal Crossing: New Horizons.” Xbox players must subscribe to Xbox Game Pass, a service that allows players to access hundreds of games from its servers.

    The return of “GoldenEye 007,” often referred to as one of the greatest video games of all time, has been years in the making. The Verge reported last year that rights issues blocked developers from releasing it on newer consoles, including Xbox, since at least 2008. Undeterred N64 fans even attempted to remake the game themselves on several occasions, though the original rights holders usually shut them down. Now, Rare, the game’s original developer, has recreated it for Xbox with “a few modern touches,” while Nintendo is re-releasing the original on its Switch console.
