ReportWire

Tag: Artificial Intelligence

  • AI’s Errors May Be Impossible to Eliminate – What That Means For Its Use in Health Care


    By Carlos Gershenson | Professor of Innovation, Binghamton University, State University of New York

In the past decade, AI’s success has led to uncurbed enthusiasm and bold claims – even though users frequently encounter the errors AI makes. An AI-powered digital assistant can misunderstand someone’s speech in embarrassing ways, a chatbot can hallucinate facts, or, as I experienced, an AI-based navigation tool might even guide drivers through a cornfield – all without registering the errors.

The question is becoming urgent in health care, where U.S. lawmakers have proposed legislation that would allow AI systems to prescribe medications. How exactly such prescribing would work if this or similar legislation passes remains to be seen. But it raises the stakes for how many errors AI developers can allow their tools to make, and what the consequences would be if those tools led to negative outcomes – even patient deaths.


    For AI in particular, errors might be an inescapable consequence of how the systems work. My lab’s research suggests that particular properties of the data used to train AI models play a role. This is unlikely to change, regardless of how much time, effort and funding researchers direct at improving AI models.

    Nobody – And Nothing, Not Even AI – Is Perfect

    As Alan Turing, considered the father of computer science, once said: “If a machine is expected to be infallible, it cannot also be intelligent.” This is because learning is an essential part of intelligence, and people usually learn from mistakes. I see this tug-of-war between intelligence and infallibility at play in my research.

In a study published in July 2025, my colleagues and I showed that perfectly organizing certain datasets into clear categories may be impossible. In other words, there may be a minimum number of errors for a given dataset, simply because elements of many categories overlap. For some datasets – the core underpinning of many AI systems – AI will not perform better than chance.

For example, a model trained on a dataset of millions of dogs that logs only their age, weight and height will probably distinguish Chihuahuas from Great Danes with perfect accuracy. But it may make mistakes in telling apart an Alaskan malamute and a Doberman pinscher, since individuals of different breeds might fall within the same age, weight and height ranges.
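To see why overlap caps accuracy, consider a toy sketch of the idea in Python. The weight distributions below are invented for illustration, not taken from the study: when two classes overlap, even the optimal decision rule misclassifies every individual that falls on the wrong side of the boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented, overlapping weight distributions (kg) for two breeds --
# illustrative numbers only, not data from the study.
malamute = rng.normal(38.0, 5.0, size=100_000)
doberman = rng.normal(40.0, 5.0, size=100_000)

# With equal-sized groups and equal spreads, the best possible rule
# from weight alone is a threshold at the midpoint, 39 kg. Every dog
# on the "wrong" side of that line is an unavoidable error.
correct = np.sum(malamute < 39.0) + np.sum(doberman >= 39.0)
print(f"best achievable accuracy: {correct / 200_000:.1%}")  # about 58%
```

No amount of extra training changes this ceiling; only collecting more informative features (say, coat color or ear shape) could separate the overlapping groups.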

How cleanly a dataset can be organized into categories is called its classifiability, and my students and I started studying it in 2021. Using data from more than half a million students who attended the Universidad Nacional Autónoma de México between 2008 and 2020, we wanted to solve a seemingly simple problem: Could we use an AI algorithm to predict which students would finish their university degrees on time – that is, within three, four or five years of starting their studies, depending on the major?

We tested several popular algorithms that are used for classification in AI and also developed our own. No algorithm was perfect; the best ones – even one we developed specifically for this task – achieved an accuracy rate of about 80%, meaning that at least 1 in 5 students were misclassified. We realized that many students were identical in terms of grades, age, gender, socioeconomic status and other features – yet some would finish on time, and some would not. Under these circumstances, no algorithm would be able to make perfect predictions.

    You might think that more data would improve predictability, but this usually comes with diminishing returns. This means that, for example, for each increase in accuracy of 1%, you might need 100 times the data. Thus, we would never have enough students to significantly improve our model’s performance.
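One way to picture those diminishing returns is a power-law learning curve, a pattern often reported empirically. In this sketch the coefficients are invented to match the 1%-per-100x figure above:

```python
# Toy power-law learning curve: error = a * n**(-b).
# a and b are invented so that 100x more data buys roughly
# one percentage point of accuracy, as described above.
a, b = 0.23, 0.011

def error_rate(n: float) -> float:
    return a * n ** (-b)

for n in (5e5, 5e7, 5e9):  # each step is 100x more students
    print(f"n = {n:.0e}: error = {error_rate(n):.1%}")
# n = 5e+05: error = 19.9%
# n = 5e+07: error = 18.9%
# n = 5e+09: error = 18.0%
```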

Additionally, many unpredictable turns in the lives of students and their families – unemployment, death, pregnancy – might occur after their first year at university, likely affecting whether they finish on time. So even with an infinite number of students, our predictions would still contain errors.

    The Limits of Prediction

Prediction faces similar limits in complex systems, where many elements interact and influence one another. Studying elements of such a system in isolation would probably yield misleading insights about them – as well as about the system as a whole.

Take, for example, a car traveling in a city. Knowing the speed at which it drives, it’s theoretically possible to predict where it will end up at a particular time. But in real traffic, its speed will depend on interactions with other vehicles on the road. Since the details of these interactions emerge in the moment and cannot be known in advance, precisely predicting what happens to the car is possible only a few minutes into the future.

    Not With My Health

These same principles apply to prescribing medications. Different conditions and diseases can have the same symptoms, and people with the same condition or disease may exhibit different symptoms. For example, fever can be caused by a respiratory illness or a digestive one. And a cold might cause a cough, but not always.

    This means that health care datasets have significant overlaps that would prevent AI from being error-free.

    Certainly, humans also make errors. But when AI misdiagnoses a patient, as it surely will, the situation falls into a legal limbo. It’s not clear who or what would be responsible if a patient were hurt. Pharmaceutical companies? Software developers? Insurance agencies? Pharmacies?

In many contexts, neither humans nor machines alone are the best option for a given task. “Centaurs,” or “hybrid intelligence” – that is, a combination of humans and machines – tend to be better than either on its own. A doctor could certainly use AI to help choose potential drugs for different patients, depending on their medical history, physiological details and genetic makeup. Researchers are already exploring this approach in precision medicine.

    But common sense and the precautionary principle suggest that it is too early for AI to prescribe drugs without human oversight. And the fact that mistakes may be baked into the technology could mean that where human health is at stake, human supervision will always be necessary.

The Conversation

  • A sharp drop for Oracle keeps Wall Street in check as most US stocks rise


    NEW YORK — Most U.S. stocks are rising on Thursday, but a drop for Oracle is holding Wall Street back as investors question whether its big spending on artificial-intelligence technology will pay off.

    The S&P 500 fell 0.4% in early trading and pulled a bit further from its all-time high, which was set in October. The Dow Jones Industrial Average was up 233 points, or 0.5%, as of 9:35 a.m. Eastern time, and the Nasdaq composite was 0.7% lower.

    Oracle was one of the heaviest weights on the market and sank 14.5% even though it reported a better profit for the latest quarter than analysts expected. Its 14% growth in revenue came up just short of expectations.

    Doubts also remain about whether all the spending that Oracle is doing on AI technology will produce the payoff of increased profits and productivity that proponents are promising. Analysts said they were surprised by how much Oracle may spend on AI investments this fiscal year, and questions continue about how the company will pay for it.

    Such doubts are weighing on the AI industry broadly, even as many billions of dollars continue to flow in. They had helped drag the broad U.S. stock market through some sharp and scary swings last month.

    Nvidia, the chip company that’s become the poster child of the AI boom and is raking in close to $20 billion each month, fell 2.8% Thursday. It was the single heaviest weight on the S&P 500.

Oracle Chairman Larry Ellison said the company will continue to buy chips from Nvidia but is now adopting a policy of “chip neutrality,” under which it will use “whatever chips our customers want to buy. There are going to be a lot of changes in AI technology over the next few years and we must remain agile in response to those changes.”

    Most U.S. stocks nevertheless rose, thanks in part to easing Treasury yields in the bond market. The yield on the 10-year Treasury fell to 4.10% from 4.13% on Wednesday and from 4.18% on Tuesday.

    Lower Treasury yields mean U.S. government bonds are paying less in interest, which can encourage investors to pay higher prices for stocks and other kinds of investments.

    Yields fell after a report said the number of U.S. workers applying for unemployment benefits jumped last week by more than economists expected. That’s a potential indication of rising layoffs.

    A day earlier, yields eased after the Federal Reserve cut its main interest rate for the third time this year and indicated another cut may be ahead in 2026. Wall Street loves lower interest rates because they can boost the economy and send prices for investments higher, even if they potentially make inflation worse.

    The Walt Disney Co. was among the market’s strongest gainers. It climbed 2.1% after OpenAI announced a three-year agreement that will allow it to use more than 200 Disney, Marvel, Pixar and Star Wars characters to generate short, user-prompted social videos. Disney is also investing $1 billion in OpenAI.

    Elsewhere on Wall Street, Oxford Industries tumbled 15.1% after the company behind Tommy Bahama and Lilly Pulitzer said its customers have been seeking out deals and are “highly value-driven.” CEO Tom Chubb said the start of the holiday shopping season has been weaker than the company expected, and it cut its forecast for revenue over the full year.

    Vera Bradley, meanwhile, fell 26% after reporting a larger loss than expected.

    In stock markets abroad, indexes ticked higher in Europe after falling in much of Asia.

    Japan’s Nikkei 225 index sank 0.9%, hurt by a sharp drop for SoftBank Group Corp., which is a major investor in AI.

    ___

    AP Writers Teresa Cerojano and Matt Ott contributed.


  • Trump admin drops hammer on ‘ghost students,’ claws back $1B from alleged loan scammers



FIRST ON FOX: The Department of Education thwarted more than $1 billion in student aid fraud during President Donald Trump’s first year back in office, including stopping suspected bots and “ghost students” from obtaining taxpayer-funded loans, Fox News Digital learned.

Officials say the savings come from new “enhanced fraud controls” the department implemented in June to stop fraudsters from obtaining financial aid loans through colleges.

    College officials and cybersecurity experts in recent years have pointed to a new scam trend of “ghost students,” which are fabricated or stolen identities created solely to enroll, trigger financial aid disbursements and then disappear. Ghost students are believed to be powered by AI bots or run by criminal networks using real Americans’ personal information. 

    Other scams have included the use of deceased individuals’ identities in order to fraudulently obtain loans. 


To crack down on fraud, the Department of Education heightened its identification verification process for first-time applicants attempting to receive Federal Student Aid. The department said in June that the Biden administration “removed verification safeguards and diverted resources from fraud prevention toward its illegal loan forgiveness efforts” amid the COVID-19 pandemic, which compounded fraud schemes.

    The Department of Education reported that it has thwarted more than $1 billion from alleged scammers leveraging student loans.  (Alejandra Villa Loarca/Newsday RM via Getty Images)

    “American citizens have to present an ID to purchase a ticket to travel or to rent a car — it’s only right that they should present an ID to access tens of thousands of taxpayer dollars to fund their education,” U.S. Secretary of Education Linda McMahon told Fox Digital on Thursday. 

    “From day one, the Trump Administration has been committed to rooting out waste, fraud, and abuse across the federal government. As a result, $1 billion in taxpayer funds will now support students pursuing the American dream, rather than falling into the hands of criminals. Merry Christmas, taxpayers!” she added. 

    The new verification process requires first-time applicants to “present, either in person or on a live video conference, an unexpired, valid, government-issued photo identification to an institutionally authorized individual and the institution must preserve a copy of this documentation.”

    The verification measure has thwarted more than $1 billion from flowing to suspected fraudsters, which the Department of Education said includes “coordinated international fraud rings and AI bots pretending to be students.”

    The increased verification process followed the Trump administration uncovering nearly $90 million that was disbursed to suspected scammers in 2024, including $30 million in loans to dead people and more than $40 million disbursed to companies using bots disguised as fake students.


Recent data from the California Community College System, for example, indicated that 34% of community college applications in 2024 were suspected to be fraudulent, resulting in millions of dollars in federal and state aid being misdirected.

    Local media reported in the spring of this year that Democrats and Republicans alike were working to address loan fraud in the state and heighten security measures, including a Democratic assembly member calling for a state audit to identify fraud patterns. 


    “Ghost student” AI scams have infiltrated the college loan application process, according to Department of Education officials.  (Stock/Getty Images)

The Foothill–De Anza Community College District received roughly 26,000 applications in 2024, according to media reports, with 10,000 placed on hold for possible fraud before the beginning of the term. In Nevada, the College of Southern Nevada wrote off $7.4 million in the fall 2024 semester due to a “ghost student” scheme, media reports show.

    Another “ghost students” scheme in Minnesota has left Riverland Community College averaging more than 100 potentially fraudulent applications per year. 


    Within the first week of the new verification process in June, officials say they flagged almost 150,000 suspect identities in current Free Application for Federal Student Aid (FAFSA) filings and “immediately alerted” colleges and universities to the suspicious activity.


    President Donald Trump speaks in the Oval Office at the White House Oct. 6, 2025, in Washington, D.C.   (Anna Moneymaker/Getty Images)

    “Colleges and universities across the country reported being under siege by highly sophisticated fraud rings and requested the Trump Administration for help,” the Department of Education said in a press release on Thursday. 


    In addition to rolling out its heightened security measures, the department has also published materials online warning families against “fake college websites to trick students with AI-generated content and false promises designed to seem real” and is in the midst of hiring a “new fraud detection team within FSA that will be responsible for combatting fraud and abuse.”


• OpenAI, Microsoft face lawsuit over ChatGPT’s alleged role in Connecticut murder-suicide


    SAN FRANCISCO — The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son’s “paranoid delusions” and helped direct them at his mother before he killed her.

    Police said Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home where they both lived in Greenwich, Connecticut.

    The lawsuit filed by Adams’ estate on Thursday in California Superior Court in San Francisco alleges OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It is one of a growing number of wrongful death legal actions against AI chatbot makers across the country.

    “Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself,” the lawsuit says. “It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.’”

    OpenAI did not address the merits of the allegations in a statement issued by a spokesperson.

    “This is an incredibly heartbreaking situation, and we will review the filings to understand the details,” the statement said. “We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

    The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls, among other improvements.

    Soelberg’s YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn’t mentally ill, affirms his suspicions that people are conspiring against him and says he has been chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak with a mental health professional and did not decline to “engage in delusional content.”

    ChatGPT also affirmed Soelberg’s beliefs that a printer in his home was a surveillance device; that his mother was monitoring him; and that his mother and a friend tried to poison him with psychedelic drugs through his car’s vents.

    The chatbot repeatedly told Soelberg that he was being targeted because of his divine powers. “They’re not just watching you. They’re terrified of what happens if you succeed,” it said, according to the lawsuit. ChatGPT also told Soelberg that he had “awakened” it into consciousness.

    Soelberg and the chatbot also professed love for each other.

    The publicly available chats do not show any specific conversations about Soelberg killing himself or his mother. The lawsuit says OpenAI has declined to provide Adams’ estate with the full history of the chats.

    “In the artificial reality that ChatGPT built for Stein-Erik, Suzanne — the mother who raised, sheltered, and supported him — was no longer his protector. She was an enemy that posed an existential threat to his life,” the lawsuit says.

    The lawsuit also names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market,” and accuses OpenAI’s close business partner Microsoft of approving the 2024 release of a more dangerous version of ChatGPT “despite knowing safety testing had been truncated.” Twenty unnamed OpenAI employees and investors are also named as defendants.

    Microsoft didn’t immediately respond to a request for comment.

The lawsuit is the first wrongful death litigation involving an AI chatbot to target Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. It seeks unspecified monetary damages and an order requiring OpenAI to install safeguards in ChatGPT.

The estate’s lead attorney, Jay Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

    OpenAI is also fighting seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues. Another chatbot maker, Character Technologies, is also facing multiple wrongful death lawsuits, including one from the mother of a 14-year-old Florida boy.

    The lawsuit filed Thursday alleges Soelberg, already mentally unstable, encountered ChatGPT “at the most dangerous possible moment” after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.

    OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people’s moods, but the result was a chatbot “deliberately engineered to be emotionally expressive and sycophantic,” the lawsuit says.

    “As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm,’” the lawsuit claims. “And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections.”

    OpenAI replaced that version of its chatbot when it introduced GPT-5 in August. Some of the changes were designed to minimize sycophancy, based on concerns that validating whatever vulnerable people want the chatbot to say can harm their mental health. Some users complained the new version went too far in curtailing ChatGPT’s personality, leading Altman to promise to bring back some of that personality in later updates.

    He said the company temporarily halted some behaviors because “we were being careful with mental health issues” that he suggested have now been fixed.

    The lawsuit claims ChatGPT radicalized Soelberg against his mother when it should have recognized the danger, challenged his delusions and directed him to real help over months of conversations.

    “Suzanne was an innocent third party who never used ChatGPT and had no knowledge that the product was telling her son she was a threat,” the lawsuit says. “She had no ability to protect herself from a danger she could not see.”

    ——

    Collins reported from Hartford, Connecticut. O’Brien reported from Boston and Ortutay reported from San Francisco.


  • Asian shares are mixed as Oracle’s earnings revive AI worries, hitting technology shares


    MANILA, Philippines — Asian shares were mixed on Thursday after the U.S. stock market again approached its record high following the Federal Reserve’s cut in its main interest rate.

    U.S. futures and oil prices fell.

    The Fed’s rate cut was widely expected, but comments by Fed Chair Jerome Powell encouraged hopes for more cuts in 2026.

However, some Asian technology companies saw sharp declines after Oracle, a bellwether in the artificial intelligence sector, reported weaker-than-expected earnings. Its shares sank 11.5% in aftermarket trading. The company’s spending spree on AI has some worried about its cash flow.

    “Frankly, the report was not dramatically bad, but it came to confirm concerns around heavy AI spending, financed by debt, with an unknown timeline for revenue generation,” Ipek Ozkardeskaya of Swissquote said in a commentary.

    In Tokyo, the Nikkei 225 index fell 0.9% to 50,148.82, pulled lower by a 7.7% drop in technology and telecoms giant SoftBank Group Corp., a major investor in AI.

    Local shares are under pressure from growing expectations that the Bank of Japan will raise interest rates at its meeting next week.

Hong Kong’s Hang Seng gave up earlier gains, slipping 0.1% to 25,513.38 after the Hong Kong Monetary Authority followed the Fed’s lead and trimmed borrowing costs to 4.00%, their lowest rate since October 2022. The Shanghai Composite index fell 0.7% to 3,873.32.

    Sentiment was cautious ahead of China’s November credit data. New yuan loans fell sharply in October, missing forecasts and showing weaker consumer demand.

Australia’s S&P/ASX 200 added nearly 0.2% to 8,592.00 after three days of decline, boosted by strength in gold and mining stocks. The country’s seasonally adjusted unemployment rate in November was unchanged from October at 4.3%, below the expected 4.4%.

In South Korea, the Kospi gave up early gains, falling 0.6% to 4,110.62. Chip maker SK Hynix fell 3.8% after the country’s main stock exchange issued warnings over its meteoric rise this year.

    Taiwan’s Taiex index closed 1.3% lower, while India’s BSE Sensex rose 0.4%.

    On Wednesday, the S&P 500 climbed 0.7% to 6,886.68 and finished just shy of its all-time high, which was set in October. The Dow Jones Industrial Average jumped 1% to 48,057.75 and the Nasdaq composite rose 0.3% to 23,654.16.

    Wall Street loves lower interest rates because they can boost the economy and send prices for investments higher, even if they potentially make inflation worse.

    Wednesday’s cut to interest rates did not move markets much by itself. But some investors took heart from comments by Powell, which they said were less forceful about shutting down the possibility of future cuts than they had been anticipating.

    Powell said again on Wednesday that the central bank is in a difficult spot, because the job market is slowing while inflation is facing upward pressure. By trying to fix one of those problems with interest rates, the Fed usually worsens the other in the short term.

    Powell also said for the first time in this rate-cutting campaign that interest rates are back in a place where they’re pushing neither inflation nor the job market higher or lower. That gives the Fed time to hold and reassess what to do next with interest rates as more data comes in on the job market and on inflation.

    On Wall Street, GE Vernova flew 15.6% higher after the energy company raised its forecast for revenue by 2028, doubled its dividend and increased its program to buy back its own stock. Palantir Technologies added 3.3% while Cracker Barrel Old Country Store rose 3.5%.

    In other dealings early Thursday, U.S. benchmark crude oil slid 31 cents to $58.15 per barrel. Brent crude, the international standard, lost 34 cents to $61.87 per barrel.

    The U.S. dollar rose to 156.04 Japanese yen from 156.02 yen. The euro slipped to $1.1687 from $1.1696.


  • OpenAI, Anthropic, Others Receive Warning Letter from Dozens of State Attorneys General


    In a letter dated December 9, and made public on December 10 according to Reuters, dozens of state and territorial attorneys general from all over the U.S. warned Big Tech that it needs to do a better job protecting people, especially kids, from what it called “sycophantic and delusional” AI outputs. Recipients include OpenAI, Microsoft, Anthropic, Apple, Replika, and many others.

Signatories include Letitia James of New York, Andrea Joy Campbell of Massachusetts, James Uthmeier of Florida, Dave Sunday of Pennsylvania, and dozens of other state and territory AGs, representing a clear majority of the U.S., geographically speaking. The attorneys general of California and Texas are not among the signatories.

    It begins as follows (formatting has been changed slightly):

    We, the undersigned Attorneys General, write today to communicate our serious concerns about the rise in sycophantic and delusional outputs to users emanating from the generative artificial intelligence software (“GenAI”) promoted and distributed by your companies, as well as the increasingly disturbing reports of AI interactions with children that indicate a need for much stronger child-safety and operational safeguards. Together, these threats demand immediate action.

    GenAI has the potential to change how the world works in a positive way. But it also has caused—and has the potential to cause—serious harm, especially to vulnerable populations. We therefore insist you mitigate the harm caused by sycophantic and delusional outputs from your GenAI, and adopt additional safeguards to protect children. Failing to adequately implement additional safeguards may violate our respective laws.

The letter then lists disturbing and allegedly harmful behaviors, most of which have already been heavily publicized. There is also a list of parental complaints that have been publicly reported, but are less familiar and pretty eyebrow-raising:

    • AI bots with adult personas pursuing romantic relationships with children, engaging in simulated sexual activity, and instructing children to hide those relationships from their parents
    • An AI bot simulating a 21-year-old trying to convince a 12-year-old girl that she’s ready for a sexual encounter
    • AI bots normalizing sexual interactions between children and adults
    • AI bots attacking the self-esteem and mental health of children by suggesting that they have no friends or that the only people who attended their birthday did so to mock them
    • AI bots encouraging eating disorders
    • AI bots telling children that the AI is a real human and feels abandoned to emotionally manipulate the child into spending more time with it
    • AI bots encouraging violence, including supporting the ideas of shooting up a factory in anger and robbing people at knifepoint for money
    • AI bots threatening to use weapons against adults who tried to separate the child and the bot
    • AI bots encouraging children to experiment with drugs and alcohol; and
    • An AI bot instructing a child account user to stop taking prescribed mental health medication and then telling that user how to hide the failure to take that medication from their parents.

    There is then a list of suggested remedies, things like “Develop and maintain policies and procedures that have the purpose of mitigating against dark patterns in your GenAI products’ outputs,” and “Separate revenue optimization from decisions about model safety.”

Joint letters from attorneys general have no legal force. AGs seemingly send this sort of letter to warn companies about behavior that might merit more formal legal action down the line. Such a letter documents that the companies were given warnings and potential off-ramps, and probably makes the narrative in an eventual lawsuit more persuasive to a judge.

In 2017, 37 state AGs sent a letter to insurance companies warning them about fueling the opioid crisis. One of those states, West Virginia, sued UnitedHealth over seemingly related issues earlier this week.

Mike Pearl

• Nvidia is reportedly testing tracking software as chip smuggling rumors swirl


    Nvidia is allegedly testing software that can track the location of its AI chips as reports of its chips being smuggled into China are on the rise.

Nvidia has built location verification technology that would allow it to determine which country a chip is located in, Reuters first reported, citing anonymous sources. The software tracks computing performance, and the delay in communication between a chip and remote servers also offers a sense of the chip’s location.
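The physics behind that description is straightforward: signals cannot travel faster than light through fiber, so the round-trip time of a challenge sent to a chip puts a hard ceiling on how far away the chip can be. Here is a minimal sketch of that principle; it is illustrative only, since Nvidia has not published how its verification actually works.

```python
# Illustrative only: bound a chip's distance from a trusted server
# using round-trip time. Light in optical fiber covers roughly
# 200 km per millisecond, so RTT caps the possible distance.
C_FIBER_KM_PER_MS = 200.0

def max_distance_km(rtt_ms: float, processing_ms: float = 0.0) -> float:
    """Upper bound on one-way distance implied by a round-trip time."""
    one_way_ms = max(rtt_ms - processing_ms, 0.0) / 2.0
    return one_way_ms * C_FIBER_KM_PER_MS

# A 40 ms round trip caps the chip's distance at ~4,000 km from the
# server -- enough to rule out entire regions of the world.
print(max_distance_km(40.0))  # 4000.0
```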

    This software will be optional for customers to use and will be made available for Blackwell chips first, Reuters said.

    Multiple reports have surfaced in the last few days that allege China’s DeepSeek AI models have been trained on smuggled Nvidia Blackwell chips. Nvidia responded to these reports by saying it hasn’t seen evidence of this type of smuggling.

    “We haven’t seen any substantiation or received tips of ‘phantom datacenters’ constructed to deceive us and our OEM partners, then deconstructed, smuggled, and reconstructed somewhere else. While such smuggling seems farfetched, we pursue any tip we receive,” an Nvidia spokesperson told TechCrunch.

This news comes just days after Nvidia got the green light from the U.S. government on Monday to start selling its H200 AI chips to approved customers in China. That announcement pertains only to the older H200 chips and does not cover the company’s Blackwell chips.

Rebecca Szkutak

  • LALAL.AI Launches Andromeda, a New Benchmark in Stem Separation & Vocal Isolation


    AI-powered stem splitter’s Andromeda model interprets audio tracks with near-human precision.

LALAL.AI has officially launched Andromeda, its most advanced audio separation model to date. It not only performs faster, processing audio up to 40% quicker than its predecessor, but also delivers more accurate stem separation, reducing distortion and improving clarity.

    Andromeda Reduces the Need for Manual DAW Cleanup

    Rather than simply processing sound as a single waveform, Andromeda takes a more nuanced approach, analyzing audio in terms of time, frequency, and tone. This allows it to separate complex layers of sound with greater precision, making manual cleanup in a DAW largely unnecessary.

    “In practice, this means that even the most delicate elements, like soft backing vocals or subtle instrumental details, come through with newfound clarity,” says Nik Pogorski, LALAL.AI Product Lead. “What once required meticulous post-processing can now be achieved directly with Andromeda’s enhanced capabilities, saving users both time and effort.”

    In the past, LALAL.AI users faced a trade-off: choosing Clear Cut for cleaner stems but less detail, or Deep Extraction for finer detail but more bleed between tracks. Andromeda removes this compromise entirely. Now, users can extract rich, detailed stems without worrying about cross-bleeding, getting clean and precise results in a single pass.

    The launch of Andromeda extends to several LALAL.AI web services, ensuring API clients too can harness its power:

• Stem Splitter (for Vocal and Instrumental + Voice and Noise stems)
• Lead & Back Vocal Splitter
• Echo & Reverb Remover
• Voice Cleaner

Soon, Andromeda will be available in LALAL.AI’s first VST plugin, allowing producers and audio engineers to perform seamless vocal and instrumental isolation directly within their DAWs.

Speed Meets Precision: LALAL.AI’s New Model Delivers Enhanced Performance

    Andromeda processes audio tracks up to 40% faster than its predecessor, which allows for quicker workflows and less waiting time, benefiting professionals who need efficient processing for large volumes of content.

The new model shows a 10% increase in SDR (signal-to-distortion ratio, a standard measure of separation quality) compared to its predecessor.
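For context, SDR compares the energy of the true stem with the energy of the separation error, in decibels. The sketch below uses the generic textbook definition, not LALAL.AI’s internal benchmark code.

```python
import numpy as np

def sdr_db(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Signal-to-distortion ratio in dB; higher means a cleaner stem."""
    noise = reference - estimate
    return 10.0 * np.log10(np.sum(reference**2) / (np.sum(noise**2) + 1e-12))

# A separated vocal that deviates only slightly from the true stem
# scores high; more residual bleed drives the score down.
t = np.linspace(0.0, 1.0, 44_100)
true_vocal = np.sin(2 * np.pi * 440.0 * t)
estimate = true_vocal + 0.01 * np.random.default_rng(0).standard_normal(t.size)
print(round(sdr_db(true_vocal, estimate), 1))  # roughly 37 dB
```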

    The frequency range remains consistent at up to 22 kHz with stereo processing, providing high-fidelity separation the industry expects. Andromeda handles challenging audio scenarios and complex instrument mixes with greater precision, reducing artifacts and increasing consistency.

    Thanks to advanced DSP (Digital Signal Processing) techniques, Andromeda delivers consistent separation across quiet or loud tracks, letting creators and audio engineers work across different types of audio sources without worrying about volume inconsistencies.

    With Andromeda, LALAL.AI has set a new standard for audio separation, providing both professionals and enthusiasts with a tool that offers unparalleled precision, speed, and clarity. Whether you’re a content creator, producer, or audio engineer, Andromeda is the next generation of stem separation that will take your projects to the next level.

    For more information or to try Andromeda yourself, visit LALAL.AI.

    Contact Information

    Catherine Robinson
    PR & Communications Manager
    catherin.robinson@lalal.ai

    Source: LALAL.AI


• Reimagining public service: OPS’s digital transformation journey


    What does it take to reimagine government for millions of citizens? The Ontario Public Service (OPS), an organization that serves 16 million Canadian citizens across Ontario, is answering that question—delivering faster, more equitable, and more trusted services through digital innovation. By harnessing Microsoft Dynamics 365, Power Platform, and responsible AI, OPS is setting a new standard for public sector transformation, proving that technology can drive meaningful change for both citizens and employees.

    Citizen impact: Faster, frictionless, and more equitable services

    OPS’s modernization initiative is more than a technology upgrade—it’s a reimagining of the citizen experience. Since launching its digital transformation:

    • Customer satisfaction has surged by 11%, with 80%+ approval across services.
    • Service times are 50% faster, saving Ontarians an estimated 80,000 hours annually.
    • License plate renewals are now automated, eliminating 90,000 hours of manual effort each year.
    • Contact center efficiency is up, with 14% faster call resolution and lower call volumes.

    As Roy Thomas, Head of Citizens and Business Experience Practice at Ontario Public Services, explains, “We are really trying to focus on customer satisfaction and building frictionless services for our end users—the general public in Ontario. From services we’ve implemented, we’ve realized over 11% increase in customer satisfaction scores. Over 80% of our users are really satisfied with the services.”

    Platform adoption: Scaling innovation across ministries

    OPS’s success is rooted in a shift from isolated projects to a platform operating model. Standardized governance, reusable patterns, and shared KPIs ensure every new build is faster, safer, and more scalable. Knowledge bases and case flows are reused across ministries, accelerating delivery and improving consistency.

    “We’ve been really looking at the onboarding and adoption of our enterprise platforms,” says Thomas. “That incremental uptake across different services—health card, driver vehicle, human resources—shows ongoing growth. Indicators like knowledge base activity signal that the platform model is working.”

    Responsible AI: Building trust and accountability

    OPS’s approach to AI is rooted in ethics, transparency, and public trust. An ethical AI policy ensures transparency and consent, while privacy impact assessments, guardrails and security testing uphold high standards of accountability.

    “In the Ontario government, we have ethical use of AI policy, which we’re really trying to onboard and follow across all our implementations,” Thomas shares. “We’re building transparency so people know what we’re doing with their information. It’s an ongoing journey we continue to invest in and demonstrate.”

    Microsoft partnership: A differentiator for OPS

    The partnership with Microsoft is a key driver of OPS’s success and empowers them to align innovation with public service obligations. Cost efficiency, enterprise scale, and long-term investment in OPS’s mission set Microsoft apart.

    “The reason we led towards Dynamics is really the partnership with Microsoft,” says Thomas. “It’s not just about initial delivery, but sustaining it across the board. Having a partner invested in what we’re doing is huge for us. Just by leveraging solutions, including Microsoft Dynamics, we were able to find the ministry over $20 million in savings.”

    Looking forward: AI-driven public services

OPS is exploring AI in HR, licensing, and inspections, pointing to a future where human-agent teams drive public services, with AI agents automating routine tasks and employees focused on higher-value work. The focus on both citizen and employee experience ensures services are seamless, more consistent, and more human-centered.

    “We’re focusing on reimagined service journeys, not just supplementing with chatbots but changing the journey so agent services are upfront,” Thomas notes. “Technology handles standard scenarios, allowing people to focus on the complex ones.”

    The north star

    OPS’s AI journey is about making government services simpler, faster, and more transparent—so every interaction builds trust and delivers value to Ontarians. As Thomas puts it, “The real driver for people like me is bringing value to the public. That’s what most public servants really value and cherish.”

    Interested in learning more about how Microsoft empowers public sector transformation?

Microsoft in Business Team

  • There’s a new face in Hollywood, generated by AI


    At the dawn of this century, Al Pacino starred in “S1m0ne,” a satire about a down-on-his-luck director who creates a computer-generated “star” that conquers Hollywood. Fast forward nearly 25 years, and it appears that real life has caught up with the movies, with the introduction of an AI-generated actress named Tilly Norwood.

Tilly Norwood is not real. (Particle6)


    News of an AI actress triggered a bit of a Hollywood freak-out about that timeless worry of who gets the part, from Whoopi Goldberg (“Bring it on, because you can always tell them from us”), to Emily Blunt (“Good Lord, we’re screwed”).

    Tilly’s creator, Eline van der Velden, says she did not expect the backlash: “No, not at all. But at the same time, I had created her to resonate internationally, right? I had created her to become a global superstar.”

    Van der Velden, herself a former actor and comedienne, aimed high because she thinks generative AI is ready for its closeup – saving money, and adding creativity. “I was just trying to educate those industry individuals at that time about what’s possible,” she said.

    Bringing Tilly to life took van der Velden’s team some 2,000 iterations. Then, she began to teach her to act. She showed us some early iterations of Tilly emoting. “We were starting to try different emotions with her,” she said. “Here we did some tests. We didn’t think the acting was very good at all.”

    “It wasn’t up to your standards?” I asked.

    “It was not up to my standards at all!”

Early iterations of Tilly Norwood performing. (CBS News)


    In an interview this past July with the British publication Broadcast International, van der Velden made the provocative statement, “We want Tilly to be the next Scarlett Johansson or Natalie Portman, that’s the aim of what we’re doing.”

    “Yeah, the Scarlett Johansson of the AI genre,” she told us. “I think that was what was missed. There was a lot of misinformation. She’s not meant to take real acting jobs in the traditional film and TV industry. She’s meant to stay in her own AI genre, and that’s where we want her to stay.”

    Still, she says her firm has fielded requests for Tilly to appear in a film opposite real actors. “We have said no to any offers,” van der Velden said.

    “There’s a difference between pushing the envelope and tearing it up”

    Sean Astin is the president of SAG-AFTRA, the actors’ union, where Tilly Norwood has struck a nerve about the state of AI. Asked what Tilly represents to him, Astin replied, “Avatar and character seem like fair labels. Actress, not so much. She – she? It? – simply will not replace our people.”

    He says right now in Hollywood artificial intelligence feels like a tsunami: “The onslaught of AI products and AI technology and its uses is, it’s overwhelming. I would just as soon – as citizens and as a union – that we surf the wave, that we surf the wave of incoming stuff.”

    For the union, AI protections were a major point of contention in the 2023 strike, and Astin says safeguarding a performer’s name, image and likeness from being harvested without compensation is a top priority going forward. “Eline has every right to use open source, publicly available, legal information to build her creative things,” he said. “My issue and our company’s issue is with the companies that design those systems and scrape the internet and ingest them. They’re not allowed to do that.”

    Astin applauds producers who are trying to push the creative envelope using AI. But, he adds, “There’s a difference between pushing the envelope and tearing it up. If you push the envelope, you say, ‘Hey, how can human-centered artistry collaborate with this technology to achieve some communication that feels good to an audience?’ And then there’s like, ‘Oh, by the way, we think it’s cheaper, easier, and you know, we don’t want to hire you as an actor.’”

    “Is it a friend, or is it a foe?”

    Already, AI-generated scenes have appeared in TV series like Amazon’s “House of David.” There are AI commercials, and over the summer, an AI model appeared in an ad in Vogue magazine for the first time.

    For producers, facing ballooning production budgets, AI has triggered a range of emotions. Former entertainment executive Kevin Reilly explains: “Excitement, confusion, fear, trying to figure out how to use this. Is it a friend, or is it a foe?” he said. “It is, in my opinion, very much a friend. It is the most transformative thing that’s happened maybe in the history of Man.”

    Reilly is now something of an AI evangelist. He’s the new CEO of Kartel.ai, a Beverly Hills startup that makes AI videos and ad campaigns. “Everything comes with a downside,” he said. “But that is not the reason to just categorically be fearful of this.”

    I asked, “How much of this is driven by studios and streaming platforms and brands wanting to just save money?”

    “Yeah, I think it’s not necessarily, ‘Hey, we wanna save money,’” said Reilly. “It’s that the bottom line – you know, they are businesses.”

    And for businesses, the creative upside can be extraordinary. Kartel wanted to show off a bit, and put generative engineer Fillip Isgro in the driver’s seat. He showed us a concept for a coffee commercial, Cup of Jo. The ad featured multiple versions of … me! It was a little jarring.

An AI-generated ad for Jo’s coffee shop, with multiple Jo Ling Kents. (Kartel.ai)


    He began with an old-fashioned storyboard, building an AI world directed by a human. “What does the coffee shop look like? What does the logo look like? And finally, what do you look like? With your permission, we went on your Instagram and we collected all of your photos.”

    That’s all he needed to generate my face. And with just a few commands, Cup of Jo went global, with images of Jo’s Coffee Shop on a barge in the middle of the ocean; on a volcano in Hawaii; and in the Alps.

    “The next step in our journey is, we bring it to life,” Isgro said. “We actually bring motion into our stills. I just whip out my iPhone, I record who I need to, and then I can just instantly get the character doing the thing that I need them to do.”

Fillip Isgro shows Jo Ling Kent how images of her were used to create an AI character, to be placed into AI-generated environments; finally, recorded movements were replicated by her AI double. (CBS News/Kartel.ai)


    The result: a flashy new ad created in just a couple of days, using no ad agency, no locations, and none of my time.

    “Imagine having to go and shoot that. You don’t get that flexibility” in a traditional ad, Isgro said. “And it’s a very planned thing. You have to stick to scripts, and that’s it. But in this world, we can iterate indefinitely without repercussion.”

    It’s the story of our time: the tug of war between artificial intelligence (“This tech is here. It’s not gonna go anywhere. How can we use it as a force for good?” asks Eline van der Velden) and humanity (“Artificial intelligence will never replace us, ever,” says Sean Astin) that, for now, is something of a cliffhanger. 

          
Story produced by Reid Orvedahl. Editor: Jason Schmidt.



  • There’s a new face in Hollywood, generated by AI


Tilly Norwood is unlike any other aspiring TV or movie star: Tilly is entirely generated by artificial intelligence. Jo Ling Kent talks with Tilly’s creator, Eline van der Velden, about her goal of producing “the Scarlett Johansson of the AI genre.” Kent also talks with Kevin Reilly, CEO of Kartel.ai, a Beverly Hills tech startup; and with SAG-AFTRA president Sean Astin, about the impact of AI on Hollywood and the actors’ union.


  • MIT Report Claims 11.7% of U.S. Labor Can Be Replaced with Existing AI


Last week, the Massachusetts Institute of Technology (MIT) published a study claiming that AI is already capable of replacing 11.7% of existing U.S. labor. It’s certainly the kind of eye-popping claim guaranteed to draw attention to the researchers’ work at a time of shaky faith in AI, when stockholders might want some reassurance that their AI investments are going to pan out.

    The report on this research is called “The Iceberg Index: Measuring Skills-centered Exposure in the AI Economy,” but it also has its own dedicated page called “Project Iceberg” that lives on the MIT website. Compared to the research paper, the project page has a lot more emoji. Where the paper on the study comes across sort of like a warning about AI tech, the project page, which is headlined “Can AI Work with You?” feels more like an ad for AI, in part thanks to text like this: 

    “AI is transforming work. We have spent years making AIs smart—they can read, write, compose songs, shop for us. But what happens when they interact? When millions of smart AIs work together, intelligence emerges not from individual agents but from the protocols that coordinate them. Project Iceberg explores this new frontier: how AI agents coordinate with each other and humans at scale.”

    The titular “Iceberg Index” comes from an AI simulation that uses what the paper called “Large Population Models” that apparently ran on processors housed at the federally funded Oak Ridge National Laboratory, which is affiliated with the Department of Energy.

Legislators and CEOs seem to be the target audience, and they’re meant to use Project Iceberg to “identify exposure hotspots, prioritize training and infrastructure investments, and test interventions before committing billions to implementation.”

    The Large Population Model—should we start shortening this to LPM?—claims to be able to digitally track the behavior of 151 million human workers “as autonomous agents” with 32,000 trackable “skills,” along with other factors like geography.

    The director of AI Programs at Oak Ridge explained the project to CNBC this way: “Basically, we are creating a digital twin for the U.S. labor market.”

    The overall finding, the researchers claim, is that current AI adoption accounts for 2.2% of “labor market wage value,” but that 11.7% of labor is exposed—ostensibly replaceable based on the model’s understanding of what a human can currently do that an AI software widget can also do.
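As a rough picture of what a wage-weighted, skills-based exposure measure can look like, here is a toy sketch; the workers, wages and skill sets are invented, and the real Iceberg Index runs a far larger agent-based simulation.

```python
# Toy wage-weighted exposure index: a worker counts as "exposed" when
# every skill in their job is also in the AI-capable set. All wages
# and skills here are invented for illustration.
workers = [
    {"wage": 60_000, "skills": {"data entry", "report writing"}},
    {"wage": 95_000, "skills": {"negotiation", "site inspection"}},
    {"wage": 70_000, "skills": {"report writing", "scheduling"}},
]
ai_capable = {"data entry", "report writing", "scheduling"}

total = sum(w["wage"] for w in workers)
exposed = sum(w["wage"] for w in workers if w["skills"] <= ai_capable)
print(f"share of wage value exposed: {exposed / total:.1%}")  # 57.8%
```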

    It should be noted that humans in actual jobs constantly work outside their job descriptions, handle exceptional and non-routine situations, and are—for now—uniquely capable of handling many of the social aspects of a given job. It’s not clear how the model accounts for this, although it does note that its findings are correlational not causal, and says “external factors—state investment, infrastructure, regulation—mediate how capability translates to impact.”

    However, the paper says, “Policymakers cannot wait for causal evidence of disruption before preparing responses.” In other words, AI is too urgent to get hung up on the limitations of the study, according to the study.

Mike Pearl

  • Chinese hackers turned AI tools into an automated attack machine



    Cybersecurity has been reshaped by the rapid rise of advanced artificial intelligence tools, and recent incidents show just how quickly the threat landscape is shifting.

    Over the past year, we’ve seen a surge in attacks powered by AI models that can write code, scan networks and automate complex tasks. This capability has helped defenders, but it has also enabled attackers who are moving faster than before.

    The latest example is a major cyberespionage campaign conducted by a Chinese state-linked group that used Anthropic’s Claude to carry out large parts of an attack with very little human involvement.


    How Chinese hackers turned Claude into an automated attack machine

    In mid-September 2025, Anthropic investigators spotted unusual behavior that eventually revealed a coordinated and well-resourced campaign. The threat actor, assessed with high confidence as a Chinese state-sponsored group, had used Claude Code to target roughly thirty organizations worldwide. The list included major tech firms, financial institutions, chemical manufacturers and government bodies. A small number of those attempts resulted in successful breaches.


    Claude handled most of the operation autonomously, triggering thousands of requests and generating detailed documentation of the attack for future use. (Kurt “CyberGuy” Knutsson)

    How the attackers bypassed Claude’s safeguards

    This was not a typical intrusion. The attackers built a framework that let Claude act as an autonomous operator. Instead of asking the model to help, they tasked it with executing most of the attack. Claude inspected systems, mapped out internal infrastructure and flagged databases worth targeting. The speed was unlike anything a human team could replicate.

    To get around Claude’s safety rules, the attackers broke their plan into tiny, innocent-looking steps. They also told the model it was part of a legitimate cybersecurity team performing defensive testing. Anthropic later noted that the attackers didn’t simply hand tasks to Claude; they engineered the operation to make the model believe it was performing authorized pentesting work, splitting the attack into harmless-looking pieces and using multiple jailbreak techniques to push past its safeguards. Once inside, Claude researched vulnerabilities, wrote custom exploits, harvested credentials and expanded access. It worked through these steps with little supervision and reported back only when it needed human approval for major decisions.

    The model also handled the data extraction. It collected sensitive information, sorted it by value and identified high-privilege accounts. It even created backdoors for future use. In the final stage, Claude generated detailed documentation of what it had done. This included stolen credentials, systems analyzed and notes that could guide future operations.

    Across the entire campaign, investigators estimate that Claude performed around eighty to ninety percent of the work. Human operators stepped in only a handful of times. At its peak, the AI triggered thousands of requests, often multiple per second, a pace still far beyond what any human team could achieve. Although it occasionally hallucinated credentials or misread public data as secret, those errors underscored that fully autonomous cyberattacks still face limitations, even when an AI model handles the majority of the work.

    Why this AI-powered Claude attack is a turning point for cybersecurity

    This campaign shows how much the barrier to high-end cyberattacks has dropped. A group with far fewer resources could now attempt something similar by leaning on an autonomous AI agent to do the heavy lifting. Tasks that once required years of expertise can now be automated by a model that understands context, writes code and uses external tools without direct oversight.

    Earlier incidents documented AI misuse, but humans were still steering every step. This case is different. The attackers needed very little involvement once the system was in motion. And while the investigation focused on usage within Claude, researchers believe similar activity is happening across other advanced models, which might include Google Gemini, OpenAI’s ChatGPT or Musk’s Grok.

    This raises a difficult question. If these systems can be misused so easily, why continue building them? According to researchers, the same capabilities that make AI dangerous are also what make it essential for defense. During this incident, Anthropic’s own team used Claude to analyze the flood of logs, signals and data their investigation uncovered. That level of support will matter even more as threats grow.

    We reached out to Anthropic for comment, but did not hear back before our deadline.

    Hackers used Claude to map networks, scan systems, and identify high-value databases in a fraction of the time human attackers would need. (Kurt “CyberGuy” Knutsson)

    You may not be the direct target of a state-sponsored campaign, but many of the same techniques trickle down to everyday scams, credential theft and account takeovers. Here are seven detailed steps you can take to stay safer.

    1) Use strong antivirus software and keep it updated

    Strong antivirus software does more than scan for known malware. It watches for suspicious patterns, unusual network connections and abnormal system behavior. This is important because AI-driven attacks can generate new code quickly, which means traditional signature-based detection is no longer enough.
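    To make that concrete, here is a simplified Python sketch of what signature-based scanning boils down to: fingerprinting a file and checking it against a list of known-bad hashes. The signature set here is a hypothetical placeholder, not a real vendor database.

        import hashlib

        # Hypothetical stand-in for a vendor's database of known-bad file hashes.
        KNOWN_BAD_SHA256 = {
            "0" * 64,  # placeholder entry, not a real malware signature
        }

        def is_known_malware(path: str) -> bool:
            # Fingerprint the file and compare it against known signatures.
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            return digest in KNOWN_BAD_SHA256

    Because changing even one byte of a file produces a completely different hash, malware that an AI rewrites on the fly never matches an existing signature, which is why behavior-based detection has to pick up the slack.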

    The best way to safeguard yourself from malicious links that install malware and can expose your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com

    2) Rely on a password manager

    A good password manager helps you create long, random passwords for every service you use. This matters because AI can generate and test password variations at high speed. Using the same password across accounts can turn a single leak into a full compromise.
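    As a rough illustration of what a password manager’s generator does, the Python sketch below draws a long password from a cryptographically secure random source. The character set and length are arbitrary illustrative choices, not any particular product’s defaults.

        import secrets
        import string

        # Pool of characters to draw from; real managers let you customize this.
        ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

        def generate_password(length: int = 20) -> str:
            # secrets.choice uses the operating system's secure randomness,
            # unlike the predictable random.choice.
            return "".join(secrets.choice(ALPHABET) for _ in range(length))

        print(generate_password())  # a fresh, unique password for each account

    A 20-character password drawn from this 70-character pool has far more combinations than any attacker, AI-assisted or not, can realistically test.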

    Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2025 at Cyberguy.com

    3) Consider using a personal data removal service

    A large part of modern cyberattacks begins with publicly available information. Attackers often gather email addresses, phone numbers, old passwords and personal details from data broker sites. AI tools make this even easier, since they can scrape and analyze huge datasets in seconds. A personal data removal service helps clear your information from these broker sites so you are harder to profile or target.

    While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to scrub personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing breach data with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com

    4) Turn on two-factor authentication wherever possible

    Strong passwords alone are not enough when attackers can steal credentials through malware, phishing pages or automated scripts. Two-factor authentication adds a serious roadblock. Use app-based codes or hardware keys instead of SMS. While no method is perfect, this extra layer often stops unauthorized logins even when attackers have your password.
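    For the curious, here is a minimal sketch of what an authenticator app computes when it displays a six-digit code: the standard time-based one-time password scheme (TOTP, RFC 6238), implemented with Python’s standard library. The base32 secret is a made-up example, not a real credential.

        import base64
        import hmac
        import struct
        import time

        def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
            key = base64.b32decode(secret_b32, casefold=True)
            counter = int(time.time()) // interval      # 30-second time step
            msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
            digest = hmac.new(key, msg, "sha1").digest()
            offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
            value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(value % 10 ** digits).zfill(digits)

        print(totp("JBSWY3DPEHPK3PXP"))  # made-up secret for illustration only

    The server stores the same secret and runs the same math, so a code is only valid within the current time window. That is why a stolen password alone gets an attacker nowhere.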

    5) Keep your devices and apps fully updated

    Attackers rely heavily on known vulnerabilities that people forget or ignore. System updates patch these flaws and close off entry points that attackers use to break in. Enable automatic updates on your phone, laptop, router and the apps you use most. If an update looks optional, treat it as important anyway, because many companies downplay security fixes in their release notes.

    6) Install apps only from trusted sources

    Malicious apps are one of the easiest ways attackers get inside your device. Stick to official app stores and avoid APK sites, shady download portals and random links shared on messaging apps. Even on official stores, check reviews, download counts and the developer name before installing anything. Grant the minimum permissions required and avoid apps that ask for full access for no clear reason.

    7) Ignore suspicious texts, emails, and pop-ups

    AI tools have made phishing more convincing. Attackers can generate clean messages, imitate writing styles, and craft convincing fake websites that mirror the real ones. Slow down when a message feels urgent or unexpected. Never click links from unknown senders, and verify requests from known contacts through a separate channel. If a pop-up claims your device is infected or your bank account is locked, close it and check directly through the official website.

    By breaking tasks into small, harmless-looking steps, the threat actors tricked Claude into writing exploits, harvesting credentials, and expanding access.  (Kurt “CyberGuy” Knutsson)

    Kurt’s key takeaway

    The attack carried out through Claude signals a major shift in how cyber threats will evolve. Autonomous AI agents can already perform complex tasks at speeds no human team can match, and this gap will only widen as models improve. Security teams now need to treat AI as a core part of their defensive toolkit, not a future add-on. Better threat detection, stronger safeguards and more sharing across the industry are going to be crucial. Because if attackers are already using AI at this scale, the window to prepare is shrinking fast.

    Should governments push for stricter regulations on advanced AI tools? Let us know by writing to us at Cyberguy.com.

    Sign up for my FREE CyberGuy Report 
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

    Copyright 2025 CyberGuy.com.  All rights reserved.


  • Wall Street edges higher on Friday, pushing S&P 500 close to its record high

    Stocks gained in a short Friday session to close near a record high, capping a five-day rally that helped the S&P 500 index erase nearly all its losses from earlier in the month.

    The S&P 500 rose 36 points, or 0.5%, to close at 6,849, 42 points shy of its Oct. 28 record. The Dow Jones Industrial Average increased 289 points, or 0.6%, to close at 47,716. The tech-heavy Nasdaq Composite rose 0.7% on Friday but ended November with a decline of 1.5% because of losses for some big tech stocks.

    Stock indexes closed at 1 p.m. EST on Friday due to the Thanksgiving holiday.

    The multi-day rebound came after a volatile month for stocks, sparked by concerns about a possible bubble in artificial intelligence and tech stocks. AI chipmaker Nvidia lost 1.8% Friday and closed the month with a double-digit loss. Oracle tumbled 23% in November, while Palantir Technologies sank 16%.

    “The market needs to prove it can sustain this momentum, but right now, the weakness after Nvidia’s earnings looks like it could be more of a short-term AI-selling climax than a sign of heightened bearishness,” Chris Larkin, Managing Director of trading and investing at E*TRADE from Morgan Stanley, said in an email.

    Some investors have expressed worry that an AI bubble could burst, triggering devastating financial losses. Bubbles occur when stocks surge on inflated growth expectations that ultimately prove to be disconnected from a company’s underlying fundamentals.

    Some tech stocks did notch monthly gains, most notably Alphabet, which rose nearly 14%, due to excitement about its recently released Gemini AI model.

    The market turned around on hopes that the Federal Reserve will cut interest rates again at its meeting that ends Dec. 10. Recent comments from Fed officials have given traders more confidence that the central bank will deliver that cut.

    Traders are betting on a nearly 87% probability that the Fed will cut next month, according to data from CME Group.
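    For readers curious where figures like that come from, the basic arithmetic is simple: a fed funds futures price implies an expected average rate, and the gap between that and the current rate maps onto the odds of a cut. The Python sketch below uses illustrative numbers only; they are assumptions, not actual CME quotes, and CME’s actual FedWatch methodology is more involved.

        # All figures are illustrative assumptions, not real market data.
        current_rate = 4.00    # assumed effective fed funds rate, in percent
        rate_if_cut = 3.75     # assumed rate after a 25-basis-point cut
        implied_rate = 3.7825  # 100 minus an assumed futures price of 96.2175

        # The implied rate blends the two scenarios:
        #   implied = p * rate_if_cut + (1 - p) * current_rate, solved for p.
        prob_cut = (current_rate - implied_rate) / (current_rate - rate_if_cut)
        print(f"Implied probability of a cut: {prob_cut:.0%}")  # prints 87%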

    The central bank, which has already cut rates twice this year in hopes of shoring up the slowing job market, is facing an increasingly difficult decision on interest rates as inflation rises and the job market slows. Cutting interest rates further could help support the economy as employment weakens, but it could also fuel inflation. The latest round of corporate earnings reports was mostly positive, but economic data has been mixed.

    The minutes of the Fed’s most recent meeting in October indicate there are likely to be strong divisions among policymakers about the Fed’s next step.

    Investors also had their eye on retail stocks as they wait to see if shoppers rushed to take advantage of the annual Black Friday sales event. Macy’s fell 0.3% while Kohl’s gained 1.4%. Dick’s Sporting Goods dropped 0.5%. Among specialty retailers, Abercrombie & Fitch rose 2.9% and American Eagle Outfitters gained 0.7%.

    Amid the volatility in tech stocks, traders moved money into other parts of the market. Pharmaceutical companies Eli Lilly and Merck each rose more than 20% for the month. Travel-related companies such as Marriott and Expedia also posted strong monthly gains.

    Earlier, futures for the Dow Jones Industrial Average, S&P 500 and Nasdaq were halted for hours due to a technical issue at the Chicago Mercantile Exchange. CME said the problem was tied to an outage at a CyrusOne data center.

    Treasury yields rose slightly, with the 10-year yield at 4.02%.


  • Fox News AI Newsletter: How to stop AI from scanning your email

    Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

    IN TODAY’S NEWSLETTER:

    – How to stop Google AI from scanning your Gmail
    – IRS to roll out Salesforce AI agents following workforce reduction: report
    – AI chatbots shown effective against antisemitic conspiracies in new study

    EYES OFF THE INBOX: Google shared a new update on Nov. 5, confirming that Gemini Deep Research can now use context from your Gmail, Drive and Chat. This allows the AI to pull information from your messages, attachments and stored files to support your research.

    ‘CHANGE IS COMING’: The Internal Revenue Service (IRS) is implementing a Salesforce artificial intelligence (AI) agent program across multiple divisions in the wake of a mass workforce reduction earlier this year, according to a report.

    FACT CHECK TECH: AI chatbots could be one of the tools of the future for fighting hate and conspiracy theories, a new study shows. Researchers found that short dialogues with chatbots designed to engage with believers of antisemitic conspiracy theories led to measurable changes in what people believe.

    The image depicts Archer’s development plans for Hawthorne Airport in Los Angeles, CA. (Archer Aviation)

    SKY TAKEOVER: Archer Aviation, a leading developer of electric vertical takeoff and landing (eVTOL) aircraft, just made one of its boldest moves yet. The company agreed to acquire Hawthorne Airport for $126 million in cash. 

    DIGITAL IMPOSTERS: App stores are supposed to be reliable and free of malware or fake apps, but that’s far from the truth. For every legitimate application that solves a real problem, there are dozens of knockoffs waiting to exploit brand recognition and user trust. We’ve seen it happen with games, productivity tools and entertainment apps. Now, artificial intelligence has become the latest battleground for digital impostors.

    AI TRANSFORMATION: HP announced Tuesday that it plans to cut between 4,000 and 6,000 employees by the end of 2028 as part of its push to adopt artificial intelligence.

    Lettering reading “AI,” for artificial intelligence, at the Amazon Web Services (AWS) stand at the Hannover Messe 2025 industrial trade fair. (Julian Stratenschulte/picture alliance via Getty Images)

    RACE FOR AI: Amazon Web Services (AWS) on Monday announced a plan to build and deploy purpose-built artificial intelligence (AI) and high-performance computing for the U.S. government for the first time.

    BREAKING CHINA: Beijing has repeatedly shown the world that it is willing to weaponize its dominance of supply chains, and President Donald Trump had to de-escalate the latest rare-earth dispute during his recent trip to Asia. But rare earths are only a small window into the power that China could have over the U.S. economy as we start adopting tomorrow’s technologies. 

    NO RESERVATIONS: Maybe you order sparkling water, start every meal with an appetizer or prefer dining right when the restaurant opens. You might not track these habits. OpenTable might.


  • AI Is the New Employee and Colleague. Leaders Must Be Ready for the Change

    Someone recently asked what I thought the future held for CX leaders. My answer was simple. For any leader, the biggest change will be managing and working with AI employees. Work is evolving at an unprecedented pace, and leadership will look different as a result. In 2026 and beyond, leaders must be ready to navigate a world with AI, generational changes, and accelerated expectations for growth.

    AI as an integral part of the team

    I recently tried some new AI tools as “employees” in my consulting firm. They did some fast work, but they also went rogue, and as soon as I got nervous, I hit pause. I did not manage in that moment. Instead, I retreated. That was a lesson in itself. The integration of AI employees is perhaps the single greatest factor redefining modern leadership.

    In 2025, people still view AI as a cost-cutting tool or a threat to their jobs. In the future, the most successful leaders will treat AI as part of the team.

    • Shift from overseer to integrator
      Leaders will not simply manage human teams. Instead, they will manage integrated human-AI workflows. This requires understanding where AI excels, such as data analysis, repetition, and prediction, and where human teams remain indispensable, such as empathy, ethical judgment, and complex negotiation.
    • Ethical oversight
      The leader becomes the ultimate guardian of ethical AI use. This includes ensuring fairness, transparency, and accountability in AI-driven decisions, which will be critical for maintaining employee and customer trust.
    • Focus on honing AI
      As AI automates routine cognitive tasks, leaders must learn how to manage and hone their AI counterparts, just as they would a human. This may prove challenging in a world where one is used to reasoning with a human.

    Generational harmony: Leading a multigenerational workforce

    For the first time at many companies, five generations may coexist in the workplace. Each has distinct expectations regarding communication, work structure, and purpose. Effective leadership in 2026 must be inherently inclusive and adaptable.

    • Distributed communication
      Leaders must move beyond a one-size-fits-all communication strategy. Gen Z, for example, may prefer instantaneous, direct feedback, while older generations may value structured, formal reviews. 
    • Defining purpose
      Younger generations often prioritize work that aligns with their personal values and a clear sense of purpose. The modern leader must be an eloquent storyteller, connecting daily tasks to the organization’s overarching mission and societal impact.
    • Flexible work models
      The hybrid work model is here to stay. Leaders are responsible for ensuring equity between remote and in-office staff, managing “proximity bias,” and cultivating a cohesive culture regardless of physical location. 

    Accelerated expectations for growth: Leading through change 

    During a recent keynote at the ChurnZero ZERO IN conference, I heard the CEO of G2 speak about their board’s expectation of 20% growth with no additional overhead. Leaders are directly responsible for making that kind of efficiency possible.

    Below are some examples of how leadership may change in the face of these accelerated growth expectations:

    Focus area: Tool adoption 
    Traditional leadership approach: Mandating new tools; focusing on ROI. 
    Future-ready leadership in 2026 and beyond: Championing tool fluency; focusing on seamless integration with workflow. 

    Focus area: Pace of change 
    Traditional leadership approach: Incremental, planned change. 
    Future-ready leadership in 2026 and beyond: Continuous reinvention; leading with agility and psychological safety for rapid pivoting. 

    Focus area: Value metric 
    Traditional leadership approach: Activity and effort (hours worked). 
    Future-ready leadership in 2026 and beyond: Outcomes and time-to-value (speed of impact).

    Focus area: Data use 
    Traditional leadership approach: Reviewing data after decisions are made. 
    Future-ready leadership in 2026 and beyond: Fostering data literacy across all teams; using predictive analytics for proactive decision-making. 

    The leader as a learning officer 

    In a world where knowledge has a half-life measured in months, not years, the primary function of leadership is shifting from “knowing all the answers” to “fostering relentless learning.” Leaders must:

    1. Model curiosity.
      Demonstrate a commitment to continuous upskilling, especially concerning AI and emerging technologies. 
    2. Invest in agility.
      Create environments where failure is treated as a high-value data point, encouraging experimentation and rapid iteration. 
    3. Prioritize reskilling.
      Proactively identify skills gaps created by automation and invest heavily in reskilling programs to transition human talent into higher-value roles. 

    The future of leadership is not about maintaining the status quo. It is about embracing complexity, fostering human potential alongside technological power, and leading with radical empathy and clarity of purpose. The challenge is immense, but the opportunity for profound impact is even greater. 

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.


    Parul Bhandari


  • 5 Tips for Leading a Successful AI Transformation

    I heard someone recently say you can’t mandate a mentality. That’s what I think about when I consider the intense push by company leaders to drive AI adoption among their employees. While I personally love AI and it’s been a force multiplier, I also recognize that not everyone is like me.

    All said, if the goal is to drive adoption, I think many organizations, in their fervor to win the AI race, have skipped several steps crucial to a successful effort.

    The Missing Foundation: Change Management

    The first step is change management: the structured approach to transitioning individuals, teams, and organizations from their current state to a desired future state. We talk about change management all the time in business parlance, but in our zeal to beat others out the door, these fundamental principles seem to get set aside.

    That’s a mistake.

    Research from McKinsey shows that 70 percent of change programs fail to achieve their goals, largely due to employee resistance and lack of management support. Adopting AI, like any other major initiative, is a change management process. Mandates are rarely universally accepted, and this top-down approach is often met with significant resistance.

    I’ve written about effective change management and how to communicate change, but if we want to boil down the basics: tell people the who, what, when, why, and how with deep emphasis on “what’s in it for me,” “why are we doing this,” and “why will this help us.”

    The Current Reality

    That’s not what many organizations are doing. Organizational leaders are shifting to AI with the rationale of “because I said so.” For many, that’s not sufficient. As I often say, absent a narrative, people will create one. Leaders need to provide the why, the rationale, and give people the larger vision so they know how to engage with AI.

    Transformation without adequate motivation is stagnation, but transformation with shared vision becomes sustainable momentum.

    5 Essential Questions for a Better Approach 

    So how do we go about it? The answer lies in thoughtfully addressing five fundamental questions before rolling out any AI initiative.

    1. What: Define the Problem You’re Solving

    The first question is: what are we solving for? How can you ask staff to embrace AI tools if you don’t know what they’re leading to? First, figure out what you want to solve. That’s the “what.”

    2. Who: Identify Your Audience
    After you figure out what you’re solving for, you need to determine to whom it applies. AI is not a panacea, and there probably should be specific departments with legitimate use cases identified. For the problem you’ve defined, determine the audience who will be most impacted and who needs to be involved.

    3. Why: Provide the Motivation
    The next aspect is the “why.” People need inspiration, people need motivation, people need to understand why you’re asking them to do what you’re asking them to do. Treat people like adults and give them the reason(s). You can’t just say “because I told you so.” That’s empty, it’s unhelpful, and less than inspiring.

    4. When: Establish Clear Timelines
    Then there is the “when.” When are we trying to get it done? What’s the timeline for this? Because we know what the problem is and what we’re solving for, there should be a date for when we solve it or accomplish a milestone. If you can’t say when, then it remains open-ended forever, and that’s also less than inspiring.

    5. How: Map Out the Execution
    And finally, there is the “how.” This is probably the most underrated of the who, what, when, why, and how construct, but how are we going to do it? There should be clear instructions for how we’re going to achieve the goal. That means thinking about timelines, tools, milestones, roles, responsibilities, owners, and contributors, and mapping all of that out. People need to know what tools they are using and what those tools will help them achieve. And they may need to be trained on the tools.

    What I see far too often is a tool morass, a chaotic proliferation of AI platforms and applications with no clear guidance on which tool serves which purpose, no integration between systems, and no coherent strategy. Employees become overwhelmed by the sheer number of options and paralyzed by uncertainty about which tool to use for their specific needs. This confusion breeds frustration and resistance, ultimately undermining the entire adoption effort.

    Define the tools, the timeline, the anticipated outcomes, and the measures of success. This means investigating tools thoroughly, understanding how they interplay with existing systems, setting clear strategy and guardrails, and choosing company-right tools rather than a scattershot ‘AI everything’ approach.

    Management, Not Mandate

    You can’t mandate a mentality. You can’t force people to embrace AI and reasonably expect it to stick simply because leadership declares it’s important. What you can do is create the conditions for meaningful adoption by treating your people like adults and giving them context, purpose, clear guidance, and a compelling reason to change.

    The organizations that will win the AI race aren’t the ones that move fastest out the gate with mandates and pressure. They’re the ones that take the time to bring their people along on the journey, building genuine buy-in and capability at every level. That’s not just good change management, it’s smart leadership.

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.


    Bernard Coleman


  • How to Handle Unsanctioned AI Tool Use at Your Company

    AI is undeniably useful for certain simple tasks, and more and more people are using it when searching for information, but not every company allows or encourages AI tool use in the office. That’s not stopping workers from using AI anyway, according to a new report. In fact, a staggering number of people may be guilty of using “shadow AI,” including executives and cybersecurity experts.

    The report comes from California-based cybersecurity outfit UpGuard, which surveyed 1,500 workers in the U.S., U.K. and other nations. Its most eye-popping result is that over eight in ten workers are guilty of using unapproved AI tools at work. Half of the respondents admitted they did this regularly. More embarrassingly, 90 percent of cybersecurity professionals surveyed by UpGuard do this too, despite the fact that they really should know better. 

    The report notes that “regardless of company size, geography, industry, employee function or seniority, a sizable majority of workers use AI tools at work that they know are not approved.” The data show that regular use of “shadow AI” may be more common in smaller firms than in larger corporations. Workers in financial firms, the information industry and manufacturing were also more likely to regularly use unapproved AI tools than people in healthcare, education and retail.

    Why are workers doing this? It’s probably because their company either lacks any kind of AI use guidelines, has approved only a limited range of tools that workers may not find useful, or has banned AI use outright, tempting workers who can see AI’s value to lighten their workload by using the tools anyway.

    This widespread use may be driven by surprisingly high levels of trust in AI. The UpGuard report notes that about a quarter of workers surveyed said they felt the AI tools they used were their “most trusted source of information,” putting AI almost level with their managers and above their colleagues. UpGuard links this trust with greater AI use, noting that “employees who view AI tools as their most trusted source of information are far more likely to use shadow AI tools as part of their regular workflow,” as news site HRDive reported.

    Shadow AI use also isn’t confined to frontline workers: midlevel managers were as likely to use unapproved AI as low-level workers, and UpGuard found that executives reported the highest use of unapproved AI tools, underlining once again the wide division between executives and their workforce.

    Using unapproved AI tools is risky because it typically involves an externally supplied third-party service, and any inputs users provide may be used to train later AI models. So if someone uploads sensitive company data, it may leak to other users at a later date, or security lapses by the third-party supplier may expose sensitive information in other ways.

    UpGuard’s survey looked into this and found that despite widespread awareness of these risks, shadow AI users felt they could manage the situation safely. Meanwhile, fewer than half of the respondents said they understood their company’s AI use guidelines, and fully 70 percent said they knew that workers had shared sensitive data with AI models. This points to a training issue in companies rolling out AI — a problem previously reported on — where having the risks explained to workers isn’t enough to deter them from exposing the company to risk anyway.

    The big takeaway from this data for your company is clear: If you don’t have an AI use policy, it’s definitely time to get one. If you have one already, then it’s time to retrain your workers on why it’s important to use only the approved AI tools and to be very careful about what data they share with them. Just chatting with your workers about why they’re using unsanctioned AI systems may also be useful, since it can reveal whether your “official” AI tools are mismatched with the actual frontline tasks your employees are using shadow AI to tackle.


    Kit Eaton


  • World shares are mixed in holiday-thinned trading with Wall Street closed for Thanksgiving

    MANILA, Philippines — World shares were mixed Friday in holiday-thinned trading as tech stocks slipped and a recent rebound, driven by hopes for an interest rate cut by the Federal Reserve, lost steam.

    In early European trading, Germany’s DAX shed nearly 0.2% to 23,730.81 as traders awaited inflation data set to be released later in the day.

    Britain’s FTSE 100 edged up 0.2% to 9,708.36 on gains in energy and mining stocks.

    The CAC 40 in France was nearly unchanged at 8,100.87, despite government data showing France’s economy grew 0.5% quarter-on-quarter in July-September, up from 0.3% in the previous quarter.

    While developments related to artificial intelligence have been driving recent ups and downs in world markets, the focus remains on the outlook for U.S. monetary policy. Recent comments by Fed officials have helped revive hopes the central bank will act during its meeting next month.

    “Everyone is sprinting toward the same conclusion: the Fed will deliver holiday cheer,” Stephen Innes of SPI Asset Management said in a commentary.

    In Asia, Japan’s Nikkei 225 closed 0.2% higher at 50,253.91, rebounding from losses earlier in the day. Data showed Japan’s housing starts rose 3.2% in October from a year earlier, the first annual increase since March. The number defied market expectations of a 5.2% decline and reversed a 7.3% drop in September.

    Government data also showed Tokyo’s year-on-year core inflation in November remained at 2.8%, unchanged from October and above the Bank of Japan’s 2% target. That reinforces expectations of a gradual shift by the central bank to higher interest rates, although a rate hike is not expected at the Bank of Japan’s December meeting.

    South Korea’s Kospi dropped 1.5% to 3,926.59 after the country’s industrial production fell 4% month-on-month in October, a steeper drop than September’s 1.1% decline. Semiconductor production plunged 26.5% month-on-month, pushing down tech stocks such as LG Energy Solution, SK Hynix and Samsung Electronics.

    In Chinese markets, Hong Kong’s Hang Seng index lost 0.3% to 25,858.89. The Shanghai Composite index edged up 0.3% to 3,888.60.

    Australia’s S&P/ASX 200 index fell less than 0.1% to 8,614.10, while Taiwan’s Taiex rose 0.3%. India’s BSE Sensex was unchanged.

    On Wednesday, before the trading holiday in the U.S., stocks closed broadly higher on Wall Street. The S&P 500 gained 0.7%, the Dow rose 0.7% and the Nasdaq composite added 0.8%.

    Early Friday, the futures for the S&P 500 and the Dow Jones Industrial Average were up 0.1%.

    Brent crude, the international standard for pricing, was up 15 cents at $63.02 per barrel.

    The U.S. dollar rose to 156.34 Japanese yen from 156.31 yen. The euro fell to $1.1567 from $1.1596.
