ReportWire

Tag: Artificial Intelligence

  • More data centers coming to Illinois as residents complain about noise, electric bills: What to know


    AURORA, Ill. (WLS) — Data centers are moving in. They power everything from streaming services to artificial intelligence, but critics say they are noisy and can jack up your electric bills.

    Now, the I-Team and ABC News are finding that more than 3,000 data centers are already operating nationwide, with at least 1,000 more planned. Some are in the Chicago area.


    Companies point to economic benefits, but residents are raising concerns about noise and power usage.

    When David Szala moved into his Aurora home in 2015, he knew he was by a data center.

    “You can hear it as soon as you walk out. Fans, just constant with the noise,” Szala said.

    But in recent years, the CyrusOne data center campus has expanded significantly.

    Szala and his neighbor, Bryan Castro, both say they hear cooling fans all day and night, and sometimes, generators create more noise.

    “You feel it in your bones,” Szala said.

    Castro says the buzzing bounces through his backyard, which looked a lot different when he moved there in 2007.

    “You can feel the vibrations in the house,” Castro said. “This was 25 acres of nothing but forest.”

    Neighbors say CyrusOne put up a sound recorder to monitor noise levels and erected walls, but both residents ABC7 spoke with said the walls do not help much.

    “The noise doesn’t drop down and get stopped. The noise radiates from above,” Castro said.

    CyrusOne told the I-Team the noise issue is unique to their Aurora location, and it apologizes “for the impact this situation has had on our neighbors in Aurora. We take responsibility and are well underway with a three-phase engineering project.” The company says additional rooftop sound walls and other noise reduction equipment are on schedule for completion and “we anticipate continued improvement in sound levels.” The city of Aurora also says these steps should help.

    There is also a concern over the rising cost of electric bills.

    “Our electric bills this past year are probably 50% higher than they’ve been years past,” Castro said.

    CyrusOne says it understands that higher energy bills are a concern and it “pays for all electricity we consume at rates established through Illinois’ regulatory framework,” and that it takes steps with utilities to “protect households from cost volatility” and “moderate costs over time.”

    Illinois watchdog group Citizens Utility Board says the cost of improving the infrastructure for data centers can get passed on to consumers.

    “Some of them use a decent amount and some use massive amounts of electricity,” said Citizens Utility Board Executive Director Sarah Moskowitz. “The way that our power system is regulated, you have to build infrastructure, and then, it takes decades to pay it off.”

    Moskowitz continued, “What if the data centers don’t show up, or what if they are there for only a short period of time? Or, what if they don’t use as much electricity as they said? Then, they’re not going to be able to pay that off. And the rest of the customers, those of us who’ve been here, are left holding the bag.”

    ABC7 has also been covering public meetings over proposed data centers, and there are questions about water use and the environment.

    The I-Team and ABC News studied a private company’s Data Center Map and found that there are at least 4,302 data center projects across the U.S., large and small. Of those, 3,038 are currently operational, with another 1,203 either under construction or planned for construction. Sixty-one have acquired land.

In Illinois, there are 164 operating data centers, with another 81 planned for construction. The largest project planned in the state is in Yorkville. It would draw 2 gigawatts and, according to the ABC7 data team, would use about as much energy as approximately 1.7 million homes. That’s more than every home in the city of Chicago.

    Industry experts say the facilities are needed for modern digital infrastructure and can benefit the economy.

    “So, for poor communities that specifically need a big increase in tax revenue, data centers are really good for that. They’re really not very good for jobs. They create a lot of construction jobs, and then a few additional maintenance jobs. But they create very few jobs relative to the resources that they use,” said Effective Altruism DC Director and artificial intelligence expert Andy Masley.

The Illinois Pollution Control Board says there have been no noise enforcement proceedings against data centers anywhere in the state in 2025, and there are no open cases right now.

    “They have to build these things to support what’s going with computers, but they need to keep them away from neighborhoods,” Castro said.

    Illinois state legislators recently introduced a bill that could require data centers to reveal how much water and energy they are using. The bill could also limit the amount of energy costs passed on to consumers.

    You can watch more on “Data Land USA: AI on overdrive next door” on Tuesday morning on “Good Morning America” and throughout the day on ABC News.

    Copyright © 2026 WLS-TV. All Rights Reserved.

Jason Knowles

  • SaaS Companies Take Unusual Step to Prove AI Has Not Mortally Wounded Them


    As a sort of proof-of-life exercise, a collection of private software-as-a-service (SaaS) companies recently posted their earnings despite it not being strictly necessary, according to Bloomberg. This is, you won’t be shocked to read, “a bid to convince lenders of their resilience to disruption from artificial intelligence,” Bloomberg says.

    The SaaS world is in a rough place at the moment because Wall Street sees a near future in its crystal ball where a lot of the dreary computer programs people use at work will be replaced with vibe-coding. The narrative around this is that highly debt-burdened software companies may soon not have enough cash coming in to service their debt—bad for the companies, and bad for the companies they’ve borrowed from.

    The wider phenomenon around this is known as the SaaSpocalypse, and it kicked off in earnest when about $300 billion worth of business software company value vanished from the universe around the start of this month. Companies hit by the high-profile sell-off throughout January and February included LegalZoom, LexisNexis, Thomson Reuters, Salesforce, Adobe, and Figma, according to the New York Times.

So Bloomberg noticed on Tuesday that McAfee had announced earnings that are about the same as this time last year—implying, probably, that it’s not about to miss any debt payments. An “IT modernization” company called Rocket Software saw a 5.2% bump compared to last year, Bloomberg says. Perforce Software’s earnings are down just slightly ($644 million compared to $654 million last year), but a call went out to investors recently in which Perforce’s leaders explained that they would soon increase revenue by “embedding AI into products.”

    An analytics company called Cloudera that Bloomberg described as “unusually private about its financials” is trumpeting “over 50% year-over-year growth,” in a statement on its website. “Cloudera’s momentum is fueled by its unique position as the only data and AI platform vendor supporting deployment anywhere with a unified experience,” it also claims in that statement. 

    As noted by the Harvard Business Review in 2022, SaaS companies are thought of as money-printing machines because they’re on the monthly subscription model, like Netflix, but boring. The sudden frenzy over agentic tools like OpenClaw seems to have conjured a vivid mental image: millions of IT workers across the world smashing the “unsubscribe” button en masse. These SaaS companies themselves are, quite reasonably, demonstrating that the nightmare many are envisioning hasn’t actually come true.

Gizmodo reached out to McAfee, Rocket Software, Perforce, and Cloudera for comment, and will update if we hear back.

Mike Pearl

  • Viral article warns of looming impacts of artificial intelligence


    Matt Shumer joins “CBS Mornings” to discuss his now viral article, “Something Big Is Happening.” He writes that AI’s “capability for massive disruption could be here by the end of this year.” Shumer explains why he wrote the article, and his message to concerned readers.


  • Long Island businesses eye cautious growth in 2026 | Long Island Business News


    THE BLUEPRINT:

    • 45% of Long Island businesses forecast growth in 2026, down from 52% last year.

• Inflation (45%) and retention of young professionals (34%) rank as top concerns.

    • 59% say AI will positively impact business; 51% have invested in AI tools.

    Businesses on Long Island are projecting a cautious outlook for growth in 2026.

That’s according to HIA-LI’s annual business survey, released last week. Conducted in partnership with Adelphi University and Citrin Cooperman, the survey polled an estimated 120 leaders of Long Island-based businesses across a wide range of industries.

    That cautious optimism “doesn’t surprise us,” said Terri Alessi-Miceli, president and CEO of HIA-LI, introducing a panel discussion about the survey, adding that entrepreneurs “go out and fight the good fight every day.”

    And, she said, “I know at least half of you said that you’re going to expand in some way. I think that’s really positive news.”

    The “survey showed that in 2025, many businesses expanded more than they had anticipated, and that was a great thing to see,” said John Fitzgerald, a partner at Citrin Cooperman, who moderated the panel. “We’re seeing … a more cautious outlook for 2026.”

    Forty-five percent of survey respondents forecasted growth, compared to 52 percent last year.

Kevin Santacroce, chief banking officer of ConnectOne Bank, said on the panel that his team is “very optimistic about 2026.”

Looking historically “at the performance of our loan portfolios, our past-dues, we’re at all-time lows with regards to delinquencies and troubled credit,” he said. In addition, he said, viewing balance sheets, “most people are not overly leveraged.” There has also been a stabilization, and most clients, he said, have strong liquidity. “We see our clients pretty well-positioned,” he said.

Despite optimism in the economy, the real estate “industry has struggled,” said Jimmy Coughlan, executive vice president and partner of Tritec. With a rise in construction costs and a period of increased interest rates, “we actually took about a five-year pause on new developments outside of Station Yards.” But now, he said, “we’re finally getting optimistic again.” There is an expectation of more rate cuts in the next two years, which would have “a big impact on our industry. And the housing crisis here is so acute that the demand is overwhelming,” he said.

The survey found that 59 percent expected revenue to increase by less than 10 percent or stay the same, while 14 percent expected revenue to increase by 10 percent or more. Still, 14 percent expected revenue to drop by less than 10 percent, and another 13 percent expected decreases of more than 10 percent.

Of the challenges facing Long Island businesses, 45 percent cited inflation, 34 percent cited retention of young professionals and families, and 8 percent cited tariffs.

As for AI, 59 percent thought it would positively impact their business, and 7 percent thought it could negatively impact business. And while 25 percent expected no effect, 79 percent said they had no plans to freeze hiring or implement a workforce reduction because of efficiencies created by AI. Meanwhile, 51 percent have made some investment in AI tools.

    As for threats, 37 percent of respondents reported being very to extremely concerned, 45 percent were moderately to slightly concerned and 3 percent had no concerns.

When it comes to political issues, 35 percent expressed concern over partisan policy-making that influences the business environment, while 26 percent said immigration is one of the most important issues facing Long Island.

    Top human resources concerns for business included compensation and benefits (41 percent), retention (19 percent), workforce productivity (14 percent) and hiring (13 percent).

Asked where government investment is needed to facilitate growth on Long Island, 40 percent said housing, 35 percent said transportation and infrastructure, 19 percent wanted to see more business grants or incentives, and 3 percent said workforce training and education.

    Additional panelists included Rich Humann, president and CEO of H2M architects + engineers; Rick Lewis, CEO of the Suffolk Y Jewish Community Center; Christopher Nelson, president of St. Catherine of Siena Hospital; and Chris Storm, interim president of Adelphi University.

    Before the panel discussion, Rob Calarco, New York State assistant secretary for intergovernmental affairs – Long Island, delivered a presentation of the governor’s budget proposal.

    The full survey, along with insights, is available here.


Adina Genn

  • AI Money Is Coming to a Midterm Near You


    During the past two election cycles, the giants of cryptocurrency emerged as some of the biggest money players. Sam Bankman-Fried’s PAC spent $70 million on donations in 2022, and Fairshake, a super-PAC formed to support pro-crypto politicians, spent a whopping $245 million in 2024. In just a few years, their bipartisan donations helped reshape the Senate, with cash going to support swing-state Democrats like Ruben Gallego, who pledged to play ball with industry-friendly legislation, while stymieing the election of swing-state Democratic crypto skeptics like Sherrod Brown.

    For the 2026 midterms, it looks like the artificial-intelligence companies are the new players with startling amounts of cash to spend. Bloomberg reports that Marc Andreessen and Ben Horowitz (of the eponymous AI-leaning venture-capital firm) and OpenAI co-founder Greg Brockman are among the leading donors to a super-PAC called Leading the Future, which looks to spend $125 million this cycle. Also in the mix is Public First, a PAC that received a $20 million pledge from Anthropic PBC, the OpenAI rival behind the AI assistant Claude.

Leading the Future is already spending on primary races to boost Democratic and Republican candidates who are friendly to the AI and tech sectors, with appropriately named cutout PACs for both parties. (Take a guess which party is getting funding from American Mission and which is benefiting from Think Big.) In Texas, Leading the Future is supporting pro-AI Republican Chris Gober in the congressional race for the Tenth District outside of Austin, while in New York, it has spent $1.1 million dinging the AI-skeptic state assemblyman Alex Bores, who is running in the crowded primary to replace Jerry Nadler. A spokesperson for Leading the Future told Bloomberg that the PAC is “committed to supporting policymakers who want a smart national regulatory framework for AI.” If the crypto model is any indication, that most likely means industry-friendly regulation written or co-sponsored by lawmakers from both parties who receive bags of campaign cash from AI donors.

Like the crypto-ad blitz of 2024, it may be hard to tell which ads are paid for by the AI PACs. For example, the Gober spots cite his record as a “MAGA warrior” but say nothing about the fact that one of his platforms is to ensure “America’s AI dominance.” The ads condemning Bores, who has proposed consumer-friendly AI regulation, mostly refer to his record working with the defense contractor Palantir. If only AI executives were voting, this might be a good association; Palantir was co-founded by Peter Thiel and is a close partner of the industry titan NVIDIA. But in New York’s progressive 12th District, where Bores is running, his connections to Palantir (and, by association, ICE) could weigh him down.

    While PACs, by their very nature, try to conceal where the money is coming from, there might be another reason why Leading the Future is running ads that obscure a focus on AI: The industry’s obscene energy demand is increasing the cost of electricity in many regions throughout the country. Maybe a cost-of-living election isn’t the best time for a politician to admit that they’re running on AI donations.

Matt Stieb

  • Former NPR Host Accuses Google Of Copying His Voice For AI Offering


    Podcaster David Greene is accusing Google of using his voice without permission to create one of the AI voices in the company’s research and note-taking tool NotebookLM.

    Google added Audio Overviews in the second half of 2024, allowing NotebookLM users to make brief podcast episodes out of pages of notes and documents of any kind. The AI-generated podcasts typically have one male and one female cohost. Greene is now claiming that the male co-host was clearly trained on hours of his hard work, which it allegedly now mimics, and he is suing the company for failing to get his permission or offering him any compensation.

    “Without his consent, Google sought to replicate Mr. Greene’s distinctive voice—a voice made iconic over decades of decorated radio and public commentary—to create synthetic audio products that mimic his delivery, cadence, and persona,” the complaint filed in a state trial court in Santa Clara County, California claims.

Greene was the co-host of NPR’s award-winning Morning Edition program for roughly a decade, and now he hosts KCRW’s Left, Right & Center podcast.

    Following the release of the AI podcasting feature in 2024, the internet praised how the podcasters sounded more human than expected. At the time, Forbes called the feature “eerily human,” while WIRED said that the cadence and vocal performance of the virtual podcasters, and the use of filler words or peculiar phrasing, made the product “stand out.”

    Google has called NotebookLM one of the company’s “breakout AI successes.”  The lawsuit claims that the company “misappropriated a beloved public radio and podcast host’s career, identity, and livelihood as raw material for a tech company’s bottom line without any compensation.”

    Greene was first alerted to the similarity by colleagues, and he then consulted an AI forensic firm to confirm his suspicions. According to the lawsuit, the tests indicated a 53-60% confidence that the voice was Greene’s, with any confidence score above 50% deemed “relatively high.” The CEO of the unnamed forensic company eventually concluded that it was their “confident opinion that the Google Podcast model was trained on David Greene’s voice,” per the lawsuit.

    “These allegations are baseless,” Google spokesperson José Castañeda told Gizmodo. “The sound of the male voice in NotebookLM’s Audio Overviews is based on a paid professional actor Google hired.”

The use of intellectual and artistic property has been a huge issue in AI, leading to several high-profile lawsuits aimed at AI industry giants like OpenAI and Google. Models need lots of data for training, but with limited regulatory guardrails, the lines blur when it comes to proper authorization by, and compensation for, those who labored to create the material the models train on.

    When it comes to mimicking likenesses, such as in voice or video generation, there is also the added uncanny experience of individuals having to surrender all autonomy over their own voice or image, as users can have the models do and say pretty much anything that they want. In a bit of high-profile fallout in 2024, Scarlett Johansson complained about OpenAI after the company allegedly used or replicated her voice to power a ChatGPT voice, even after the actress (who famously voiced an AI companion in the 2013 movie “Her”) declined the company’s requests for her participation.

Ece Yildirim

  • AI out of control? How a single article is sending shock waves with an apocalyptic warning


    Be afraid. Be very afraid.

    That’s the message that has caught fire in the media-tech world when it comes to artificial intelligence (AI).

    This column, for what it’s worth, is being written by a fallible human being on a battered keyboard with no technological assistance.

It’s extremely rare, once in a blue moon, that I read a piece that completely changes my view of an issue.

    Like most people, I have viewed the rise of AI with a mixture of concern, skepticism and bemusement.


    It’s fun to conjure up images on ChatGPT, for instance, and I get that some people use it for hyperspeed research. But then you hear anecdotes about AI screwing up math problems or spewing stuff that’s simply untrue.

    Sure, we’ve all seen warnings that this fast-growing technology will cost some people their jobs, but I assumed that would be mainly in Silicon Valley. The era of plane travel didn’t wipe out passenger trains or buses, though it was curtains for the horse-and-buggy business.

But now comes Matt Shumer, who works in AI, and he’s not simply joining the prediction sweepstakes. He tells us what is happening right now.

    Last year, he says, “new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn’t just better than the last… it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.”

On Feb. 5, two major companies, OpenAI and Anthropic, released new models that Shumer likens to “the moment you realize the water has been rising around you and is now at your chest.”


    Bingo: “I am no longer needed for the actual technical work of my job. I describe what I want built in plain English, and it just … appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.”

    Wait, there’s more. The new GPT model “wasn’t just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.”

    This goes well beyond the geeky world of techies, in case you were feeling immune. “Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I’ve seen in just the last couple of months, I think ‘less’ is more likely.”


    My knee-jerk reaction is, well, I’ll be okay because no super-smart bot could talk about news on TV or podcasts with the same attitude and verve that I do. Then I remember, even as a writer, that news organizations are increasingly relying on AI.

What about musicians who bring soul to their rock ’n’ roll or bop to their pop? Well, the most popular AI singer is Xania Monet. Some fans were stunned to discover she wasn’t real (she was created by an actual poet, Telisha “Nikki” Jones), and most listeners didn’t care. In fact, “Xania” now has a multimillion-dollar recording deal.

    One other sobering thought: “Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years.”

    Gulp.


    This has really hit the media echo chamber, reverberating from Axios to the New York Times to the Wall Street Journal, among others.

The fact that Matt Shumer presents this in a measured tone, not a sky-is-falling shout, adds to his credibility.

Anthropic, for its part, released a study of its Claude Opus model, evaluating it “against any attempt to autonomously exploit, manipulate, or tamper” with a company’s operations “in a way that raises the risk of future catastrophic outcomes.”

    The report added: “We do not believe it has dangerous coherent goals that would raise the risk of sabotage, nor that its deception capabilities rise to the level of invalidating our evidence.”


    Meanwhile, National Review provides a counterweight to what’s called “doomerism.”

    For one thing, “most predictions anticipate that AI will be a top-down disruption rather than a bottom-up phenomenon.”

    For another, writes Noah Rothman, “there is almost no room in the discourse for undesirable outcomes that fall short of catastrophism. After all, modesty and prudence do not go viral.”

    And what about the positive impact?


“Rather than wiping out whole sectors, it is just as possible that the workers displaced by AI will be retained in the sectors in which they’re already employed,” the magazine argues. “It defies logic to assume that an industry that grows as rapidly as AI is predicted to will not need human data scientists, research analysts, specialized engineers, and, yes, even support and administrative staff. In addition, sectors such as health care, agriculture, and emerging industries will require as much, or even more, human talent than they currently employ.”

    The conservative magazine is also annoyed that “participants in this debate default to the assumption that the only solution to AI’s disaggregating potential, whatever its scale, is big government.”

    Well, take your pick.


    If AI, which can now code well enough to reproduce itself, doesn’t wipe out zillions of jobs, or society finds ways to adapt, we can all breathe a very human sigh of relief.

And if artificial intelligence is as destructive as Shumer’s alarming article says it already is, we can’t say we weren’t warned, but perhaps we can harness it to do our jobs for us while we work three days a week with three-hour lunches.

    I’m agnostic at this point, except to say it’s going to be a wild ride.


  • Hollywood groups condemn ByteDance’s AI video generator, claim copyright infringement


    A new artificial intelligence video generator from Beijing-based ByteDance, the creator of TikTok, is drawing the ire of Hollywood organizations that say Seedance 2.0 “blatantly” violates copyright and uses the likeness of actors and others without permission.

Seedance 2.0, which is only available in China for now, lets users generate high-quality AI videos using simple text prompts. The tool quickly drew condemnation from the movie and TV industry.

    The Motion Picture Association said Seedance 2.0 “has engaged in unauthorized use of U.S. copyrighted works on a massive scale.”

    “By launching a service that operates without meaningful safeguards against infringement, ByteDance is disregarding well-established copyright law that protects the rights of creators and underpins millions of American jobs. ByteDance should immediately cease its infringing activity,” Charles Rivkin, chairman and CEO of the MPA, said in a statement Tuesday.

Screenwriter Rhett Reese, who wrote the “Deadpool” movies, said on X last week, “I hate to say it. It’s likely over for us.” His post was in response to Irish director Ruairí Robinson’s post of a Seedance 2.0 video that shows AI versions of Tom Cruise and Brad Pitt fighting in a post-apocalyptic wasteland.

    Actors union SAG-AFTRA said Friday it “stands with the studios in condemning the blatant infringement” enabled by Seedance 2.0.

    “The infringement includes the unauthorized use of our members’ voices and likenesses. This is unacceptable and undercuts the ability of human talent to earn a livelihood,” SAG-AFTRA said in a statement. “Seedance 2.0 disregards law, ethics, industry standards and basic principles of consent. Responsible AI development demands responsibility, and that is nonexistent here.”

    ByteDance said in a statement Sunday that it respects intellectual property rights.

    “(We) have heard the concerns regarding Seedance 2.0. We are taking steps to strengthen current safeguards as we work to prevent the unauthorized use of intellectual property and likeness by users,” the company said.


  • Airbnb is testing out AI search with a ‘small percentage’ of users


    Airbnb plans to double down on artificial intelligence to improve its user experience for both guests and hosts. During a fourth-quarter earnings call, Airbnb’s CEO, Brian Chesky, said the company is building an “AI-native experience” aimed at helping guests book trips, assisting hosts with their listings, and running the company more efficiently. According to Chesky, there’s an AI search tool to help guests book trips that’s live for a small percentage of users right now.

    In a shareholder letter posted on Airbnb’s website, the company said it’s conducting early testing with an AI-powered search that is “focused on giving guests a more natural way to describe what they’re looking for, and ask questions about the listing and location.” The letter added that the AI search tool will become “a more comprehensive and intuitive search experience that extends through the trip,” but the company didn’t offer a definitive date on when it would be available to the public.

While it may feel like Airbnb is late to incorporating AI into its ecosystem, it introduced an AI chatbot that handles customer service requests last year. Although the AI agent is currently only available to users in North America, Airbnb said it already handles a third of customer requests without the need for human intervention, as reported by TechCrunch. Chesky also said during the earnings call that the chatbot would tackle “significantly more” customer tickets a year from now and that it would roll out to the rest of the world.

Jackson Chen

  • Dems Want to Ban Surveillance Pricing at Big Grocery Stores


    Sen. Ben Ray Luján, a Democrat from New Mexico, and Sen. Jeff Merkley, a Democrat from Oregon, introduced legislation Thursday that would ban so-called surveillance and surge pricing in grocery stores. Officially known as the Stop Price Gouging in Grocery Stores Act of 2026, the Senate legislation is modeled on a 2025 bill in the House.

    The new bill would require stores to disclose their use of facial recognition technology and would ban electronic shelf labels (ESL) in large grocery stores. ESLs are controversial because they allow retailers to change the price of a given item remotely, opening up the possibility that they could be tied to algorithms which raise and lower prices based on conditions in the store or who’s trying to buy something.

    Hypothetically, stores could charge different prices at different times of day or rely on other inputs, right down to personalizing the price for the individual shopper looking at a given item, identified with facial recognition tech. The concern is that factors like race, gender, and income level could be used to determine how much people are charged. A 2025 study found that Instacart was charging customers different prices for the same products, sometimes as much as 23% more. A few weeks after the study received negative press coverage, Instacart announced it was pulling the plug on its AI-powered pricing.
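    The kind of algorithm critics worry about is easy to sketch. The following is a hypothetical illustration only, not any retailer’s or Instacart’s actual system; the rule set, the multipliers, and the `shelf_price` function are all invented for the example.

```python
# Hypothetical sketch of algorithm-driven shelf pricing. The rules and
# multipliers below are invented for illustration; no retailer's real
# system is described here.

def shelf_price(base: float, hour: int, stock_level: int,
                shopper_segment: str = "anonymous") -> float:
    """Price an electronic shelf label could display under simple rules."""
    price = base
    if 17 <= hour <= 19:                 # evening rush: surge markup
        price *= 1.10
    if stock_level < 10:                 # scarce item: scarcity markup
        price *= 1.05
    if shopper_segment == "new_parent":  # personalized markup critics fear
        price *= 1.15
    return round(price, 2)

# Two shoppers, same item, same moment: different prices once the
# algorithm knows (say, via facial recognition) who is standing there.
anonymous = shelf_price(4.00, hour=18, stock_level=5)
targeted = shelf_price(4.00, hour=18, stock_level=5, shopper_segment="new_parent")
print(anonymous, targeted)
```

    The bills described here target exactly this gap: the disclosure requirement would surface that such rules exist, and the ESL ban would remove the remote-update channel that makes per-minute or per-person repricing practical.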

    “In New Mexico and across the country, Americans are struggling to put food on the table,” Sen. Luján said in a statement posted online. “With rising costs driven by President Trump’s trade war and Republican cuts to SNAP, Congress must act to ensure that technologies are being used to improve the lives of Americans, not increase their grocery bills. Our friends, family, and neighbors should be able to shop at their local grocery store without worrying about predatory pricing.”

    At least six states have seen legislation introduced to stop surge and surveillance pricing, according to the United Food and Commercial Workers International Union (UFCW), which has also developed a 30-second ad to spread the word on the threat.

    It’s not clear how many grocery outlets are actually utilizing in-store surveillance pricing, but part of the reason legislators feel like new laws are needed is that they want to get ahead of things before the practice becomes commonplace.

    “This legislation is actually pretty simple: If two people are in the same store buying the same item, they should pay the same price,” Washington State Representative Mary Fosse said in an emailed statement.

    “Large retailers are investing in AI, algorithms, and data systems that can change prices instantly, individually, and secretly,” Fosse continued. “We need to stop the rip-off at the register before these practices become the norm. Technology should serve workers and consumers, not exploit them.”

    The Biden administration launched an investigation into surveillance pricing in 2024 with FTC chair Lina Khan initiating a study on the ways it may harm U.S. consumers. But after President Donald Trump took power in 2025, his administration killed the study.

    Surge pricing for food is extremely unpopular. In one of the most famous cases, Wendy’s merely discussed in 2024 the possibility of introducing it in 2025, and within just a couple of days the backlash had gotten so bad that the company denied even contemplating the idea, despite pretty clear evidence it was working on surge pricing. The restaurant chain’s CEO had even said it would “begin testing more enhanced features like dynamic pricing” in an earnings call.

    Consumers are extremely price sensitive when it comes to food these days, and it’s no wonder, as people struggle to get by in an economy that prioritizes stock prices and Wall Street.

    “Americans are hurting under the affordability crisis, and UFCW members see the pain in their faces every time they enter the grocery store,” UFCW International President Milton Jones said in a statement to Gizmodo. “Our members also feel it themselves when they shop for their families.”

    “We are starting this national campaign to stop corporations from being able to change prices in front of their eyes just because they live in the wrong zipcode or are a new parent. We are proud to work with elected officials in every part of the country to lead the fight for affordable groceries and good jobs because that is what our members want.”

    [ad_2]

    Matt Novak

    Source link

  • Video: Can You Rely on A.I. to Translate Love?

    [ad_1]

    A.I. translation has become a huge industry, but how accurate is it? Our tech reporter, Kashmir Hill, explores its successes and failures through a couple who relies on A.I. translation to communicate.

    By Kashmir Hill, Gilad Thaler, Kassie Bracken, Jon Miller, Jon Hazell and Joey Sendaydiego

    February 14, 2026

    [ad_2]

    Kashmir Hill, Gilad Thaler, Kassie Bracken, Jon Miller, Jon Hazell and Joey Sendaydiego

    Source link

  • AI tool Claude helped capture Venezuelan dictator Maduro in US military raid operation: report

    [ad_1]

    The U.S. military used Anthropic’s artificial-intelligence tool Claude during the operation that captured Venezuelan dictator Nicolás Maduro, according to a report.

    Last month, U.S. special operations forces captured Maduro and his wife, who were brought to the U.S. to face sweeping narcotics charges.

    Claude was deployed through Anthropic’s partnership with data company Palantir Technologies, whose tools are widely used by the Defense Department and federal law enforcement, according to The Wall Street Journal, which cited people familiar with the matter.

    “We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise,” an Anthropic spokesperson told Fox News Digital. “Any use of Claude — whether in the private sector or across government — is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance.”

    Captured Venezuelan President Nicolas Maduro is escorted as he heads toward the Daniel Patrick Moynihan United States Courthouse for an initial appearance to face U.S. federal charges including narco-terrorism, conspiracy, drug trafficking, money laundering and others in New York City, U.S., January 5, 2026. (Adam Gray/Reuters)

    Anthropic’s usage guidelines prohibit Claude from being used for violence, weapons development, or surveillance.

    A source familiar with the matter told Fox News Digital that Anthropic has visibility into classified and unclassified usage and has confidence that all usage has been in line with Anthropic’s usage policy, as well as its partners’ own compliance policies.

    Reached by Fox News Digital, the Department of War declined to comment.

    The U.S. military reportedly used Anthropic’s AI tool Claude during the operation that captured Venezuelan leader Nicolás Maduro. (Kurt “CyberGuy” Knutsson)

    Anthropic was the first AI model developer whose tools were used in classified operations by the Department of War, according to the Journal.

    Anthropic has raised concerns about how Claude can be used by the Pentagon, prompting officials within the Trump administration to consider canceling its contract worth up to $200 million, which was awarded last summer, the paper reported.

    The AI tools can be used for everything from summarizing documents to controlling autonomous drones, the outlet noted.

    The Trump administration has prioritized AI development, and in December War Secretary Pete Hegseth said “the future of American warfare is here, and it’s spelled AI.”

    Anthropic’s artificial-intelligence model Claude was reportedly used in a classified U.S. military operation targeting Nicolás Maduro. (Eduardo Munoz/Reuters)

    “As technologies advance, so do our adversaries,” he said. “But here at the War Department, we are not sitting idly by.”

    [ad_2]

    Source link

  • Fei-Fei Li and Andrej Karpathy Back a New A.I. Use Case: Simulating Human Behavior

    [ad_1]

    A.I. pioneer Fei-Fei Li is lending her support to Simile’s effort to simulate human behavior at scale. John Nacion/Variety via Getty Images

    Every three months, public companies brace for analyst questions during quarterly earnings calls. But what if firms could predict these queries in advance and rehearse their responses? That’s one of the capabilities touted by Simile, a new A.I. startup spun out of Stanford and backed by acclaimed researcher Fei-Fei Li and OpenAI co-founder Andrej Karpathy.

    Simile emerged from stealth yesterday (Feb. 12) with $100 million in funding from a round led by Index Ventures. Alongside Li and Karpathy, the startup—which hasn’t disclosed its valuation—also counts investors including Quora co-founder Adam D’Angelo and Scott Belsky, a partner at A24 Films.

    Li and Karpathy both have close ties to Simile’s founding team, which includes Stanford researchers Joon Park, Percy Liang and Michael Bernstein. Li is the co-director of Stanford’s Human-Centered A.I. Institute and advised Karpathy during his Ph.D. study at the university. She is widely known for foundational work such as ImageNet, a large-scale image database that helped drive major breakthroughs in computer vision. Karpathy and Bernstein also contributed to that project.

    Simile’s mission of using A.I. to reflect and model societal behavior taps into an underexplored research area, according to Karpathy, who previously worked at OpenAI and Tesla before launching his own education-focused A.I. startup. While large language models typically present a single, cohesive personality, Karpathy argues they are actually trained on data drawn from vast numbers of people. “Why not lean into that statistical power: Why simulate one ‘person’ when you could try to simulate a population?” he wrote in a post on X.

    That idea underpins Simile’s broader goal. The Palo Alto-based startup aims to simulate the real-world effects of major decisions, from public policy to product launches, across virtual populations that mirror human behavior. The team has already tested this concept on a smaller scale through projects like Smallville, a 2023 Stanford experiment in which 25 autonomous A.I. agents interacted in a virtual environment.

    Now, Simile is scaling the approach for business use. After spending the past seven months developing its model, the company is already working with clients on applications ranging from product development to litigation forecasting. CVS Health Corporation, for example, uses Simile to create simulated focus groups, while Gallup uses the platform to build digital polling panels. For earnings calls, Simile can predict about 80 percent of the questions that analysts ultimately ask, said Park, the startup’s CEO, during a recent appearance on TBPN.

    At present, Simile’s models are based on data from hundreds of thousands of people who have signed up for its studies. Over time, the company hopes to expand that to simulations representing the world’s entire population of roughly 8 billion people.

    Simile joins a growing wave of A.I. companies focused on using simulation to model real-world scenarios. Much of the existing research in this space has centered on physical systems, such as robotics and autonomous vehicles, through “world model” platforms developed by firms like Google and Nvidia.

    One of the most prominent figures in world models is Li herself. In 2024, she took a leave of absence from Stanford to launch World Labs, a startup that builds 3D digital environments from image and text prompts. The company has raised $230 million to date and is valued at more than $1 billion.

    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link

  • Beacon Hill targets AI in political advertising

    [ad_1]

    BOSTON — Doctored photos and video footage coupled with ads twisting candidates’ words have been used for decades in political campaigns, but the rise of artificial intelligence has elevated such deceptive tactics to a new level.

    That has prompted a bipartisan push on Beacon Hill for restrictions on the misuse of the technology to sway voters and bash political opponents.

    A pair of bills that cleared the Democratic-controlled House Ways and Means Committee on Tuesday with a favorable vote would require campaigns to disclose the use of AI in political advertisements and ban “deceptive” communications in campaign ads 90 days before an election.

    In a joint statement, House Speaker Ron Mariano, D-Quincy, and House Ways and Means Chairman Aaron Michlewitz, D-Boston, said House Democrats plan to put both bills up for debate and a vote at a formal session Wednesday.

    “As artificial intelligence continues to reshape our economy and many aspects of our daily lives, lawmakers have a responsibility to ensure that AI does not further the spread of misinformation in our politics,” they said.

    “House leadership continues to have productive conversations with the membership on this issue, and we look forward to passing this important legislation on Wednesday.”

    One bill, filed by Rep. Tricia Farley-Bouvier, D-Pittsfield, would prohibit anyone running for elected office from distributing deceptive or fraudulent “synthetic” ads within 90 days of an election in which the candidate or their political party will appear on state or local ballots. Violators would face fines of up to $1,000 under the proposal.

    Another bill, filed by House Minority Leader Brad Jones, R-North Reading, would require political campaigns to disclose the use of any AI technology to generate TV, digital or print ads targeting their opponents.

    Political observers anticipate an onslaught of sophisticated AI-generated video or audio clips in presidential ads for television and social media sites ahead of the pivotal November midterm election when control of Congress will be up for grabs.

    A 2024 report issued by the Congressional Research Service, a public policy research arm of Congress, warned that deepfakes could also be generated by rogue countries or foreign adversaries to meddle in the upcoming presidential elections.

    “State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately,” the report’s authors wrote. “Doing so could, in turn, erode public trust, negatively affect public discourse, or even sway an election.”

    In 2024, the Federal Election Commission voted to begin the process of regulating AI-generated deepfakes in political ads ahead of the 2024 election. The panel held a 60-day public hearing process, but has yet to take action on any new regulations. A lack of FEC commissioners means the panel does not have a quorum to meet or vote on sanctions.

    A group of congressional lawmakers, including Massachusetts Reps. Seth Moulton and Jim McGovern, wrote to the FEC in July 2024, urging the agency to act on a petition from good government groups to set restrictions on deep fake political advertising.

    “Quickly evolving AI technology makes it increasingly difficult for voters to accurately identify fraudulent video and audio material, which is increasingly troubling in the context of campaign advertisements,” they wrote.

    Christian M. Wade covers the Massachusetts Statehouse for North of Boston Media Group’s newspapers and websites. Email him at cwade@cnhinews.com.

    [ad_2]

    By Christian M. Wade | Statehouse Reporter

    Source link

  • Bitcoin biopic starring Casey Affleck to use AI to generate locations and tweak performances

    [ad_1]

    Killing Satoshi, an upcoming biopic about the elusive creator of Bitcoin, will reportedly rely heavily on artificial intelligence to generate locations and adjust actors’ performances, Variety reports. The film was announced in 2025 as being directed by Doug Liman (The Bourne Identity, The Edge of Tomorrow) and starring Casey Affleck and Pete Davidson in undisclosed roles, but its connection to overhyped technology was previously understood to begin and end with cryptocurrency.

    According to a UK casting notice viewed by Variety, the producers of Killing Satoshi reserve the right to “change, add to, take from, translate, reformat or reprocess” actors’ performances, using “generative artificial intelligence (GAI) and/or machine learning technologies.” No digital replicas will be created of performers, but it sounds like plenty of other AI-driven tweaks are on the table. The production’s use of AI will also extend to the setting of its shoots, per Variety’s source. Killing Satoshi will be shot on a “markerless performative capture stage” and things like backgrounds and locations will be entirely generated by AI.

    Your guess is as good as mine as to why a film about blockchain technology needs to be filmed this way, but Doug Liman has been connected with plenty of unusual projects in the past, including a rumored Tom Cruise film that was supposed to shoot on the International Space Station. Killing Satoshi will be far less practical in comparison, while walking a much finer line on what’s acceptable in the entertainment industry.

    A major sticking point in SAG-AFTRA’s 2023 contract negotiations was guaranteeing protections for actors who could be replaced by AI. Equity, the union representing actors in the UK, is currently negotiating protections for members who are concerned that AI could be used to reproduce their likenesses and voices and let studios use them without their consent.

    [ad_2]

    Ian Carlos Campbell

    Source link

  • World’s fastest humanoid robot runs 22 MPH

    [ad_1]

    A full-size humanoid robot just ran faster than most people will ever sprint. 

    Chinese robotics firm MirrorMe Technology has unveiled Bolt, a humanoid robot that reached a top speed of 22 miles per hour during real-world testing. This was not CGI or a computer simulation. The footage, shared by the company on X, shows a real humanoid robot running at full speed inside a controlled testing facility.

    That milestone makes Bolt the fastest running humanoid robot of its size ever demonstrated outside computer simulations. For robotics, this is a line-crossing moment.

    MirrorMe Technology’s humanoid robot Bolt reaches 22 mph during a real-world sprint test inside a controlled facility. (Zhang Xiangyi/China News Service/VCG via Getty Images)

    What allows the world’s fastest humanoid robot to run at 22 mph

    In the promotional video, the run is shown using a split-screen view. On one side of the screen, Wang Hongtao, the founder of MirrorMe Technology, runs on a treadmill. On the other side, Bolt runs under the same conditions. The comparison makes the difference clear. As the pace increases, Wang struggles to keep up and eventually gives up, while Bolt continues running smoothly, maintaining balance as its stride rate increases.

    Bolt takes shorter strides than a human runner but makes up for it with a much faster stride rhythm. That faster rhythm helps the robot stay stable as it accelerates. Engineers say this performance reflects major progress in humanoid locomotion control, dynamic balance and high-performance drive systems. Speed is impressive. Speed with control is the real achievement.
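    The tradeoff the engineers describe is plain arithmetic: running speed is stride length times stride rate. A quick sketch with assumed numbers (these are illustrative values, not figures MirrorMe has published) shows how a shorter stride at a faster rhythm can still win:

```python
# Running speed = stride length x stride rate. The stride lengths and
# cadences below are assumed values for illustration, not measurements
# published by MirrorMe.

MPH_PER_MPS = 2.23694  # convert meters per second to miles per hour

def speed_mph(stride_length_m: float, strides_per_second: float) -> float:
    """Speed implied by a given stride length and stride rate."""
    return stride_length_m * strides_per_second * MPH_PER_MPS

# A human sprinter: long stride, moderate cadence.
human = speed_mph(2.2, 4.0)
# A robot like Bolt: shorter stride, much faster cadence.
robot = speed_mph(1.6, 6.2)
print(f"human {human:.1f} mph, robot {robot:.1f} mph")
```

    Even with a stride roughly 25 percent shorter, the higher cadence more than closes the gap, which is why stability at high stride rates, not raw stride length, reads as the core achievement here.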

    The humanoid robot design choices behind Bolt’s speed

    Bolt stands about 5 feet, 7 inches tall and weighs roughly 165 pounds, putting it close to the size and mass of an average adult human. MirrorMe says that similarity is intentional. The company describes this as the ideal humanoid form. 

    Rather than oversized limbs or exaggerated mechanics, Bolt relies on newly designed joints paired with a fully optimized power system. The goal is to replicate natural human motion while staying stable at extreme speeds. That combination is what sets Bolt apart.

    MirrorMe says Bolt’s 22 mph run highlights stability and control, not just raw speed. ( Cui Jun/Beijing Youth Daily/VCG via Getty Images)

    Why Bolt’s sprint reflects years of robotics development

    Bolt did not appear overnight. MirrorMe has focused on robotic speed as a long-term priority since 2016. Last year, its Black Panther II robot stunned viewers by sprinting 328 feet in 13.17 seconds during a live television broadcast in China. Reports suggested the performance exceeded comparable tests involving Boston Dynamics machines. 

    In 2025, the company also set a record with a four-legged robot that surpassed 22 mph, reinforcing its focus on acceleration, agility and sustained high-speed motion. China’s interest in robotic athletics continues to grow. Beijing even hosted the first World Humanoid Robot Games, where humanoid robots competed in sprint races on a track.

    Why MirrorMe says speed is not the end goal

    Running at 22 mph grabs attention, but MirrorMe says speed alone is not the point. The engineers behind Bolt care more about what happens at that speed. Balance, reaction time and control matter more than a headline number. Those skills are what let a humanoid robot move like a trained runner instead of a machine on the verge of tipping over.

    That is where the athlete angle comes in. MirrorMe envisions Bolt as a training partner that can run alongside elite athletes, hold a steady pace and push limits without getting tired. By matching and slightly exceeding human performance, the robot could help runners fine-tune form, pacing and endurance while collecting precise motion data. In that context, the sprint is not a stunt. It shows how humanoid robots could move beyond demos and into real training and performance settings.

    What this means to you

    Humanoid robots that can run faster than most people can sprint are no longer something you only see in demos or concept videos. As these machines get faster and more stable, they start to fit into real-world roles. That includes athletic training, emergency response and physically demanding jobs where speed and endurance make a real difference. At the same time, faster robots bring real concerns. Safety, oversight and clear rules matter even more when machines can move this quickly around people. When robots run this fast, the limits need to be clear.

    Engineers say Bolt’s high-speed sprint reflects advances in locomotion control, balance and drive systems. (Photo by Kevin Frayer/Getty Images)

    Kurt’s key takeaways

    Bolt running at 22 mph is eye-catching, but the speed is not the main takeaway. What matters is what it shows. Robots are starting to move more like people. They can run, adjust and stay upright at speeds that used to knock machines over. That opens the door to real uses, but it also raises real questions. How fast is too fast around people? Who sets the rules? And who is responsible when something goes wrong? The technology is moving quickly. The conversation around it needs to move just as fast.

    If humanoid robots can soon outrun and outtrain humans, where should limits be set on how and where they are allowed to operate? Let us know by writing to us at Cyberguy.com.

    Copyright 2026 CyberGuy.com. All rights reserved.

    [ad_2]

    Source link

  • Amazon scraps partnership with surveillance company after Super Bowl ad backlash

    [ad_1]

    Amazon’s smart doorbell maker Ring has terminated a partnership with police surveillance tech company Flock Safety.

    The announcement follows a backlash that erupted after a 30-second Ring ad aired during the Super Bowl featuring a lost dog found through a network of cameras, sparking fears of a dystopian surveillance society.

    But that feature, called Search Party, was not related to Flock. And Ring’s announcement doesn’t cite the ad as a reason for the “joint decision” to cancel.

    Ring and Flock said last year they were planning on working together to give Ring camera owners the option to share their video footage in response to law enforcement requests made through a Ring feature known as Community Requests.

    “Following a comprehensive review, we determined the planned Flock Safety integration would require significantly more time and resources than anticipated,” Ring’s statement said.

    “The integration never launched, so no Ring customer videos were ever sent to Flock Safety.”

    Beyond the Flock partnership, Ring has faced other surveillance concerns.

    In the Super Bowl ad, a lost dog is found with Ring’s Search Party feature, which the company says can “reunite lost dogs with their families and track wildfires threatening your community.” The clip depicts the dog being tracked by cameras throughout a neighborhood using artificial intelligence.

    Viewers took to social media to call the ad sinister, with many wondering whether the feature would be used to track humans and saying they would turn it off.

    The Electronic Frontier Foundation, a nonprofit that focuses on civil liberties related to digital technology, said this week that Americans should feel unsettled over the potential loss of privacy.

    “Amazon Ring already integrates biometric identification, like face recognition, into its products via features like “Familiar Faces,” which depends on scanning the faces of those in sight of the camera and matching it against a list of pre-saved, pre-approved faces,” the Foundation wrote Tuesday. “It doesn’t take much to imagine Ring eventually combining these two features: face recognition and neighborhood searches.”

    [ad_2]

    Source link

  • Albanian Actor Sues Government for Using Her Image as ‘AI Minister’

    [ad_1]

    By Fatos Bytyci and Florion Goga

    TIRANA, Feb 13 (Reuters) – An Albanian actor is suing the government for using her face and voice to create the avatar for an “AI minister” – a virtual member of the cabinet.

    When Edi Rama began his fourth term as Albania’s prime minister last September, he also unveiled an AI-generated virtual minister, “Diella” – sun in Albanian – to oversee the awarding of government contracts as a step to fight corruption.

    Diella features the face and voice of Anila Bisha, a film and theatre actor who says she never gave consent for her likeness to be used that way, and that it has led to harassment online and unwanted attention in the street.

    “First I was surprised, smiled and I said it must be a joke,” Bisha told Reuters. “Now people call me Diella and they consider me as just another minister of the government.”

    She says she allowed her likeness to be used last year to create an AI-powered virtual assistant for a government website to help citizens and businesses get state documents, but not as a virtual politician on the prime minister’s team.

    “People who don’t like the prime minister, now they also hate me.”

    The government denies using her likeness improperly. The “lawsuit is nonsense, but we welcome the opportunity to solve it once and for all in a court of law,” the government’s press office said in response to questions from Reuters.

    The Albanian government’s public image has been battered since December after a special prosecution unit indicted Rama’s deputy, Belinda Balluku, for meddling in tenders for infrastructure projects, which she denies.

    Diella’s image appears in the first row of the cabinet list on the government’s website, next to photos of Rama and Balluku. 

    A court is expected to rule on Monday whether to order the government to stop using her image. Her lawyer, Aranit Roshi, said Bisha is seeking 1 million euros in damages.

    “The law says that in cases of personal data violation, penalties for state institutions are up to 21 million euros so our request for 1 million is a reasonable amount,” he said.

    (Reporting by Fatos Bytyci and Florion Goga; Editing by Peter Graff)

    Copyright 2026 Thomson Reuters.



  • People — and robots — are getting ready to celebrate the Lunar New Year in China


    BEIJING — It’s not just people — in China, the robots are also getting ready to celebrate the Lunar New Year.

    Friday was dress rehearsal day for four cute humanoid robots, each about 95 centimeters (3 feet) tall, at a mall in western Beijing. Curious onlookers stopped to watch.

    Each robot got a colorful lion costume and within minutes the moves started: Bend the knees, up, to the left, to the right, shake the mask, and do it all again!

    Ahead of the Lunar New Year, celebrated next week, some venues around Beijing have been busy setting up stages and props as part of various fairs and activities.

    For a second year in a row, one of the fairs will be devoted to technology and — yes, again — robots will take center stage.

    People will see them dancing, stacking blocks on top of one another to make little towers, skewering syrup-coated hawthorn berries onto sticks (a popular sweet snack), and playing soccer.

    “This year, the number of our robots has increased a lot,” said Qiu Feng, a member of the organizing committee. “They will perform dance, martial arts, Peking Opera, poetry and soccer.”

    “Some events were also available last year, but the finesse of the actions and the high-tech vibe are stronger” this time, Qiu added.

    China has been scaling up its efforts to develop better robots that can perform different activities, powered by artificial intelligence and with less human intervention.

    But though they can now do things that were difficult to imagine a few years ago, humans are still needed to help them — for example, to dress them or move them when they stop in the middle of a mini-soccer field.

    “Technology is developing faster and becoming more advanced every day,” Qiu also said. “As long as we keep up with this trend, our … fair will continue to evolve and rise with the times.”

    The robots performing at the mall were developed by Chinese startups such as Booster Robotics. The company will display around 20 humanoid robots, which will also dance and play soccer.

    “It is an AI environment, which means, once the whistle sounds, the remote control will all be put aside and all its decision-making and motion control are made by the robots themselves,” said Ren Zixin, director of marketing at Booster Robotics.
