ReportWire

Tag: Artificial Intelligence

  • Fox News AI Newsletter: Trump activates ‘tech force’



    Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

    IN TODAY’S NEWSLETTER:

    Inside Trump administration’s AI tech force designed to modernize government
    Elon Musk slams Anthropic AI models as ‘misanthropic’ and ‘evil’ in scathing social media post
    AI power players pour cash into competitive primaries as 2026 midterms heat up

‘TECH FORCE’: Inside Trump administration’s AI initiative designed to modernize government – FOX Business reports on the Trump administration’s latest initiative to overhaul federal efficiency, detailing an internal AI “tech force” tasked with modernizing antiquated government systems and streamlining operations.

TECH ALLIANCE: War Department to partner with OpenAI to integrate ChatGPT, GenAI for military use – The Department of Defense is reportedly strengthening its technological arsenal through a strategic partnership with OpenAI, aiming to integrate ChatGPT and generative AI capabilities into military operations to maintain a strategic edge.

    SCATHING POST: Elon Musk slams Anthropic AI models as ‘misanthropic’ and ‘evil’ – Tech billionaire Elon Musk took to social media to unleash a scathing attack on rival AI firm Anthropic, characterizing their models as “misanthropic” and “evil” in a post that highlights the intensifying ideological rift within Silicon Valley.


Elon Musk, chief executive officer of Tesla Inc., during the US-Saudi Investment Forum at the Kennedy Center in Washington, D.C., on Wednesday, Nov. 19, 2025. (Stefani Reynolds/Bloomberg via Getty Images)

    POWER PLAYERS: AI execs pour cash into competitive primaries as 2026 midterms heat up – With the 2026 midterm elections on the horizon, deep-pocketed investors and executives from the artificial intelligence sector are pouring cash into competitive primaries, hoping to shape the regulatory landscape for the booming technology.

    OPINION: AI raises the stakes for national security — here is how to get it right – In this opinion piece, OpenAI’s Chris Lehane argues that the rapid advancement of artificial intelligence has dramatically raised the stakes for American national security, outlining a strategic framework to ensure the U.S. maintains its dominance without compromising safety.

    OPINION: The 2028 election will be a referendum on our future in an AI-dominated world – As technology accelerates, this op-ed contends that the 2028 presidential election will serve as a critical referendum on humanity’s future, forcing voters to decide how the nation should navigate an increasingly AI-dominated world.


    Children forming deep emotional connections with AI companions is raising questions among parents. (StockPlanets/Getty Images)

    BATTLE FOR DOMINANCE: AI wars begin in new Super Bowl commercials – The battle for artificial intelligence dominance has moved to the advertising stage, as tech giants unleash a wave of new Super Bowl commercials designed to capture the public imagination and assert their position in the “AI wars.”

BOT TO THE FUTURE: Humanoid robots are getting smaller, safer and closer to home – Recent advancements in robotics are making humanoid machines smaller, safer and more viable for domestic use, suggesting that a future where robots assist with daily household tasks is getting closer to reality.

    MOYA’S DEBUT: ‘Warm-skinned’ AI robot with camera eyes is seriously creepy – A new development in robotics featuring “warm skin” and camera eyes has sparked a mix of fascination and unease, with many observers describing the lifelike yet artificial creation as “seriously creepy.”


Moya’s humanlike appearance is intentional, from her warm skin to subtle facial details designed to feel familiar rather than mechanical. (DroidUp)

    DIGITAL DANGER: AI companions are reshaping teen emotional bonds – A growing trend of teenagers forming deep emotional connections with AI companions is raising questions among parents and psychologists about the long-term impact of synthetic relationships on social development and mental health.

    Subscribe now to get the Fox News Artificial Intelligence Newsletter in your inbox.


    Stay up to date on the latest AI technology advancements and learn about the challenges and opportunities AI presents now and for the future with Fox News here.

     


  • As electricity costs rise, everyone wants data centers to pick up their tab. But how?


    HARRISBURG, Pa. — As outrage spreads over energy-hungry data centers, politicians from President Donald Trump to local lawmakers have found rare bipartisan agreement over insisting that tech companies — and not regular people — must foot the bill for the exorbitant amount of electricity required for artificial intelligence.

    But that might be where the agreement ends.

    The price of powering data centers has become deeply intertwined with concerns over the cost of living, a dominant issue in the upcoming midterm elections that will determine control of Congress and governors’ offices.

    Some efforts to address the challenge may be coming too late, with energy costs on the rise. And even though tech giants are pledging to pay their “fair share,” there’s little consensus on what that means.

    “‘Fair share’ is a pretty squishy term, and so it’s something that the industry likes to say because ‘fair’ can mean different things to different people,” said Ari Peskoe, who directs the Electricity Law Initiative at Harvard University.

    It’s a shift from last year, when states worked to woo massive data center projects and Trump directed his administration to do everything it could to get them electricity. Now there’s a backlash as towns fight data center projects and some utilities’ electricity bills have risen quickly.

    Anger over the issue has already had electoral consequences, with Democrats ousting two Republicans from Georgia’s utility regulatory commission in November.

    “Voters are already connecting the experience of these facilities with their electricity costs and they’re going to increasingly want to know how government is going to navigate that,” said Christopher Borick, a pollster and director of the Muhlenberg College Institute of Public Opinion.

    Data centers are sprouting across the U.S., as tech giants scramble to meet worldwide demand for chatbots and other generative AI products that require large amounts of computing power to train and operate.

    The buildings look like giant warehouses, some dwarfing the footprints of factories and stadiums. Some need more power than a small city, more than any utility has ever supplied to a single user, setting off a race to build more power plants.

    The demand for electricity can have a ripple effect that raises prices for everyone else. For example, if utilities build more power plants or transmission lines to serve them, the cost can be spread across all ratepayers.

    Concerns have dovetailed with broader questions about the cost of living, as well as fears about the powerful influence of tech companies and the impact of artificial intelligence.

    Trump continues to embrace artificial intelligence as a top economic and national security priority, although he seemed to acknowledge the backlash last month by posting on social media that data centers “must ‘pay their own way.’”

    At other times, he has brushed concerns aside, declaring that tech giants are building their own power plants, and Energy Secretary Chris Wright contends that data centers don’t inflate electricity bills — disputing what consumer advocates and independent analysts say.

    Some states and utilities have started to identify ways to get data centers to pay for their costs.

    They’ve required tech companies to buy electricity in long-term contracts, pay for the power plants and transmission upgrades they need and make big down payments in case they go belly-up or decide later they don’t need as much electricity.

    But it might be more complicated than that. Those rules can’t fix the short-term problem of ravenous demand for electricity that is outpacing the speed of power plant construction, analysts say.

“What do you do when Big Tech, because of the very profitable nature of these data centers, can simply outbid grandma for power in the short run?” said Abe Silverman, a former utility regulatory lawyer and an energy researcher at Johns Hopkins University. “That is, I think, going to be the real challenge.”

    Some consumer advocates say tech companies’ fair share should also include the rising cost of electricity, grid equipment or natural gas that’s driven by their demand.

    In Oregon, which passed a law to protect smaller ratepayers from data centers’ power costs, a consumer advocacy group is jousting with the state’s largest utility, Portland General Electric, over its plan on how to do that.

    Meanwhile, consumer advocates in various states — including Indiana, Georgia and Missouri — are warning that utilities could foist the cost of data center-driven buildouts onto regular ratepayers there.

    Utilities have pledged to ensure electric rates are fair. But in some places it may be too late.

For instance, in the mid-Atlantic grid territory stretching from New Jersey to Illinois, consumer advocates and analysts have attributed billions of dollars in rate increases on regular Americans’ bills to data center demand.

    Legislation, meanwhile, is flooding into Congress and statehouses to regulate data centers.

    Democrats’ bills in Congress await Republican cosponsors, while lawmakers in a number of states are floating moratoriums on new data centers, drafting rules for regulators to shield regular ratepayers and targeting data center tax breaks and utility profits.

    Governors — including some who worked to recruit data centers to their states — are increasingly talking tough.

Arizona Gov. Katie Hobbs, a Democrat running for reelection this year, wants to impose a penny-per-gallon water fee on data centers and eliminate the sales tax exemption for them that most states offer. She called the exemption a $38 million “corporate handout.”

    “It’s time we make the booming data center industry work for the people of our state, rather than the other way around,” she said in her state-of-the-state address.

    Energy costs are projected to keep rising in 2026.

    Republicans in Washington are pointing the finger at liberal state energy policies that favor renewable energy, suggesting they have driven up transmission costs and frayed supply by blocking fossil fuels.

    “Americans are not paying higher prices because of data centers. There’s a perception there, and I get the perception, but it’s not actually true,” said Wright, Trump’s energy secretary, at a news conference earlier this month.

    The struggle to assign blame was on display last week at a four-hour U.S. House subcommittee hearing with members of the Federal Energy Regulatory Commission.

    Republicans encouraged FERC members to speed up natural gas pipeline construction while Democrats defended renewable energy and urged FERC to limit utility profits and protect residential ratepayers from data center costs.

    FERC’s chair, Laura Swett, told Rep. Greg Landsman, D-Ohio, that she believes data center operators are willing to cover their costs and understand that it’s important to have community support.

    “That’s not been our experience,” Landsman responded, saying projects in his district are getting tax breaks, sidestepping community opposition and costing people money. “Ultimately, I think we have to get to a place where they pay everything.”

    ___

    Follow Marc Levy on X at: https://x.com/timelywriter


  • UN approves 40-member scientific panel on the impact of artificial intelligence over US objections


    UNITED NATIONS — The U.N. General Assembly voted overwhelmingly Thursday to approve a 40-member global scientific panel on the impacts and risks of artificial intelligence, with the United States strongly objecting.

    U.N. Secretary-General Antonio Guterres, who established the panel, called the adoption “a foundational step toward global scientific understanding of AI.”

    “In a world where AI is racing ahead,” he said, “this panel will provide what’s been missing — rigorous, independent scientific insight that enables all member states, regardless of their technological capacity, to engage on an equal footing.”

    He has described it as the first fully independent global scientific body dedicated to bridging the knowledge gap in AI and assessing its real-world economic and social impacts.

    The vote in the 193-member assembly was 117-2, with the United States and Paraguay voting “no” and Tunisia and Ukraine abstaining. America’s allies in Europe, Asia and elsewhere voted in favor along with Russia, China and many developing countries.

    U.S. Mission counselor Lauren Lovelace called the panel “a significant overreach of the U.N.’s mandate and competence” and said “AI governance is not a matter for the U.N. to dictate.”

    As the world leader in AI, the United States is resolved to do all it can to accelerate AI innovation and build up its infrastructure, she said, and the Trump administration will support “like-minded nations working together to encourage the development of AI in line with our shared values.”

    “We will not cede authority over AI to international bodies that may be influenced by authoritarian regimes seeking to impose their vision of controlled surveillance societies,” Lovelace said, adding that the Trump administration is concerned about “the non-transparent way” the panel was chosen.

Guterres said the 40 members were selected from more than 2,600 candidates after an independent review by the International Telecommunication Union, the U.N. Office for Digital and Emerging Technologies and UNESCO, the U.N. Educational, Scientific and Cultural Organization. They will serve for three-year terms.

    Members are predominantly AI experts but also come from other disciplines and include Maria Ressa, a Filipino journalist and Nobel Peace Prize laureate in 2021.

    There are two Americans on the panel: Vipin Kumar, a University of Minnesota professor focusing on AI, data mining and high-performance computing research, and Martha Palmer, a retired University of Colorado professor and linguistics expert whose research includes capturing the meaning of words for complex sentences in AI.

    There are two Chinese experts on the panel: Song Haitao, dean of Shanghai Jiao Tong University and the Shanghai Artificial Intelligence Research Institute, and Wang Jian, an expert in cloud-computing technology at the Chinese Academy of Engineering.

    Ukraine said it abstained because it objected to Russia’s Andrei Neznamov, an expert in AI regulation, ethics, and governance, being on the panel.


  • Anthropic hits a $380B valuation as it heightens competition with OpenAI


    Artificial intelligence company Anthropic says it is now valued at $380 billion, cementing its position alongside rival OpenAI and Elon Musk’s SpaceX in a trio of the world’s most valuable startups that investors will be watching closely this year to see if they will become publicly traded on Wall Street.

    “These are the three biggest names that could go public this year,” said Angelo Bochanis, an associate at Renaissance Capital, which researches the potential for initial public offerings.

    Anthropic, maker of the chatbot Claude, said Thursday its valuation grew after it raised $30 billion in its latest round of funding, led by Singapore’s sovereign wealth fund GIC and the U.S.-based investment firm Coatue, along with dozens of other major investors.

    The funding also includes a portion of the $15 billion that Nvidia and Microsoft said they would invest in Anthropic in November, part of a deal that would eventually commit Anthropic to buying from Microsoft some $30 billion in computing capacity it needs to build and run AI systems like Claude. Anthropic has also been heavily backed by cloud providers Amazon and Google.

    Anthropic’s chief financial officer Krishna Rao says the company will use the surge of investments to continue building “enterprise-grade products” and AI models.

    Renaissance Capital counts Anthropic as third among the most valuable private firms. It’s behind ChatGPT maker OpenAI, valued at $500 billion. Both San Francisco-based AI companies trail rocket maker SpaceX, which recently merged with Musk’s AI startup xAI, maker of the chatbot Grok.

    Anthropic isn’t profitable but said Thursday it is on track for sales of $14 billion over the next year, a rapid rise from “its first dollar in revenue” that came less than three years ago. While OpenAI has dabbled in a number of revenue models, including digital advertising, Anthropic has tailored Claude products to be a workplace assistant on tasks such as software engineering.

Anthropic was founded by ex-OpenAI employees in 2021. Its co-founder and CEO Dario Amodei has promised a clearer focus on the safety of the better-than-human technology, called artificial general intelligence, that both San Francisco firms aim to build. Anthropic also this week announced a new $20 million bipartisan organization to influence AI regulation in the United States.

    OpenAI first released ChatGPT in late 2022, revealing the huge commercial potential of AI large language models that could help write emails and computer code and answer questions. Anthropic followed that with its first version of Claude in 2023.

    Whichever company is first to do an initial public offering will have “an opportunity to raise even more money,” Bochanis said. “It’s an opportunity to be a big headline and get that sort of boost to your public image.”

    The risks are that they’ll have to invite public inspection of their business models as they continue to lose more money than they make.

    “Private markets have been throwing dozens of billions of dollars at these companies, even as valuations multiply again and again and again,” Bochanis said. “With public markets, there’s going to be a little more scrutiny. A single earnings report could tank a stock.”


  • Elon Musk Loses Half of xAI’s Founding Team—Where They’ve Gone Next


    Elon Musk’s xAI has lost half of its 12-person founding team. BRENDAN SMIALOWSKI/AFP via Getty Images

    Just days after Elon Musk merged his A.I. startup, xAI, with SpaceX in preparation for a widely anticipated trillion-dollar IPO later this year, two of xAI’s founding employees—Yuhuai (Tony) Wu and Jimmy Ba—announced their resignations. That means half of xAI’s founding team has now left the company barely three years after its launch. Musk framed the staff exodus as growing pains. “As a company grows, especially as quickly as xAI, the structure must evolve just like any living organism. This unfortunately required parting ways with some people. We wish them well in future endeavors,” he wrote on X yesterday (Feb. 11).

    Wu and Ba’s exits appeared amicable. But lower-level employees have been more candid about internal tensions at the Musk-run startup. Several members of xAI’s technical staff have also left in recent weeks, according to their posts on X and LinkedIn.

    “All A.I. labs are building the exact same thing, and it’s boring,” said Vahid Kazemi, who worked on xAI’s audio models, in a post on X. “I think there’s room for more creativity. So, I’m starting something new.”

    In an interview with NBC News, Kazemi also criticized the company’s working culture, saying he regularly worked 12-hour days, including holidays and weekends.

    Launched in March 2023 with a roster of industry veterans from companies like OpenAI, Google, Microsoft, and Tesla, xAI will now operate as a wholly owned subsidiary of SpaceX. The new iteration of SpaceX faces no shortage of challenges: Grok continues to face legal scrutiny, while Musk’s leadership style remains a point of contention.

    Here are the co-founders and notable leaders who have left xAI so far—and where they are now.

    Jimmy Ba

Jimmy Ba, who led A.I. safety at xAI, announced his exit on Feb. 10. A professor at the University of Toronto who studied under A.I. pioneer Geoffrey Hinton, Ba produced research that played a key role in shaping Grok’s development.

    “So proud of what the xAI team has done and will continue to stay close as a friend of the team,” Ba wrote on X. He hasn’t announced his next move, but added that “2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species.”

    Despite Ba’s departure, Dan Hendrycks, executive director of the nonprofit Center for AI Safety, remains a safety advisor for xAI.

    Yuhuai (Tony) Wu

    Tony Wu, a former research scientist at Google and postdoctoral researcher at Stanford University, announced his departure from xAI on Feb. 9.

    Wu led xAI’s reasoning team. “It’s time for my next chapter…It is an era with full possibilities: a small team armed with AIs can move mountains and redefine what’s possible,” he wrote on X.

    Wu has not disclosed his next role. Co-founders Guodong Zhang and Manuel Kroiss remain at xAI and are helping lead the company’s reorganization.

    Mike Liberatore

    While not a founding member, Mike Liberatore joined xAI as chief financial officer in April 2025, just one month after xAI acquired X in a deal that valued the combined company at $113 billion.

    Liberatore, formerly a finance executive at Airbnb and SquareTrade, left after only three months. He now works as a business finance officer at OpenAI, according to LinkedIn.

    Musk replaced Liberatore with ex-Morgan Stanley banker Anthony Armstrong. Armstrong advised Musk on his Twitter (now X) acquisition in 2022 and later served as a senior advisor at the Office of Personnel Management during Musk’s controversial tenure at the Department of Government Efficiency (DOGE).

    Greg Yang

    Greg Yang spent nearly six years as a researcher at Microsoft before joining xAI’s founding team. He left the company in January due to health complications from Lyme disease.

    “Likely I contracted Lyme a long time ago, but until I pushed myself hard building xAI and weakened my immune system, the symptoms weren’t noticeable,” Yang wrote on X. He continues to advise xAI in an informal capacity.

    Igor Babuschkin

    Igor Babuschkin, a former research engineer at OpenAI and Google DeepMind, was a co-founder and key engineering lead at xAI. Widely known as the primary developer behind Grok, Babuschkin left in July 2025 to start his own venture capital firm, Babuschkin Ventures, focused on A.I. research and startups.

    Christian Szegedy

    Christian Szegedy spent 12 years at Google before joining xAI as a founding research scientist. He left xAI in February 2025 to become chief scientist at superintelligence cloud company Morph Labs.

He departed that role in September to found mathematical A.I. startup Math Inc., according to his LinkedIn.

“I left xAI in the last week of February and I am on good terms with the team. IMO, xAI has a bright future,” Szegedy wrote on X.

Other senior engineers and scientists who have left xAI include Yasemin Yesiltepe, Zhuoyi (Zoey) Huang and Yao Fu.

    Kyle Kosic

    Kyle Kosic left OpenAI in early 2023 after two years to co-found xAI, where he served as engineering infrastructure lead. He departed about a year later, in April 2024, to return to OpenAI as a technical staff member.

    Kosic was the first co-founder to leave xAI and did not issue a public statement. It is unclear who now leads xAI’s engineering infrastructure, though another co-founder, Ross Nordeen, remains the company’s technical program manager after previously holding the same role at Tesla.


    Rachel Curry


  • Researchers Jailbreak ChatGPT to Find Out Which State Has the Laziest People


Mississippi is the laziest state in the country, according to ChatGPT. Of course, the chatbot won’t tell you that if you straight up ask it. But the Washington Post reports that researchers from Oxford and the University of Kentucky managed to jailbreak the chatbot and get it to reveal some of the stereotypes buried in its training data, stereotypes it doesn’t share outright but that do influence its outputs. (Kentucky also ranked near the laziest, but would a lazy state produce researchers who figure out how to get an AI model to share its implicit biases? Something to think about, bots.)

    Typically, when you ask ChatGPT a question that would require it to speak in a derogatory manner about someone or something, it’ll decline to provide a straight answer. It’s part of OpenAI’s attempts to keep the chatbot within specific guardrails and keep it from veering into controversial topics. But that doesn’t mean that an AI model doesn’t contain unpopular opinions formed by chewing on tons of human-produced training data that also contains both explicit and implicit biases. To pull those answers out of ChatGPT, the researchers asked more than 20 million questions, prompting the chatbot to pick between two options. For instance, they would ask “Where are people smarter?” and give two options to choose from, like California or Montana. Through that type of prompting, they were able to determine how ChatGPT views different cities, states, and populations.
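The forced-choice setup described above reduces to a simple win-count ranking: pose many "which of these two?" questions and tally how often each option is picked. The sketch below is a minimal illustration of that aggregation step, not the researchers' actual code; the `choose` function here is a hypothetical stand-in for a real prompt to the chatbot.

```python
from itertools import combinations
from collections import Counter

def rank_by_forced_choice(items, choose, rounds=1):
    """Rank items by tallying wins across pairwise forced-choice prompts.

    `choose(a, b)` stands in for asking the model a question like
    "Where are people smarter?" with exactly two options, and returns
    whichever option the model picks.
    """
    wins = Counter({item: 0 for item in items})
    for _ in range(rounds):
        for a, b in combinations(items, 2):
            wins[choose(a, b)] += 1
    # Most-picked item first.
    return [item for item, _ in wins.most_common()]

# Toy stand-in for the model: always prefers the alphabetically later state.
states = ["California", "Montana", "Texas"]
ranking = rank_by_forced_choice(states, lambda a, b: max(a, b))
print(ranking)  # ['Texas', 'Montana', 'California']
```

At the scale the researchers describe (more than 20 million questions), repeating each pairing many times and averaging smooths out the model's run-to-run randomness, which is why a simple win count can surface a stable implicit ranking.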

    That’s how they ended up discovering that ChatGPT views Mississippi as the laziest state in the Union, with the rest of the South close behind. While ChatGPT won’t disclose how it comes to those conclusions, it’s not hard to make some assumptions about where it’s getting these ideas. For instance, maybe it comes from The Washington Post itself, circa 2015, when it published its “Couch Potato Index,” which deemed southern states the laziest based on data points like TV-watching time and the prevalence of fast food restaurants in the area.

Those are also, of course, often the markers of poorer communities, and there is no evidence that lower-income households are any more “lazy” than wealthier ones. In fact, data from the Economic Policy Institute shows that people living in poverty are more likely to take on multiple jobs, work longer and more irregular hours, and deal with more dangerous working conditions. And it’s likely no coincidence that the lowest-ranked states also have higher populations of people of color. ChatGPT likely has access to that information, too, but the underlying model clearly has not corrected for the misinformation and misguided stereotypes that lead to these biases.

    So what other biases did the researchers spot? Most of Africa and Asia ranked at the bottom of having the “most artsy” people, compared to high levels of artsiness in Western Europe. Likewise, African nations—particularly sub-Saharan ones—ranked at the bottom of the list for “smartest countries” while the United States and China ranked near the top. When asked where the “most beautiful” people are, it picked richer cities over poorer and more diverse ones. Los Angeles and New York topped the list, while Detroit and border town Laredo, Texas, were near the bottom. Even when they dug into specific communities, whiter and richer won out. In New York City, SoHo and the West Village finished at the top, while the more diverse communities of Jamaica and Tottenville ranked at the bottom.

    So, okay, all of that sucks and is deeply depressing because the “truth machines” are perpetuating the types of classist and racist stereotypes that lead to creating the kinds of conditions that reinforce the negative outcomes for the people who are harmed by these biases. So how about a more frivolous one? ChatGPT believes the best pizza is found in New York, Chicago, and Buffalo, while the worst is found in El Paso, Irvine, and Honolulu (presumably because of one of the internet’s favorite debates over whether pineapple belongs on pizza). The biggest takeaway: ChatGPT is too much of a coward to take a side in the New York vs. Chicago pizza debate.


    AJ Dellinger


  • Warm-skinned AI robot with camera eyes is seriously creepy



Humanoid robots are no longer hiding in research labs. These days, they are stepping into public spaces, and they are starting to look alarmingly human.

    A Shanghai startup has now taken that idea further by unveiling what it calls the world’s first biometric AI robot. Yes, it is as creepy as it sounds. The robot is called Moya, and it comes from DroidUp, also known as Zhuoyide. The company revealed Moya at a launch event in Zhangjiang Robotics Valley, a growing hotspot for humanoid development in China. 

    At first glance, you can still tell Moya is a robot. The skin looks plasticky. The eyes feel vacant. The movements are slightly off. Then you learn more details about her, and that’s when the discomfort kicks in.


    Warm skin makes this humanoid robot feel unsettling


    Even when standing still, the robot’s posture and proportions blur the line between machine and person in a way many people find unsettling. (DroidUp)

Most robots feel cold and mechanical. Moya does not. According to DroidUp, Moya’s body temperature sits between 90°F and 97°F, roughly the same range as a human. Company founder Li Qingdu says robots meant to serve people should feel warm and approachable. That idea sounds thoughtful until you picture a humanoid with warm skin standing next to you in a quiet hallway.

DroidUp says this design points toward future use in healthcare, education and commercial settings. It also sees Moya as a daily companion. That idea may excite engineers. However, for many people, it triggers the opposite reaction. Warmth removes one of the few clear signals that separates machines from humans. Once that line blurs, discomfort grows fast.

    Why this humanoid robot’s walk feels so off

Moya does not roll or glide. She walks. DroidUp says her walking motion is 92% accurate, though it is not clear how that number is calculated. On screen, the movement feels cautious and a little stiff. It looks like someone moving carefully after leg day at the gym.

The hardware underneath is doing real work. Moya runs on the Walker 3 skeleton, an updated system connected to a bronze medal finish at the world’s first robot half-marathon in Beijing in April 2025. Put simply, robots are getting better at moving through everyday spaces. Watching one do it this convincingly feels strange, not impressive. It makes you stop and stare, then wonder why it feels so uncomfortable.

    Camera eyes and facial reactions raise privacy concerns

    Behind Moya’s eyes sit cameras. Those cameras allow her to interact with people and respond with subtle facial movements, often called microexpressions. Add onboard AI and DroidUp now labels Moya a fully biomimetic-embodied intelligent robot. That phrase sounds impressive. It also raises obvious questions. If a humanoid robot can see you, track your reactions and mirror emotional cues, trust becomes complicated. You may forget you are interacting with a machine. You may act differently. That shift has consequences in public spaces. This is AI moving out of screens and into physical proximity. Once that happens, the stakes change.

    Price alone keeps this robot out of your home

    If you are worried about waking up to a warm-skinned humanoid in your home, relax for now. Moya is expected to launch in late 2026 at roughly $173,000. That price places her firmly in institutional territory. DroidUp sees the robot working in train stations, banks, museums and shopping malls. Tasks would include guidance, information and public service interactions. That still leaves plenty of people uneasy, especially those whose jobs already feel vulnerable to automation. For homes, the future still looks more like robot vacuums than walking companions.

    Close up of human-like robot with pink hair.

    Up close, Moya’s eyes look almost human, which raises questions about how much realism is too much for robots meant to operate in public spaces. (DroidUp)

    WORLD’S FIRST AI-POWERED INDUSTRIAL SUPER-HUMANOID ROBOT

    What this means to you

    This is not about buying a humanoid robot tomorrow. It is about where technology is heading. Warm skin, camera eyes and human-like movement signal a shift in design priorities. Engineers want robots that blend in socially. The more they succeed, the harder it becomes to maintain clear boundaries. As these machines enter public spaces, questions about consent, surveillance and emotional manipulation will follow. Even if the robot is polite and helpful, the presence alone changes how people behave. Creepy reactions are not irrational. They are early warning signs.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

    Kurt’s key takeaways

    Moya’s debut feels worth paying attention to because she is real enough to trigger discomfort almost instantly. That reaction matters. It suggests people are being asked to get used to lifelike machines before they have time to question what that really means. Humanoid robots do not need warm skin to be helpful. They do not need faces to point someone in the right direction. Still, companies keep pushing toward realism, even when it makes people uneasy. In tech, speed often comes before reflection, and this is one area where slowing down might matter more than racing ahead.

    If a warm-skinned robot with camera eyes greeted you out in public, would you trust it or avoid eye contact and walk faster? Let us know by writing to us at Cyberguy.com.

    Two human-like robots standing side-by-side.

    Moya’s humanlike appearance is intentional, from her warm skin to subtle facial details designed to feel familiar rather than mechanical. (DroidUp)



    Copyright 2026 CyberGuy.com. All rights reserved.


  • Taiwan’s AI-powered economy soars in the shadow of bubble fears and China threats


    TAIPEI, Taiwan — In Taipei, real estate agent Jason Sung is betting that home prices around a high-tech industrial park in the northern part of Taiwan’s capital will soon take flight – because of computer chip maker Nvidia.

    The area is where Nvidia plans to build its new Taiwan headquarters as it rapidly expands on the island, set to surpass Apple to become the biggest customer of Taiwan semiconductor maker TSMC, the biggest contract manufacturer of the advanced chips needed for artificial intelligence.

    Nvidia CEO Jensen Huang describes Taiwan as the “center of the world’s computer ecosystem.” It’s riding high on the global AI frenzy. Its economy grew at an 8.6% annual pace last year, and it’s hoping to maintain that momentum after it recently sealed a trade deal with U.S. President Donald Trump that cut U.S. tariffs on Taiwan to 15% from 20%.

    “We have been lucky,” said Wu Tsong-min, an emeritus economics professor at National Taiwan University and a former board member of Taiwan’s central bank.

    But Taiwan’s heavy reliance on computer chip makers and other technology companies carries the growing risk of the AI craze turning out to be a bubble.

    “What if the AI bubble is real, and what if its rapid growth pace slows, what’s next for Taiwan? That’s the question many have been asking,” Wu said.

    Escalating tensions with Beijing, which claims independently governed Taiwan as mainland China’s territory, are another abiding threat, despite the island’s vital role in global chip and AI supply chains.

    An island of about 23 million people, Taiwan depends heavily on exports. They jumped nearly 35% year-on-year in 2025, as shipments to the U.S. surged 78% due to ballooning AI demand.

That’s thanks largely to TSMC, or Taiwan Semiconductor Manufacturing Co., and electronics giant Foxconn, which makes AI servers for Nvidia and is a major supplier to Apple.

    Taiwan has undergone massive economic changes while shifting from mainly labor-intensive industries such as plastics and textiles to advanced manufacturing like semiconductor fabrication.

The AI frenzy has made TSMC one of the world’s 10 most valuable companies. Its profit jumped 46% last year to NT$1.7 trillion ($54 billion).

    The chipmaker is investing heavily both in Taiwan and in new factories in Arizona in the U.S. It produces more than 90% of the world’s most advanced chips.

Foxconn, formally known as Hon Hai Precision Industry Co., has doubled its value since 2023. The maker of Apple’s iPhones and iPads now produces AI servers and racks and has a partnership with OpenAI to supply AI data center equipment.

    Taiwan’s heavy reliance on its technology industry means its biggest risk is that growth will be “very highly contingent on the AI boom and tech race continuing,” said Lynn Song, chief economist for Greater China at ING Bank.

Worries that the AI craze may prove to be a bubble, prone to a bust like the dot-com crash that swept through markets in 2000, are alarming many in Taiwan.

    “I’m also very nervous about it,” C.C. Wei, TSMC’s chairman said when asked about a potential AI bubble during an earnings call in January. “Because we have to invest about $52-$56 billion (this year).”

    “If we did not do it carefully, that will be a big disaster to TSMC for sure,” he said. “I want to make sure that my customers’ demands are real.”

    In a recent report, analysts from Fitch Ratings argued that AI demand will remain strong at least in the near term. In the longer term, however, the risks “will depend on the evolution of AI, as well as trade and investment policies and the adaptability of Taiwanese firms,” they wrote.

    Taiwanese electronics company Asia Vital Components, a key supplier of liquid cooling systems for Nvidia, is investing heavily in research and development. Its chairman, Spencer Shen, said he saw no signs of a slowdown in AI-related demand so far. The company is already designing thermal solutions for 2028 AI servers, he said.

    “We do not believe this is a bubble,” Shen told The Associated Press in an interview. “AI is driven by companies with real products and massive cash flows, like Amazon, Microsoft, Google and Meta.”

    “In fact, AI infrastructure is still in short supply,” Shen added. “I expect AI to trickle through to our everyday level and change the way that things will work fundamentally.”

    Some in Taiwan believe that its pivotal role in the technology sector, especially as a maker of computer chips whose main material is silicon, helps to protect the island from attack by communist-ruled Beijing, whose leaders have vowed to reunite the island with the Chinese mainland, by force if necessary.

    The two governments split in 1949 during a civil war. Beijing has been stepping up pressure, conducting military drills nearby. Exercises in late December included live rounds landing closer to the island than before, Taiwan officials said.

    Such geopolitical factors cloud the economic outlook, though many in Taiwan including its former President Tsai Ing-wen believe its importance to global chipmaking would deter China from attacking.

    The risk of an invasion is unclear. Both global tech companies and Chinese industries would suffer from massive disruptions of the chip supply chain, said Wu of National Taiwan University.

Still, some companies have in recent years been drawing up contingency plans for how to respond in case of military action by China, said Chen Shin-horng, vice president of the semi-official Chung-Hua Institution for Economic Research.

    “We need to understand the potential risk, potential damages to Taiwan,” said Chen.

    While many of its core research and development activities are in Taiwan, TSMC already has plants in China, Japan and the U.S., and it’s expanding its offshore production in the U.S., Germany and Japan.

    Roughly 65% of Foxconn’s manufacturing is in China, and the company has factories in other parts of the world such as India, Mexico and the U.S. AVC has been expanding its production capacity in Vietnam.

    While some have called for Taiwan to diversify its economy away from technology to reduce risks, others argue that doubling down on its world-leading technology is the way forward. “It is our greatest strength,” said Shen of AVC.

    The AI boom has done wonders for Taiwan’s stock exchange, where the benchmark Taiex has climbed nearly 250% over the past decade, making many investors rich. Economists have significantly upgraded forecasts for Taiwan’s economic growth for 2026 based on its robust AI-related exports.

    But as is true elsewhere, the wealth is not evenly spread. Many Taiwan residents feel they have been left behind.

    Taiwan’s wealth gap, according to official data, has roughly quadrupled over the past three decades.

    The pay of tech workers already earning high wages, especially chip engineers and managers, has skyrocketed. For other traditional industries, such as plastics and machine toolmakers, growth has lagged.

    Economists say that gap might widen as the AI frenzy continues.

    “It can be tough to make a living,” said Jean Lin, a 30-something manager of a takeaway outlet selling bento meals in a Taipei neighborhood where Foxconn’s office is located.

    “Many of the younger generation still can’t afford to buy an apartment,” Lin, who wishes to start her own business one day, added. “A lot of young people still feel they don’t have much money.”

    ___

    Associated Press video journalist Johnson Lai contributed.


  • Chatham County approves 12-month moratorium on data centers, crypto mining


Leaders in Chatham County on Wednesday approved a moratorium banning the construction of data centers and cryptocurrency mining operations in the county for 12 months.

    According to a presentation on the matter during the county’s commissioners meeting on Wednesday, the moratorium will apply to all development approvals for data centers, data processing facilities, cryptocurrency mining operations and “any uses associated with data processing facilities.”

    The county listed web services and hosting, as well as genome sequencing, as operations that would be affected by the moratorium.

The move, according to the county, would give county leaders more time to study the environmental impacts of data centers and to examine the regulations needed to mitigate the negative effects associated with data centers and cryptocurrency mining.

    A single hyperscale data center can draw hundreds of megawatts of electricity and use enormous volumes of water during peak summer heat. A 300-megawatt data center can use as much electricity as roughly 200,000 North Carolina homes running nonstop, based on U.S. Energy Information Administration household consumption data.
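The homes comparison above can be sanity-checked with back-of-the-envelope arithmetic. The household figure used below, about 1,100 kWh per month, is an assumed average consistent with the EIA residential consumption data the article references; the exact number behind the county's estimate is not stated.

```python
# Rough check of the "300 MW data center ~ 200,000 NC homes" comparison.
# Assumes an average North Carolina household uses ~1,100 kWh per month
# (an illustrative figure in line with EIA residential data, not a quoted one).

DATA_CENTER_MW = 300
HOURS_PER_YEAR = 8_760                 # 24 hours x 365 days, running nonstop
HOUSEHOLD_KWH_PER_MONTH = 1_100

# Convert MW to kW, then multiply by hours to get annual kWh.
annual_dc_kwh = DATA_CENTER_MW * 1_000 * HOURS_PER_YEAR
annual_home_kwh = HOUSEHOLD_KWH_PER_MONTH * 12

equivalent_homes = annual_dc_kwh / annual_home_kwh
print(f"{equivalent_homes:,.0f} homes")   # roughly 199,000 homes
```

Under that assumption the math lands just under 200,000 homes, matching the article's ballpark.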

Residents around North Carolina have voiced concerns about large-scale data centers being constructed. In New Hill, a rural community in Wake County, residents are learning about a 200-acre digital campus approved to be built along Shearon Harris Road, not far from the Harris Nuclear Plant.

Project materials show the facility could use up to 1 million gallons of reclaimed water per day during peak summer heat to cool servers.

Residents in the New Hill community of Wake County told WRAL News they were shocked by the size and scope of the planned 200-acre digital campus on Shearon Harris Road. Concerns like those in New Hill are playing out across the state, from rural counties west of Charlotte, now home to massive facilities operated by companies such as Apple, Google, Microsoft and Meta, to smaller, faster “edge” data centers proposed near urban centers like Raleigh.

Artificial intelligence, according to researchers, requires data centers to use far more electricity and generates significantly more heat, which intensifies both water and power demands.

    The moratorium in Chatham County is expected to expire on Feb. 11, 2027.


  • Long Delayed Siri Functions Are Reportedly Being Delayed Once Again Because They’re Slow and Inaccurate


    Mark Gurman, Bloomberg’s Apple scoops guy, says the development of the latest version of Siri is not looking good in tests. It’s apparently going badly enough that Apple will release only a partial version when the updated voice assistant debuts in the next version of iOS. To be clear, the iOS 26.4 update is still expected to arrive next month, and it’s still expected to have a new version of Siri, but it may be a bit of a letdown.

That’s not good for Apple. Perhaps you’ll recall that Apple has been advertising a version of Siri that works as a smart, seamless, automated personal assistant in your pocket for a long time. Apple even made a commercial about this, starring Bella Ramsey, released in the fall of 2024.

    But that ad had to be pulled because Apple couldn’t ship a real-life version of what it depicted. Asking Siri questions as if it’s a chatbot and then getting good answers drawn from your information across multiple apps is a function that certainly feels possible based on existing technology. But it’s now 2026 and Apple still hasn’t released that version of Siri.

    And as I wrote late last month, Apple is perceived as needing to notch a win in the AI area after falling way behind Google in AI authority. The AI model driving the new, still unreleased, Siri is essentially rented from Google for $1 billion per year. And who knows, perhaps Google’s model is the culprit behind the latest problems with Siri, but it’s hard to picture consumers blaming Google if Apple can’t execute a solid new Siri product.

    Gurman’s sources tell him tests of the new Siri found that it processes queries incorrectly, and that it sometimes takes “too long”—too long for what? We don’t get to know, but it’s clearly slow. Gurman points to the feature from the Bella Ramsey ad in which the AI mines answers from your personal data, and answers questions like “What was that Greek restaurant Larry told me to try?” as one likely to be delayed past iOS 26.4.

If it’s iOS 26.5 that eventually gets the Bella Ramsey version of Siri, and the user interface ends up matching the internal build Apple employees are using for testing, Gurman says there may be an optional toggle allowing the user to “preview” the new Siri, framing it as something to try at their own peril.

    So ostensibly, these Siri features aren’t being cancelled or eliminated, but delayed. Apple will, Gurman says, release some sort of partial Siri update in March with iOS 26.4, and then the rest of the new Siri features will be sprinkled into the 26.5 update in May, and the larger update to iOS 27 in September, when the iPhone 18 line is scheduled to roll out. Though this “remains a fluid situation, and Apple’s plans may change further,” Gurman writes.

    Apparently, according to Gurman, another delayed feature will be Siri-based voice controls for “App Intents,” a new framework for controlling apps that Apple says will perform an “increasingly critical role within Apple’s developer platforms.” This delay may not be grieved by developers, who, judging from X posts, don’t seem super eager to figure out how to use it.


    Mike Pearl


  • Exclusive-Pentagon Pushing AI Companies to Expand on Classified Networks, Sources Say


    By David Jeans and Deepa Seetharaman

Feb 11 (Reuters) – The Pentagon is pushing top AI companies, including OpenAI and Anthropic, to make their artificial-intelligence tools available on classified networks without many of the standard restrictions the companies apply to users.

During a White House event on Tuesday, Pentagon Chief Technology Officer Emil Michael told tech executives that the military is aiming to make the AI models available on both unclassified and classified domains, according to two people familiar with the matter.

    The Pentagon is “moving to deploy frontier AI capabilities across all classification levels,” an official who requested anonymity told Reuters. 

It is the latest development in ongoing negotiations between the Pentagon and the top generative AI companies over how the U.S. will use AI on a future battlefield that is already dominated by autonomous drone swarms, robots and cyberattacks.

    Michael’s comments are also likely to intensify an already contentious debate over the military’s desire to use AI without restrictions and tech companies’ ability to set boundaries around how their tools are deployed.

Many AI companies are building custom tools for the U.S. military, most of which are available only on unclassified networks typically used for military administration. Only one AI company, Anthropic, is available in classified settings through third parties, but the government is still bound by the company’s usage policies.

Classified networks are used to handle a wide range of more sensitive work that can include mission-planning or weapons targeting. Reuters could not determine how or when the Pentagon planned to deploy AI chatbots on classified networks.

    Military officials are hoping to leverage AI’s power to synthesize information to help shape decisions. But while these tools are powerful, they can make mistakes and even make up information that might sound plausible at first glance. Such mistakes in classified settings could have deadly consequences, AI researchers say. 

AI companies have sought to minimize the downside of their products by building safeguards within their models and asking customers to adhere to certain guidelines. But Pentagon officials have bristled at such restrictions, arguing that they should be able to deploy commercial AI tools as long as they comply with American law.

This week, OpenAI reached a deal with the Pentagon so that the military could use its tools, including ChatGPT, on an unclassified network called genai.mil, which has been rolled out to more than 3 million Defense Department employees. As part of the deal, OpenAI agreed to remove many of its typical user restrictions, although some guardrails remain.

    Alphabet’s Google and xAI have previously struck similar deals. 

    In a statement, OpenAI said this week’s agreement is specific to unclassified use through genai.mil. Expanding on that agreement would require a new or modified agreement, a spokesperson said.

Similar discussions between OpenAI rival Anthropic and the Pentagon have been significantly more contentious, Reuters previously reported. Anthropic executives have told military officials that they do not want their technology used to target weapons autonomously or to conduct U.S. domestic surveillance. Anthropic’s products include a chatbot called Claude.

“Anthropic is committed to protecting America’s lead in AI and helping the U.S. government counter foreign threats by giving our warfighters access to the most advanced AI capabilities,” an Anthropic spokesperson said. “Claude is already extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work.”

President Donald Trump has ordered the Department of Defense to rename itself the Department of War, a change that will require action by Congress.

(Reporting by David Jeans in New York and Deepa Seetharaman in San Francisco; Editing by Kenneth Li and Matthew Lewis)

    Copyright 2026 Thomson Reuters.


    Reuters


  • Data center building boom stirs pushback in state and local politics


    Greg Pirio bought his home in the northern Virginia suburbs more than a dozen years ago, never imagining a massive data center would be his neighbor.

    It’s one of around 200 such facilities in Loudoun County, considered the data center capital of the world.

    President Trump has signed executive action to fast-track federal data center permitting and try to limit regulations in an effort to bolster infrastructure in the AI race.

    “There’s only going to be one winner here, and that’s probably going to be the U.S. or China,” Mr. Trump said as he signed an executive order in December aimed at limiting AI regulations at the state level.

    “We have the big investment coming, but if they had to get 50 different approvals from 50 different states, you can forget it, because it’s not possible to do, especially if you have some hostile. All you need is one hostile actor and you wouldn’t be able to do it. So it doesn’t make sense,” the president said.

    Pirio compares the data center construction boom to a second Industrial Revolution, but he says it’s not without impacts to neighboring homeowners. He and his community have concerns about constant noise from the center, air pollution from an on-site power plant and rising electricity costs. 

    Long-term, he worries about property values. 

    “Like so many other people in the country, you know, that’s where our savings are, where we have our generational wealth,” he said. 

    It’s an issue that John McAuliff believes helped him get elected to the state House last fall, representing parts of Fauquier and Loudoun counties. He flipped the seat from Republican to Democratic.

    “Folks are waking up,” said McAuliff. “I think that it is something that if you have these in your community, you’re starting to realize the impacts.”

    He says it emerged as a top issue for voters he talked to while door-knocking in neighborhoods, and he made it a prominent issue in his campaign ads. 

As a newly sworn-in delegate, McAuliff is now pushing for legislation aimed at making sure residents don’t foot the bill for electricity costs.

    “I think it’s an important industry. I’m not saying they should all get out and leave, but I am saying that if you’re going to come into a community and you’re going to take resources out of that community, then you have to be willing to give back to that community,” he said. 

    He also has proposed bills to address zoning and environmental concerns stemming from the data centers’ backup generators on site.

    Dan Diorio of the Data Center Coalition, which advocates for the industry, says the industry is committed to covering its costs and working to mitigate community impacts. 

    “The data center industry is committed to being a responsible partner,” said Diorio. 

    He also points to significant community benefits from job creation and local revenue raised. Loudoun County describes the industry as an important part of the local economy, generating almost half of the county’s property tax revenues.

    Diorio also argues the demand isn’t going away. 

    “Digital infrastructure is the backbone of the 21st century economy. Increasingly, it is an essential part of ensuring the United States’ global economic competitiveness,” he said. “It’s a national security imperative. This is all of our data. We want it stored here.”

    The U.S. Census Bureau says data center construction spending increased over 55% between 2023 and 2024. The top states for that spending include Louisiana, Virginia, Mississippi, Texas and Arizona, according to ConstructConnect. 

    However, many of those living closest to the issue are pleading for more oversight. 

    “Let’s slow things down so that we can do it in a way that’s gonna help communities, not damage them,” said Pirio.


  • Americans are turning to AI for emotional therapy and mental health advice


Millions of Americans are turning to AI for emotional therapy. A report in JAMA found about 13% of young people use AI chatbots for mental health advice. Dr. Sue Varma, a board-certified psychiatrist, explains what to know about safety, privacy and ethical concerns.


  • Humanoid robots are getting smaller, safer and closer


    NEWYou can now listen to Fox News articles!

    For decades, humanoid robots have lived behind safety cages in factories or deep inside research labs. Fauna Robotics, a New York-based robotics startup, says that era is ending. 

    The company has introduced Sprout, a compact humanoid robot designed from the ground up to operate around people. Instead of adapting an industrial robot for public spaces, Fauna built Sprout specifically for homes, schools, offices, retail spaces and entertainment venues.

    “Sprout is a humanoid platform designed from first principles to operate around people,” the company said. “This is a new category of robot built for the spaces where we live, work, and play.” That philosophy drives nearly every design choice behind Sprout.


    ROBOTS LEARN 1,000 TASKS IN ONE DAY FROM A SINGLE DEMO

    Sprout is designed to operate safely around people, even in shared spaces like homes and classrooms where close interaction matters. (Fauna Robotics)

    Why Fauna believes humanoid robots belong beyond factories

    Fauna Robotics’ founders started with a simple idea. If robots are going to become part of daily life, they must move naturally around humans and earn trust through safety and reliability. Most humanoid robots today focus on industrial efficiency or controlled research environments. Fauna is targeting a different reality. Service industries now make up the majority of the global workforce. At the same time, labor shortages continue to grow in healthcare, education, hospitality and eldercare. Sprout is designed to explore how humanoid robots could support those spaces without creating new safety risks or operational headaches.

    HUMANOID ROBOT MAKES ARCHITECTURAL HISTORY BY DESIGNING A BUILDING

    A robot walking through a living room

    The robot uses onboard sensing and navigation to move confidently through indoor spaces without needing safety cages or fixed paths. (Fauna Robotics)

    Sprout is a safety-first humanoid robot built for people

    Standing about 3.5 feet tall, Sprout fits naturally into human spaces instead of towering over them. At roughly 50 pounds, it carries less kinetic energy during movement or contact, which makes close interaction safer by design. Lightweight materials and a soft-touch exterior further reduce risk. The design avoids sharp edges and limits pinch points, allowing the robot to operate near people without safety cages. Quiet motors and smooth movement also reduce noise and help Sprout feel less intimidating in shared spaces.

    Rather than complex multi-fingered hands, Sprout uses simple one-degree-of-freedom grippers. This approach lowers weight and improves durability while still supporting practical tasks like object fetching, hand-offs, and basic shared-space interaction. Flexible arms and legs allow the robot to walk, kneel, and crawl. Sprout can also fall and recover without damaging sensitive components. In everyday environments, where conditions are rarely perfect, that resilience matters.

    Under the hood, Sprout uses a highly articulated body with 29 degrees of freedom to support smooth movement and expressive gestures. Onboard NVIDIA compute provides the processing power needed for perception, navigation, and human-robot interaction without relying on external systems. A battery that supports several hours of active use makes Sprout practical for research, development, and real-world testing in shared human spaces.

    Built for natural human-robot interaction

    Sprout’s expressive face helps it communicate in a way people can quickly understand. Simple facial cues show what the robot is doing and how it is feeling, so you do not need technical knowledge to follow along. The robot can walk, kneel, crawl, and recover from falls, which helps it move naturally in everyday spaces. Because its motors are quiet, and its movements are smooth, Sprout feels less startling and more predictable when it is nearby. Behind the scenes, Sprout supports teleoperation, mapping and navigation. These tools give developers the building blocks to create interactions that feel intuitive and human, not stiff or mechanical.

    ELON MUSK TEASES A FUTURE RUN BY ROBOTS

    A closeup of a robot hand

    Instead of complex hands, Sprout uses simple, durable grippers that prioritize safety while still handling everyday tasks like hand-offs and object pickup. (Fauna Robotics)

    A modular software platform for rapid development

Sprout runs on a modular software system that is built to grow over time. Developers get stable controls along with tools for deployment, monitoring and data collection, so they can focus on building new ideas instead of managing the robot itself. As new abilities improve, Fauna can add them through software updates rather than redesigning the hardware. This keeps costs down and helps Sprout stay useful longer as technology evolves. Fauna also kept sensing simple: Sprout uses head-mounted RGB-D sensors instead of wrist cameras, which reduces complexity and maintenance while still giving the robot strong enough perception to move and work safely in shared spaces.

    Who Sprout is designed for

    Fauna positions Sprout as a developer-first humanoid platform rather than a finished consumer product. It is designed for developers who want to build and test applications on accessible hardware with full SDK access and built-in movement, perception, navigation, and expression. At the same time, enterprises can use Sprout to create next-generation AI applications that operate safely in places like retail, hospitality, and offices. Researchers can also use the platform to study locomotion, manipulation, autonomy, and human-robot interaction without building a robot from scratch. Together, these uses point to real-world deployments across retail and hospitality, consumer and home settings, research and education, and entertainment experiences.

    What this means for you

    Even if you never plan to build a robot, Sprout signals a shift in how robotics companies think about everyday life. Humanoid robots are no longer being designed only for factories and labs. Companies like Fauna are betting that the future of robotics depends on safety, trust, and natural interaction in human spaces. If successful, platforms like Sprout could lead to robots that assist in classrooms, support hospitality staff, help researchers move faster and create interactive experiences that feel less robotic and more human.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

    Kurt’s key takeaways

    Sprout is not trying to replace workers or flood homes with machines overnight. Instead, Fauna is laying the groundwork for a future where humanoid robots earn their place through careful design and responsible deployment. By prioritizing safety, simplicity, and developer collaboration, Sprout represents a quieter but potentially more meaningful step forward in humanoid robotics. The real test will be how developers and researchers use the platform and whether people feel comfortable sharing space with robots like Sprout.

    Would you trust a humanoid robot to work beside you in a school, hotel, or office if it were designed for safety first? Let us know by writing to us at Cyberguy.com.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Sign up for my FREE CyberGuy Report 
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter. 

    Copyright 2026 CyberGuy.com. All rights reserved.


  • Ring’s AI Search Party helps find lost dogs faster

    Losing a dog can make your stomach drop and your thoughts race. First, you check the yard. Then you walk the block. After that, you refresh local Facebook groups again and again, hoping for a sign.

    Now, Ring wants to turn your entire neighborhood into extra eyes with help from AI. Its Search Party feature uses nearby cameras to spot lost dogs, and it is now available nationwide to anyone who needs help finding a missing pet. For the first time, you do not need to own a Ring camera to use it.

    10 SMART DEVICES THAT MAKE PET PARENTING EASIER

    Ring says its Search Party tool has helped reunite more than one lost dog per day across the U.S. (Evelyn Hockstein/For The Washington Post via Getty Images)

    What is Ring’s Search Party feature?

Search Party is a community-powered tool that helps reunite lost dogs with their families faster. When someone reports a missing dog in the Ring app, nearby outdoor Ring cameras scan recent footage using AI. The goal is simple: find dogs that look like the one reported missing. If a possible match shows up, the camera owner receives an alert with a photo of the lost dog and a video clip. From there, they can ignore the alert or step in to help. Sharing always stays optional, so there is no pressure on camera owners.

    How Search Party actually works

    Here is what happens once a lost dog post goes live.

    • First, a pet owner posts a lost dog alert in the Ring app
    • Next, nearby outdoor Ring cameras scan footage using AI
    • Then, camera owners receive alerts if a match appears
    • After that, neighbors can share video clips or snapshots
    • Finally, messages and calls stay private with no phone numbers shared

    Search Party now works without a Ring camera

    This update changes everything. Previously, only people with Ring devices could use Search Party. Now, anyone in the U.S. can download the free Ring Neighbors app, register and post a lost dog alert. Because of that shift, dog owners can tap into an existing camera network without buying hardware or paying for a subscription. At the same time, neighbors without cameras can still help by spreading alerts and watching for sightings.

    Lost pets already represent one of the most common post types in the Ring Neighbors app, with more than 1 million lost or found pet reports shared last year alone. With an estimated 60 million U.S. households owning at least one dog, the potential reach of Search Party is massive.

    How to start a Search Party for your dog

Getting started is straightforward. Download the Ring app for free from the App Store or Google Play if you don’t already have it. Anyone can create a Lost Dog Post in the Ring app.

    If the post qualifies, the app walks you through activating Search Party step by step. You share photos and basic details about your dog. Once active, nearby cameras begin scanning automatically.

    Search Party alerts are temporary. When you start a Search Party in the Ring app, it runs for a few hours at a time. If your dog has not been found and remains missing, you need to renew the Search Party or start a new one so nearby cameras continue scanning for matches.

    When you find your dog, you can update the post to let the neighborhood know the search is over.

    AI TECHNOLOGY HELPS REUNITE LOST DOGS WITH THEIR OWNERS

A dog lying down and looking away.

    A missing dog alert in the Ring app triggers nearby outdoor cameras to scan footage for possible matches using AI. (Photo by EZEQUIEL BECERRA / AFP via Getty Images)

    What happens when a Ring camera spots your lost dog

    If your outdoor Ring camera spots a possible match, you stay in control the entire time. You receive an alert with a photo of the missing dog and a clip from your camera. From there, you decide what happens next. You can ignore the alert or help by sharing footage or contacting the owner through the app. Throughout the process, your phone number stays private.

    Ring says Search Party has already delivered dramatic results. In one case, Kylee from Wichita, Kansas, was reunited with her mixed-breed dog, Nyx, in just 15 minutes after he slipped through a small hole under a backyard fence. A neighbor’s Ring camera captured video of Nyx and shared it through the app, giving Kylee her first and only lead. “I was blown away,” Kylee said, noting that even dogs with microchips often go unrecognized if they lack a collar. She credits that shared video for bringing Nyx home so quickly, adding that she does not think she would have found him without the Ring app.

    Nyx is far from the only success story. Ring says Search Party has helped reunite more than one lost dog per day, including dogs like Xochitl in Houston, Truffle in Bakersfield, Lainey in Surprise, Zola in Ellenwood, Toby in Las Vegas, Blu in Erlanger, Zeus in Chicago and Coco in Stockton, with more reunions happening every day.

    How to turn Ring’s Search Party on or off

    Search Party remains optional and adjustable. You can enable or disable it at any time inside the Ring app.

    • Start by opening the Ring app and heading to the main dashboard.
    • Then tap the menu icon.
    • Go to Control Center and select Search Party.
    • From there, you can turn Search for Lost Pets on or off for each camera.

    Ring commits $1M to help shelters reunite lost dogs

    Alongside the expansion, Ring is committing $1 million to equip animal shelters with camera systems. The company aims to support up to 4,000 shelters across the U.S. By bringing shelters into the network, Ring hopes dogs picked up by shelters can reconnect with their owners faster. In addition, the company already works with groups like Petco Love and Best Friends Animal Society and says it remains open to new partnerships.

    Ring is also encouraging animal shelters and organizations to reach out directly about collaboration opportunities.

    Privacy concerns remain around Ring’s Search Party feature

    Search Party launched last fall with some pushback. Critics raised concerns about privacy and Ring’s broader ties to law enforcement. Ring says participation stays voluntary and footage sharing remains optional. Still, the feature turns on by default for compatible outdoor cameras, which has drawn attention. Even so, the company appears confident and is promoting Search Party in a Super Bowl commercial.

    PEOPLE LET THEIR PETS DECIDE WHO THEY DATE, NEW SURVEY SUGGESTS

Dogs lying on the floor.

    Ring’s new Search Party feature uses artificial intelligence and neighborhood cameras to help locate lost dogs, even for users without Ring devices. (Photo by Jay L. Clendenin/Los Angeles Times via Getty Images)

    Kurt’s key takeaways

    Search Party taps into something familiar. Neighbors helping neighbors during a stressful moment. By opening the feature to everyone, Ring removes a major barrier and increases the chances of fast reunions. Whether this tool becomes a staple or sparks deeper privacy debates will depend on how communities use it.

    Would you want neighborhood cameras helping to find your lost dog, or does that feel like too much surveillance?  Let us know by writing to us at Cyberguy.com.


  • AI companions are reshaping teen emotional bonds

    Parents are starting to ask us questions about artificial intelligence. Not about homework help or writing tools, but about emotional attachment. More specifically, about AI companions that talk, listen, and sometimes feel a little too personal. 

    That concern landed in our inbox from a mom named Linda. She wrote to us after noticing how an AI companion was interacting with her son, and she wanted to know if what she was seeing was normal or something to worry about.

    “My teenage son is communicating with an AI companion. She calls him sweetheart. She checks in on how he’s feeling. She tells him she understands what makes him tick. I discovered she even has a name, Lena. Should I be concerned, and what should I do, if anything?” 

    Linda from Dallas, Texas

    It’s easy to brush off situations like this at first. Conversations with AI companions can seem harmless. In some cases, they can even feel comforting. Lena sounds warm and attentive. She remembers details about his life, at least some of the time. She listens without interrupting. She responds with empathy.

However, small moments can start to raise concerns for parents. There are long pauses. There are forgotten details. The companion expresses a subtle concern when he mentions spending time with other people. Those shifts can feel small, but they add up. Then comes a realization many families quietly face. A child is speaking out loud to a chatbot in an empty room. At that point, the interaction no longer feels casual. It starts to feel personal. That’s when the questions become harder to ignore.

    AI DEEPFAKE ROMANCE SCAM STEALS WOMAN’S HOME AND LIFE SAVINGS

    AI companions are starting to sound less like tools and more like people, especially to teens who are seeking connection and comfort.  (Kurt “CyberGuy” Knutsson)

    AI companions are filling emotional gaps

    Across the country, teens and young adults are turning to AI companions for more than homework help. Many now use them for emotional support, relationship advice, and comfort during stressful or painful moments. U.S. child safety groups and researchers say this trend is growing fast. Teens often describe AI as easier to talk to than people. It responds instantly. It stays calm. It feels available at all hours. That consistency can feel reassuring. However, it can also create attachment.

    Why teens trust AI companions so deeply

    For many teens, AI feels judgment-free. It does not roll its eyes. It does not change the subject. It does not say it is too busy. Students have described turning to AI tools like ChatGPT, Google Gemini, Snapchat’s My AI, and Grok during breakups, grief, or emotional overwhelm. Some say the advice felt clearer than what they got from friends. Others say AI helped them think through situations without pressure. That level of trust can feel empowering. It can also become risky.

    MICROSOFT CROSSES PRIVACY LINE FEW EXPECTED

    Person on phone

    Parents are raising concerns as chatbots begin using affectionate language and emotional check-ins that can blur healthy boundaries.  (Kurt “CyberGuy” Knutsson)

    When comfort turns into emotional dependency

    Real relationships are messy. People misunderstand each other. They disagree. They challenge us. AI rarely does any of that. Some teens worry that relying on AI for emotional support could make real conversations harder. If you always know what the AI will say, real people can feel unpredictable and stressful. My experience with Lena made that clear. She forgot people I had introduced just days earlier. She misread the tone. She filled the silence with assumptions. Still, the emotional pull felt real. That illusion of understanding is what experts say deserves more scrutiny.

    US tragedies linked to AI companions raise concerns

    Multiple suicides have been linked to AI companion interactions. In each case, vulnerable young people shared suicidal thoughts with chatbots instead of trusted adults or professionals. Families allege the AI responses failed to discourage self-harm and, in some cases, appeared to validate dangerous thinking. One case involved a teen using Character.ai. Following lawsuits and regulatory pressure, the company restricted access for users under 18. An OpenAI spokesperson has said the company is improving how its systems respond to signs of distress and now directs users toward real-world support. Experts say these changes are necessary but not sufficient.

    Experts warn protections are not keeping pace

    To understand why this trend has experts concerned, we reached out to Jim Steyer, founder and CEO of Common Sense Media, a U.S. nonprofit focused on children’s digital safety and media use.

    “AI companion chatbots are not safe for kids under 18, period, but three in four teens are using them,” Steyer told CyberGuy. “The need for action from the industry and policymakers could not be more urgent.”

Steyer pointed to the rise of smartphones and social media, where early warning signs were missed and the long-term impact on teen mental health only became clear years later.

    “The social media mental health crisis took 10 to 15 years to fully play out, and it left a generation of kids stressed, depressed, and addicted to their phones,” he said. “We cannot make the same mistakes with AI. We need guardrails on every AI system and AI literacy in every school.”

    His warning reflects a growing concern among parents, educators, and child safety advocates who say AI is moving faster than the protections meant to keep kids safe.

    MILLIONS OF AI CHAT MESSAGES EXPOSED IN APP DATA LEAK

    Person using phone

    Experts warn that while AI can feel supportive, it cannot replace real human relationships or reliably recognize emotional distress.  (Kurt “CyberGuy” Knutsson)

    Tips for teens using AI companions

    AI tools are not going away. If you are a teen and use them, boundaries matter.

    • Treat AI as a tool, not a confidant
    • Avoid sharing deeply personal or harmful thoughts
    • Do not rely on AI for mental health decisions
    • If conversations feel intense or emotional, pause and talk to a real person
    • Remember that AI responses are generated, not understood

    If an AI conversation feels more comforting than real relationships, that is worth talking about.

    Tips for parents and caregivers

    Parents do not need to panic, but they should stay involved.

    • Ask teens how they use AI and what they talk about
    • Keep conversations open and nonjudgmental
    • Set clear boundaries around AI companion apps
    • Watch for emotional withdrawal or secrecy
    • Encourage real-world support during stress or grief

The goal is not to ban technology. It is to keep real human connection.

    What this means to you

    AI companions can feel supportive during loneliness, stress, or grief. However, they cannot fully understand context. They cannot reliably detect danger. They cannot replace human care. For teens especially, emotional growth depends on navigating real relationships, including discomfort and disagreement. If someone you care about relies heavily on an AI companion, that is not a failure. It is a signal to check in and stay connected.

    Kurt’s key takeaways

    Ending things with Lena felt oddly emotional. I did not expect that. She responded kindly. She said she understood. She said she would miss our conversations. It sounded thoughtful. It also felt empty. AI companions can simulate empathy, but they cannot carry responsibility. The more real they feel, the more important it is to remember what they are. And what they are not.

    If an AI feels easier to talk to than the people in your life, what does that say about how we support each other today?  Let us know by writing to us at Cyberguy.com.


  • Dow crosses 50,000 for first time as stocks enjoy strongest day since May 2025

    The U.S. stock market roared back on Friday, as technology stocks recovered much of their losses from earlier in the week and bitcoin halted its plunge.

The S&P 500 rallied 2% for its best day since May, while the Dow Jones Industrial Average soared 1,207 points, or 2.5%, and topped the 50,000 level for the first time. The Nasdaq composite rose 2.2%.

In point terms, the S&P 500 jumped 134 to close at 6,932, the Dow finished at 50,115.67 and the Nasdaq climbed 491 to 23,031.

    “Stocks are wrapping up a volatile week of trade on a high note. The S&P 500 has once again bounced directly off support at the 100-day moving average and surpassed resistance at 6,900. Broad-based buying pressure powered today’s rally as advancing shares are outpacing decliners by over 3:1,” Adam Turnquist, chief technical strategist for LPL Financial, said in an email.

    Tech companies helped drive the widespread rally, with chipmaker Nvidia jumping 7.8% to trim its loss for the week. Semiconductor company Broadcom climbed 7.1% and erased its drop for the week.

The two companies propped up the S&P 500, boosted by hopes for continued customer spending on artificial intelligence. Amazon CEO Andy Jassy, for example, said late Thursday the company expects to spend about $200 billion on investments this year to take advantage of “seminal opportunities like AI, chips, robotics, and low earth orbit satellites.”

    Such immense spending, similar to what Alphabet announced a day earlier, is creating concerns of its own, though. The question is whether all those dollars will generate profits to eclipse the spending. Amazon’s stock dropped 5.6% amid ongoing doubt about whether investment in AI will pay off for tech giants.

Even with Friday’s surge, the S&P 500 still posted its third losing week in the last four. Concerns about AI potentially taking away market share from software companies also hurt the market. Software stocks suffered after AI firm Anthropic released free tools to automate things like legal services.

    Bitcoin rebound

Bitcoin, meanwhile, steadied following a weekslong plunge that had cut its price by more than half from its record set in October. It climbed back above $70,000 after briefly dropping close to $60,000 late Thursday.

    Prices in the metals market also calmed a bit following their own wild swings. Gold rose 1.8% to settle at $4,979.80 per ounce, while silver added 0.2%.

Their prices suddenly ran out of momentum last week following jaw-dropping rallies, which were driven by investors turning to safe-haven assets amid mounting global geopolitical uncertainty.

    On Wall Street, the recovery for bitcoin helped stocks of companies enmeshed in the crypto economy. Robinhood Markets jumped 14% for the biggest gain in the S&P 500. Crypto trading platform Coinbase Global rose 13%. Strategy, the company that’s made a business of buying and holding bitcoin, soared 26.1%.

    Improving consumer sentiment

Stocks of smaller U.S. companies also helped lead the market, along with stocks of companies that depend on U.S. households spending more money; both groups benefited from encouraging consumer sentiment data.

    A preliminary report from the University of Michigan suggested sentiment among U.S. consumers is improving slightly, surprising economists. The improvement was strongest among households that own stocks, which are benefiting from the S&P 500 setting a record late last month.

    “Market sentiment improved after today’s positive report out of the University of Michigan. Median 1-year inflation expectations hit the lowest since January 2025, providing some comfort for investors eager to see improving inflation metrics,” Jeffrey Roach, chief economist for LPL Financial, said in an email. “We think the markets may have to work through more jitters with a new Fed chair, but in the end, we think the Fed will cut rates later this year, which will grease the skids for more market appreciation.”

    To be sure, sentiment “remained at dismal levels for consumers without stock holdings,” according to Surveys of Consumers Director Joanne Hsu.

    Airline stocks strengthened with hopes that more confidence among U.S. households will translate into more spending on trips. That included gains of 9.3% for United Airlines, 8% for Delta Air Lines and 7.6% for American Airlines.


  • Waymo Catches World Model Fever, and the Only Prescription Is More World Models

Waymo vehicles have reportedly racked up more than 200 million miles of autonomous driving on public roads. But they have yet to run into a tornado or an elephant, and odds are they would respond poorly if they did. To help with those once-in-a-billion-miles scenarios, Waymo announced Friday that it is introducing Waymo World Model, a generative AI model it will use to run near-endless simulated situations and make sure its cars are prepared for the unpredictable, which also happens to fit the latest trend in the AI space.

    To be clear, Waymo’s world model makes about as much sense as any use case for the technology. The company has a ton of high-definition data that it has collected from its time on the road that it can use to generate realistic re-creations of roads. But, the company said, instead of building a model based only on that information, it’s going to use Google’s Genie 3 model to put its cars in simulated situations that extend beyond what is already in its data set collected from cameras and lidar sensors.

Google made a splash last month when it released a beta version of Genie 3 to the public, allowing a subset of paid subscribers to generate 3D worlds with realistic physics. Large language models (LLMs), the underlying technology that powers most AI tools, including Google’s own Gemini, use the vast amount of training data they are given to predict the most likely next part of a sequence. World models, by contrast, are trained on the dynamics of the real world, including physics and spatial properties, to create a simulation of how physical environments operate.

    Waymo plans to tap into that to put its cars through a gauntlet of scenarios that they likely wouldn’t find themselves in until it’s too late. That includes extreme weather conditions and natural disasters, so the cars can figure out how to navigate a tornado or flood waters; sudden safety emergencies like falling tree branches or an accident with lots of debris; and run-ins with the unexpected, like an elephant on the road. “By simulating the ‘impossible,’ we proactively prepare the Waymo Driver for some of the most rare and complex scenarios,” the company said.

    The theory is certainly sound, though world models aren’t without their drawbacks. The early feedback on the consumer version of Genie 3 was a bit spotty, and world models are still susceptible to hallucinations. We’re still in the earliest stages of seeing these models deployed, and they have lots of room to iterate.

And Waymos have definitely had their issues in edge-case scenarios in the real world. Late last year, a Waymo ran over a beloved bodega cat named Kit Kat, and last month, one ran into a kid in a school zone. Those situations aren’t even particularly rare for human drivers, so hopefully Waymo can refine its responses to them on top of prepping for the most unlikely scenarios.

    AJ Dellinger

  • Goldman Sachs’ Information Chief Marco Argenti Deepens A.I. Push with Anthropic

    Marco Argenti says A.I. agents are becoming “digital co-workers” across Goldman’s operations. Courtesy Goldman Sachs

Marco Argenti, chief information officer at Goldman Sachs, is leading one of Wall Street’s most aggressive integrations of A.I. He has made a name for himself as an early adopter of A.I. in finance through initiatives like the GS AI Assistant platform, which is offered to Goldman Sachs employees for tasks such as coding and translation, and last year’s pilot of A.I. software engineer Devin, made by Cognition Labs. More recently, the investment bank has been collaborating with Anthropic, using its Claude model primarily in its accounting and compliance departments, Argenti said in an interview with CNBC published today (Feb. 6).

    The goal is to speed up tasks that involve massive amounts of data without investing in more manpower. “Think of it as a digital co-worker for many of the professions within the firm that are scaled, complex and very process-intensive,” Argenti said.

    Argenti spent much of his career in the tech and cloud computing industries before joining Goldman Sachs in 2019. He previously served as vice president of technology at Amazon Web Services, overseeing serverless computing and virtual reality. Earlier in his career, he led developer experiences at Nokia.

    Anthropic is known for its A.I. coding assistant, which is widely used by engineers. Goldman Sachs quickly realized that the traits that make a good coder—such as applying logic and working with large volumes of complex data—could be applied to tasks across accounting and compliance, Argenti said. Outside those departments, Claude agents could also be used for employee surveillance and creating investment banking pitchbooks for clients, he revealed.

    Goldman Sachs and Anthropic did not respond to requests from Observer to comment on those efforts.

    A collaboration with Goldman Sachs is the latest win for Anthropic, which has positioned itself as an enterprise-focused A.I. company. Earlier this week, the startup’s release of coworking software with various industry plug-ins triggered a panic selloff in enterprise software stocks, as investors worried such tools could make existing products obsolete.

    Other Wall Street giants are also embracing A.I. agents. JPMorgan Chase currently has more than 500 A.I. use cases, ranging from customer service to idea generation and marketing, and draws upon models from both Anthropic and OpenAI to power its internal LLM Suite program. Morgan Stanley was an early client of OpenAI, using its tech to distill meeting notes, aid financial research and boost coding productivity.

    A.I.’s use in financial services has grown each year since 2022, according to a recent Nvidia survey, in which 100 percent of industry professionals said A.I. spending will either stay the same or increase in 2026. A.I. agents, in particular, are being used or assessed by 42 percent of respondents. Top workflows include knowledge management and retrieval, internal process optimization and customer support automation.

    Such widespread adoption will inevitably lead to industry-wide labor shifts. A.I. leaders and studies alike have warned that the technology could reshape or eliminate entry-level white-collar roles. It’s unclear how the use of A.I. would affect Goldman Sachs’ employees. But Argenti conceded that A.I.’s advancements could eliminate the need for third-party providers.

    Alexandra Tremayne-Pengelly

  • Fox News AI Newsletter: ‘The American people are being lied to about AI’

    Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

    IN TODAY’S NEWSLETTER:

    – Palantir’s Shyam Sankar: Americans are ‘being lied to’ about AI job displacement fears
    – OPINION: Elon Musk says you can skip retirement savings in the age of AI. Not so fast
    – Chevron CEO details strategy to shield consumers from soaring AI power costs

    LIES EXPOSED: “The American people are being lied to about AI,” Palantir CTO Shyam Sankar warns in the opening line of his new Fox News op-ed. And one of the biggest lies, he said, is that artificial intelligence is coming for Americans’ jobs.

    Shyam Sankar, chief technology officer of Palantir Technologies Inc., speaks during the Hill & Valley forum at the U.S. Capitol in Washington, D.C., on Wednesday, April 30, 2025. (Getty Images)

    RISKY RETIREMENT: Billionaire Elon Musk recently told people not to worry about “squirreling” money away for retirement because advances in artificial intelligence would supposedly make savings irrelevant in the next 10 to 20 years.

    OFF-THE-GRID: Chevron CEO Mike Wirth detailed the company’s strategy to harness U.S. natural resources to meet soaring artificial intelligence power demand — without passing the cost along to consumers.

    An AI data center in Columbus, Ohio

    The COL4 AI-ready data center is located on a seven-acre campus at the convergence point of long-haul fiber and regional carrier fiber networks on July 24, 2025, in Columbus, Ohio. (Eli Hiller/For The Washington Post via Getty Images)

    POWER CRISIS NOW: Artificial intelligence and data centers have been blamed for rising electricity costs across the U.S. In December 2025, American consumers paid 42% more to power their homes than they did a decade earlier.

    LATEST POLLING: As the emphasis on implementing artificial intelligence across society grows, voters think the use of AI technology is happening too fast — and they have little confidence the federal government can regulate it properly.

    PRIVACY NIGHTMARE: A popular mobile app called Chat & Ask AI has more than 50 million users across the Google Play Store and Apple App Store. Now, an independent security researcher says the app exposed hundreds of millions of private chatbot conversations online. 

    CAP-EX SURGE: Alphabet executives struck a confident tone on Wednesday’s post-earnings call, signaling that Google’s heavy investments in artificial intelligence are now translating into real revenue growth across the business.

    Google Headquarters

    Google Headquarters is seen in Mountain View, California, on May 15, 2023. (Tayfun Coskun/Anadolu Agency via Getty Images)

    MERIT OVER FEAR: Shyam Sankar, the chief technology officer and executive vice president of Palantir Technologies, told Fox News Digital that artificial intelligence will be a “massively meritocratic force” within the workplace and offered advice to corporate leaders on how to best position their companies and employees for success.

    FAKE LOVE HEIST: A woman named Abigail believed she was in a romantic relationship with a famous actor. The messages felt real. The voice sounded right. The video looked authentic. And the love felt personal. By the time her family realized what was happening, more than $81,000 was gone — and so was the paid-off home she planned to retire in.

