ReportWire

Tag: Technology

  • AI Aims to Please You, but Don’t Let It

    In old-school business lingo, AI is a “yes man,” or more coarsely put, a suck-up.

    Joel Schwartzberg

  • The Global Internet Is Coming Apart

    Photo-Illustration: Intelligencer; Graphic: MAX/Apple

    The early rise of the internet is usually told as an extension of globalization. New networking technology made instantaneous communication possible, complementing and accelerating international commerce and cultural exchange. As in the rest of the world economy, the U.S. was unusually influential online, exporting not just technology but culture and political norms with it.

    The alternative story of the rise of the internet was exemplified by China, which limited the reach of western tech companies, maintained strict control over its domestic networks, and started building a parallel internet-centric economy of its own. And contra western reporting suggesting that this was purely an exercise in isolationism and control, in 2025, the international influence of the Chinese internet and tech companies — even here, as evidenced by the growth and semi-seizure of TikTok — is enormous.

    In that context — and the context of America’s renewed trade war — it shouldn’t be surprising that more countries are taking a second look at digital sovereignty and that the global internet as we knew it is pulling apart. Russia, which has a long history of internet censorship and state-aligned tech companies, has taken the extraordinary recent step of interfering with access not just to WhatsApp but also Telegram, the messaging app founded by Pavel Durov, a creator of VK, Russia’s Facebook alternative, who left the country more than a decade ago. The throttling coincided with the launch of MAX, a new government-controlled everything app — basically a messaging app with other features layered on top, modeled on China’s Weixin — and an all-out marketing campaign to get people to switch. “Billboards are trumpeting it. Schools are recommending it. Celebrities are being paid to push it. Cellphones are sold with it preloaded,” the Times reports.

    Russia’s obviously in an … unusual diplomatic position these days, but you can hear a version of its stated position — We should have our own big internet platforms as well as greater control over and access to what people do on them — coming from all over the world. (Indeed, the American government’s rationale for the TikTok deal can be understood as a defensive version of the same argument.) In India, the government is talking more openly about favoring homegrown apps for economic and security reasons and highlighting its own domestic “super-app.” From the Financial Times:

    A chorus of top Indian officials in recent weeks have publicly backed a domestically developed messaging platform, as the country tries to project its ability to create a homegrown rival to US-developed apps. “Nothing beats the feeling of using a Swadeshi [locally made] product,” [Minister of Commerce] Piyush Goyal wrote on X, adding: “So proud to be on Arattai, a Made in India messaging platform.”

    In the aughts, fights over digital globalization were about search engines and popular websites; in the 2010s, they were largely about social networks. Now, they’re about messaging apps, which are different in a number of ways. A lot of messaging traffic is private communication on services like Meta-owned WhatsApp — one of the most popular apps in India, which is WhatsApp’s largest market. Messages on the platform are encrypted by default, meaning that even governments with extensive surveillance capabilities can’t easily see what people are using them for.

    China’s Weixin, which operates internationally as WeChat, demonstrates two tantalizing possibilities for other governments: It’s aligned with the state and surveillable; also, as it grew popular and expanded its ambitions, it became the default interface for shopping, banking, media consumption, and interacting with other businesses. This sort of everything app — which American tech executives have openly lusted after, most recently and explicitly Elon Musk — is appealing to tech companies and governments alike for its total centralization. MAX’s goals are clear, with messaging, calls, ID functionality, and plans to allow users to “connect with government services, make doctors’ appointments, find homework assignments, and talk to local authorities.”

    The looming segmentation of what we colloquially call the “internet” into various national, nationalist, and perhaps compromised messaging apps leaves governments without such ambitions in an awkward position. The European Union, citing some of the same concerns as the Russian and Indian governments — although mostly focusing on child protection — is considering, against widespread opposition from its citizenry and foreign tech companies alike, “chat control” legislation, which would require tech firms to allow messages to be scanned by authorities for offending content. The EU has some leverage here, of course — nobody wants to lose access to such a large and wealthy market — but tech companies based elsewhere insist that such a requirement is impossible to implement without fundamentally breaking their services or violating user privacy. Under the narrower auspices of stopping online sexual abuse, in other words, the EU is asking — or wishing — for a limited version of the same power China wanted when it made onerous demands of American tech companies in the 2010s, preventing them from entering its market: to regulate and control influential applications that have, up until this point, mostly come from somewhere else.

    Taken together, this looks an awful lot like a global shift in how most governments — and their citizens — approach the internet: not as an intrinsically and necessarily global project but as a source of domestic power to be cultivated, protected, and protected against.

    John Herrman

  • Anthropic warns of AI-driven hacking campaign linked to China

    WASHINGTON (AP) — A team of researchers has uncovered what they say is the first reported use of artificial intelligence to direct a hacking campaign in a largely automated fashion.

    The AI company Anthropic said this week that it disrupted a cyber operation that its researchers linked to the Chinese government. The operation involved the use of an artificial intelligence system to direct the hacking campaigns, which researchers called a disturbing development that could greatly expand the reach of AI-equipped hackers.

    While concerns about the use of AI to drive cyber operations are not new, what stood out about this operation, the researchers said, was the degree to which AI was able to automate the work.

    “While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale,” they wrote in their report.

    The operation targeted tech companies, financial institutions, chemical companies and government agencies. The researchers wrote that the hackers attacked “roughly thirty global targets and succeeded in a small number of cases.” Anthropic detected the operation in September and took steps to shut it down and notify the affected parties.

    Anthropic noted that while AI systems are increasingly being used in a variety of settings for work and leisure, they can also be weaponized by hacking groups working for foreign adversaries. The San Francisco-based company, maker of the generative AI chatbot Claude, is one of many tech developers pitching AI “agents” that go beyond a chatbot’s capability to access computer tools and take actions on a person’s behalf.

    “Agents are valuable for everyday work and productivity — but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks,” the researchers concluded. “These attacks are likely to only grow in their effectiveness.”

    A spokesperson for China’s embassy in Washington did not immediately return a message seeking comment on the report.

    Microsoft warned earlier this year that foreign adversaries were increasingly embracing AI to make their cyber campaigns more efficient and less labor-intensive. The head of OpenAI’s safety panel, which has the authority to halt the ChatGPT maker’s AI development, recently told The Associated Press he’s watching out for new AI systems that give malicious hackers “much higher capabilities.”

    America’s adversaries, as well as criminal gangs and hacking companies, have exploited AI’s potential, using it to automate and improve cyberattacks, to spread inflammatory disinformation and to penetrate sensitive systems. AI can translate poorly worded phishing emails into fluent English, for example, as well as generate digital clones of senior government officials.

    Anthropic said the hackers were able to manipulate Claude, using “jailbreaking” techniques that involve tricking an AI system to bypass its guardrails against harmful behavior, in this case by claiming they were employees of a legitimate cybersecurity firm.

    “This points to a big challenge with AI models, and it’s not limited to Claude, which is that the models have to be able to distinguish between what’s actually going on with the ethics of a situation and the kinds of role-play scenarios that hackers and others may want to cook up,” said John Scott-Railton, senior researcher at Citizen Lab.

    The use of AI to automate or direct cyberattacks will also appeal to smaller hacking groups and lone wolf hackers, who could use AI to expand the scale of their attacks, according to Adam Arellano, field CTO at Harness, a tech company that uses AI to help customers automate software development.

    “The speed and automation provided by the AI is what is a bit scary,” Arellano said. “Instead of a human with well-honed skills attempting to hack into hardened systems, the AI is speeding those processes and more consistently getting past obstacles.”

    AI programs will also play an increasingly important role in defending against these kinds of attacks, Arellano said, demonstrating how AI and the automation it allows will benefit both sides.

    Reaction to Anthropic’s disclosure was mixed, with some seeing it as a marketing ploy for Anthropic’s approach to cybersecurity defense and others welcoming it as a wake-up call.

    “This is going to destroy us – sooner than we think – if we don’t make AI regulation a national priority tomorrow,” wrote U.S. Sen. Chris Murphy, a Connecticut Democrat, on social media.

    That led to criticism from Meta’s chief AI scientist Yann LeCun, an advocate of the Facebook parent company’s open-source AI systems that, unlike Anthropic’s, make their key components publicly accessible in a way that some AI safety advocates deem too risky.

    “You’re being played by people who want regulatory capture,” LeCun wrote in a reply to Murphy. “They are scaring everyone with dubious studies so that open source models are regulated out of existence.”

    ___

    O’Brien reported from Providence, Rhode Island.

  • Future data centers are driving up forecasts for energy demand. States want proof they’ll get built

    HARRISBURG, Pa. (AP) — The forecasts are eye-popping: utilities saying they’ll need two or three times more electricity within a few years to power massive new data centers that are feeding a fast-growing AI economy.

    But the challenges — some say the impossibility — of building new power plants to meet that demand so quickly has set off alarm bells for lawmakers, policymakers and regulators who wonder if those utility forecasts can be trusted.

    One burning question is whether the forecasts are based on data center projects that may never get built — eliciting concern that regular ratepayers could be stuck with the bill to build unnecessary power plants and grid infrastructure at a cost of billions of dollars.

    The scrutiny comes as analysts warn of the risk of an artificial intelligence investment bubble that’s ballooned tech stock prices and could burst.

    Meanwhile, consumer advocates are finding that ratepayers in some areas — such as the mid-Atlantic electricity grid, which encompasses all or parts of 13 states stretching from New Jersey to Illinois, as well as Washington, D.C. — are already underwriting the cost to supply power to data centers, some of them built, some not.

    “There’s speculation in there,” said Joe Bowring, who heads Monitoring Analytics, the independent market watchdog in the mid-Atlantic grid territory. “Nobody really knows. Nobody has been looking carefully enough at the forecast to know what’s speculative, what’s double-counting, what’s real, what’s not.”

    Suspicions about skyrocketing demand

    There is no standard practice across grids or for utilities to vet such massive projects, and figuring out a solution has become a hot topic, utilities and grid operators say.

    Uncertainty around forecasts is typically traced to a couple of things.

    One concerns developers seeking a grid connection, but whose plans aren’t set in stone or lack the heft — clients, financing or otherwise — to bring the project to completion, industry and regulatory officials say.

    Another is data center developers submitting grid connection requests in multiple separate utility territories, as PJM Interconnection, which operates the mid-Atlantic grid, and Texas lawmakers have found.

    Often, developers, for competitive reasons, won’t tell utilities if or where they’ve submitted other requests for electricity, PJM said. That means a single project could inflate the energy forecasts of multiple utilities.

    The effort to improve forecasts got a high-profile boost in September, when a Federal Energy Regulatory Commission member asked the nation’s grid operators for information on how they determine that a project is not only viable, but will use the electricity it says it needs.

    “Better data, better decision-making, better and faster decisions mean we can get all these projects, all this infrastructure built,” the commissioner, David Rosner, said in an interview.

    The Edison Electric Institute, a trade association of for-profit electric utilities, said it welcomed efforts to improve demand forecasting.

    Real, speculative, or ‘somewhere in between’

    The Data Center Coalition, which represents tech giants like Google and Meta and data center developers, has urged regulators to request more information from utilities on their forecasts and to develop a set of best practices to determine the commercial viability of a data center project.

    The coalition’s vice president of energy, Aaron Tinjum, said improving the accuracy and transparency of forecasts is a “fundamental first step of really meeting this moment” of energy growth.

    “Wherever we go, the question is, ‘Is the (energy) growth real? How can we be so sure?’” Tinjum said. “And we really view commercial readiness verification as one of those important kind of low-hanging opportunities for us to be adopting at this moment.”

    Igal Feibush, the CEO of Pennsylvania Data Center Partners, a data center developer, said utilities are in a “fire drill” as they try to vet a deluge of data center projects all seeking electricity.

    The vast majority, he said, will fall off because many project backers are new to the concept and don’t know what it takes to get a data center built.

    States also are trying to do more to find out what’s in utility forecasts and weed out speculative or duplicative projects.

    In Texas, which is attracting large data center projects, lawmakers still haunted by a blackout during a deadly 2021 winter storm were shocked when told in 2024 by the grid operator, the Electric Reliability Council of Texas, that its peak demand could nearly double by 2030.

    They found that state utility regulators lacked the tools to determine whether that was realistic.

    Texas state Sen. Phil King told a hearing earlier this year that the grid operator, utility regulators and utilities weren’t sure if the power requests “are real or just speculative or somewhere in between.”

    Lawmakers passed legislation sponsored by King, now law, that requires data center developers to disclose whether they have requests for electricity elsewhere in Texas and to set standards for developers to show that they have a substantial financial commitment to a site.

    Electricity bills are rising, too

    PPL Electric Utilities, which delivers power to 1.5 million customers across central and eastern Pennsylvania, projects that data centers will more than triple its peak electricity demand by 2030.

    Vincent Sorgi, president and CEO of PPL Corp., told analysts on an earnings call this month that the data center projects “are real, they are coming fast and furious” and that the “near-term risk of overbuilding generation simply does not exist.”

    The data center projects counted in the forecast are backed by contracts with financial commitments often reaching tens of millions of dollars, PPL said.

    Still, PPL’s projections helped spur a state lawmaker, Rep. Danilo Burgos, to introduce a bill to bolster the authority of state utility regulators to inspect how utilities assemble their energy demand forecasts.

    Ratepayers in Burgos’ Philadelphia district just absorbed an increase in their electricity bills — attributed by the utility, PECO, to the rising cost of wholesale electricity in the mid-Atlantic grid driven primarily by data center demand.

    That’s why ratepayers need more protection to ensure they are benefiting from the higher cost, Burgos said.

    “Once they make their buck, whatever company,” Burgos said, “you don’t see no empathy towards the ratepayers.”

    ___

    Follow Marc Levy at http://twitter.com/timelywriter.

  • Cities and states are turning to AI to improve road safety

    As America’s aging roads fall further behind on much-needed repairs, cities and states are turning to artificial intelligence to spot the worst hazards and decide which fixes should come first.

    Hawaii officials, for example, are giving away 1,000 dashboard cameras as they try to reverse a recent spike in traffic fatalities. The cameras will use AI to automate inspections of guardrails, road signs and pavement markings, instantly discerning between minor problems and emergencies that warrant sending a maintenance crew.

    “This is not something where it’s looked at once a month and then they sit down and figure out where they’re going to put their vans,” said Richard Browning, chief commercial officer at Nextbase, which developed the dashcams and imagery platform for Hawaii.

    After San Jose, California, started mounting cameras on street sweepers, city staff confirmed the system correctly identified potholes 97% of the time. Now they’re expanding the effort to parking enforcement vehicles.

    Texas, where there are more roadway lane miles than the next two states combined, is less than a year into a massive AI plan that uses cameras as well as cellphone data from drivers who enroll to improve safety.

    Other states use the technology to inspect street signs or build annual reports about road congestion.

    Every guardrail, every day

    Hawaii drivers over the next few weeks will be able to sign up for a free dashcam valued at $499 under the “Eyes on the Road” campaign, which was piloted on service vehicles in 2021 before being paused due to wildfires.

    Roger Chen, a University of Hawaii associate professor of engineering who is helping facilitate the program, said the state faces unique challenges in maintaining its outdated roadway infrastructure.

    “Equipment has to be shipped to the island,” Chen said. “There’s a space constraint and a topography constraint they have to deal with, so it’s not an easy problem.”

    Although the program also monitors such things as street debris and faded paint on lane lines, the companies behind the technology particularly tout its ability to detect damaged guardrails.

    “They’re analyzing all guardrails in their state, every single day,” said Mark Pittman, CEO of Blyncsy, which combines the dashboard feeds with mapping software to analyze road conditions.

    Hawaii transportation officials are well aware of the risks that can stem from broken guardrails. Last year, the state reached a $3.9 million settlement with the family of a driver who was killed in 2020 after slamming into a guardrail that had been damaged in a crash 18 months earlier but never repaired.

    In October, Hawaii recorded its 106th traffic fatality of 2025 — more than all of 2024. It’s unclear how many of the deaths were related to road problems, but Chen said the grim trend underscores the timeliness of the dashboard program.

    Building a larger AI database

    San Jose has reported strong early success in identifying potholes and road debris just by mounting cameras on a few street sweepers and parking enforcement vehicles.

    But Mayor Matt Mahan, a Democrat who founded two tech startups before entering politics, said the effort will be much more effective if cities contribute their images to a shared AI database. The system can recognize a road problem that it has seen before — even if it happened somewhere else, Mahan said.

    “It sees, ‘Oh, that actually is a cardboard box wedged between those two parked vehicles, and that counts as debris on a roadway,’” Mahan said. “We could wait five years for that to happen here, or maybe we have it at our fingertips.”

    San Jose officials helped establish the GovAI Coalition, which went public in March 2024 for governments to share best practices and eventually data. Other local governments in California, Minnesota, Oregon, Texas and Washington, as well as the state of Colorado, are members.

    Some solutions are simple

    Not all AI approaches to improving road safety require cameras.

    Massachusetts-based Cambridge Mobile Telematics launched a system called StreetVision that uses cellphone data to identify risky driving behavior. The company works with state transportation departments to pinpoint where specific road conditions are fueling those dangers.

    Ryan McMahon, the company’s senior vice president of strategy & corporate development, was attending a conference in Washington, D.C., when he noticed the StreetVision software was showing a massive number of vehicles braking aggressively on a nearby road.

    The reason: a bush was obstructing a stop sign, which drivers weren’t seeing until the last second.

    “What we’re looking at is the accumulation of events,” McMahon said. “That brought me to an infrastructure problem, and the solution to the infrastructure problem was a pair of garden shears.”

    Texas officials have been using StreetVision and various other AI tools to address safety concerns. The approach was particularly helpful recently when they scanned 250,000 lane miles (402,000 kilometers) to identify old street signs long overdue for replacement.

    “If something was installed 10 or 15 years ago and the work order was on paper, God help you trying to find that in the digits somewhere,” said Jim Markham, who deals with crash data for the Texas Department of Transportation. “Having AI that can go through and screen for that is a force multiplier that basically allows us to look wider and further much faster than we could just driving stuff around.”

    Autonomous vehicles are next

    Experts in AI-based road safety techniques say what’s being done now is largely just a stepping stone for a time when a large proportion of vehicles on the road will be driverless.

    Pittman, the Blyncsy CEO who has worked on the Hawaii dashcam program, predicts that within eight years almost every new vehicle — with or without a driver — will come with a camera.

    “How do we see our roadways today from the perspective of grandma in a Buick but also Elon and his Tesla?” Pittman said. “This is really important nuance for departments of transportation and city agencies. They’re now building infrastructure for humans and automated drivers alike, and they need to start bridging that divide.”

  • Apple is ramping up succession plans for CEO Tim Cook and may tap this hardware exec to take over, report says | Fortune

    Apple’s board of directors and senior executives have been accelerating succession plans for Tim Cook, sources told the Financial Times.

    After serving as CEO for 14 years, Cook may step down as early as next year, the report said.

    Apple’s senior vice president of hardware engineering, 50-year-old John Ternus, is widely seen as the most likely successor, but no final decisions have been made yet, sources told the FT.

    The engineer joined Apple’s product design team in 2001 and has overseen hardware engineering for most major products the tech company has launched ever since, according to Ternus’ LinkedIn profile.

    He has also played a prominent role during Apple’s most recent keynotes, introducing products like the new iPhone Air. Ternus had been rumored to be Cook’s potential successor, according to previous reports.

    The company is unlikely to name a new CEO before its next earnings report in late January, and an early-year announcement would allow a new leadership team time to settle in before its annual events, the FT said. 

    The succession preparations have been long-planned and are not related to the company’s current performance, which is expecting strong end-of-year sales, people close to Apple told the FT.

    Apple did not immediately respond to Fortune’s request for comment and declined to provide a comment to the FT.

    The $4 trillion company is expecting year-on-year revenue growth of 10% to 12% for its holiday quarter ending in December, fueled by the release of the iPhone 17 model in September.

    Ternus would take the helm of the tech giant at an important time in its evolution. Although Apple has seen sales success with iPhones and new products like AirPods over the past couple of decades, it has struggled to break into AI and keep up with rivals.

    Apple has also been spending significantly less on AI investments than Mark Zuckerberg’s Meta, Amazon, Alphabet, and Microsoft.

    Apple has been criticized by analysts this year for not having a clear AI strategy. And despite approving a multibillion-dollar budget to run its own models via the cloud in 2026, it was reported in June that Apple is even considering using models from OpenAI and Anthropic to power its updated version of Siri, rather than using technology the company has built in-house. 

    Its AI-enabled Siri, originally slated for 2025, will be delayed until 2026 or later due to a series of technical challenges, the company announced earlier this year.

    Apple has also lost a number of senior AI team members since January, many of whom have joined Meta’s AI and Superintelligence Labs during talent poaching wars this year. The exodus of Apple’s AI execs included Ruoming Pang, former head of Apple’s foundation models and core generative AI team, who joined Meta with a compensation package reportedly worth $200 million.

    The company is also dealing with increased competition from one of its most influential former employees.

    In May, Sam Altman’s OpenAI acquired startup io for about $6.5 billion, bringing in former Apple chief designer Jony Ive to build AI devices. The 58-year-old designer was instrumental in creating the iPhone, iPod, and iPad. 

    Cook, Apple’s former operations chief, turned 65 this month. He has grown the company’s market capitalization to $4 trillion from $350 billion in 2011, when he took over the CEO role from company co-founder Steve Jobs.

    Under Cook, Apple became the first publicly traded company to reach $1 trillion in market capitalization in 2018—then it became the first company to reach $3 trillion in market cap in 2022.

    But more recently, its stock price has been lagging behind Big Tech rivals Alphabet, Nvidia, and Microsoft, though Apple is trading close to an all-time high after strong earnings were reported in October.

    Apple has also dealt with tariff complications as U.S.-China trade tensions have disrupted its supply chain.

    Cook has previously said he’d prefer an internal candidate to replace him, adding that the company has “very detailed succession plans.”

    “I really want the person to come from within Apple,” Cook told singer Dua Lipa last year on her podcast At Your Service.

    Nino Paoli

  • Unexpected group of artists fight to keep radio alive 100 years after its Golden Age

    The Golden Age of Radio began in the 1920s, and a century later, an unexpected group of artists are fighting to keep it relevant amid the rise of podcasts and other popular forms of digital media.

  • Cash App’s Moneybot might know your spending habits better than you do

    NEW YORK (AP) — Imagine if your bank could move money for you with only the slightest of digital nods for your approval. Or one that could tell you that you’re overspending and, more importantly, know how to address that overspending and put you on better financial footing.

    That’s what you’ll get with Moneybot, a new financial services chatbot shown off this week by Cash App that will be slowly introduced into its banking app this winter. Unlike existing bank chatbots, which can handle routine tasks like changing an address, Moneybot can take advanced actions for a customer like creating a savings plan, buying or selling stock, or even evaluating a customer’s spending habits.

    Moneybot is part of the next generation of chatbots using what the tech industry calls “agentic” AI, which turns tools like ChatGPT into an “AI agent” that can take action online on a person’s behalf. That means, instead of just writing text, answering questions or recommending products found online, an “agentic” chatbot could also buy a product.

    Amazon now has Rufus alongside Alexa; both can provide information on products or buy things on customers’ behalf. Walmart is rolling out “Chat & Buy” and Microsoft has Copilot Shopping.

    New as it is, agentic AI is already causing some controversy. Amazon is suing an AI chatbot company, Perplexity, for alleged computer fraud over AI shopping agents that Amazon says are disguising themselves as human buyers to access customer accounts without Amazon’s permission. Perplexity has denied the claims.

    Traditional banks have had chatbots for a while, notably Bank of America’s “Erica” or “Ask Amex” from American Express, but have hesitated to roll out agentic AI. They worry about possible liability if a chatbot buys a product by mistake for a customer or is maliciously used to buy things it is not supposed to.

    “Our top priority is to keep our customers’ and clients’ data safe above all else,” said Mark Birkhead, chief data officer at JPMorgan Chase, in an interview with the consulting firm McKinsey back in June on the issue of why the bank hasn’t rolled out agentic AI yet to customers.

    Cash App on the other hand is diving in head first.

    One notable feature of Moneybot is its prompts and suggestions. When Moneybot launches, it analyzes the customer’s transactions and spending and gives them independent recommendations on actions they could take. Unlike other bank chatbots, which take you to other parts of a bank’s website, Moneybot’s transactions and analysis happen inside a single screen. Cash App’s executives see Moneybot becoming the primary way people interact with Cash App in the future.

    Want to know your biggest spending categories instantly and how to cut your spending? Moneybot gives several suggestions in a matter of seconds, showing you the merchants you spent with. Need to save $1,000 toward a vacation in six months? Moneybot creates an automated savings plan for you with only a couple of prompts.
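    The arithmetic behind such a savings plan is straightforward. A toy sketch of how a deposit schedule might be derived (the rounding rule and two-deposits-per-month cadence are assumptions for illustration, not Cash App's actual logic):

```python
from math import ceil

def savings_plan(goal, months, deposits_per_month=2):
    # Per-deposit amount, rounded up to the next cent so the goal is met.
    deposits = months * deposits_per_month
    return ceil(goal / deposits * 100) / 100

# $1,000 in six months with two automated deposits per month:
per_deposit = savings_plan(1000.0, 6)
print(per_deposit)  # 83.34
```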

    Want to put money into the stock market? It takes only a request and confirmation in Moneybot, which will buy Tesla stock for you or even bitcoin. Moneybot will remind you, however, that it does not give investing advice.

    Moneybot may even anticipate why the customer is opening up the app in the first place.

    “We have such a deep understanding of who you are that it’s almost a failure if we have to rely on customers to ask the right questions,” said Owen Jennings, executive officer and business lead at Block, in an interview.

    Company officials pointed out that, despite having these agent abilities, Moneybot will still need active confirmation from the user to do its money-moving tasks. But that confirmation is often just a simple push of a button or a “yes” in a chat box.
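    That confirm-before-acting pattern can be sketched generically; this is an illustration of the idea only, not Cash App's actual implementation:

```python
# Illustration only: an agent proposes a money-moving action, but the
# transfer executes only after explicit user confirmation.

def propose_transfer(amount, destination):
    return {"type": "transfer", "amount": amount, "to": destination}

def execute_if_confirmed(action, user_reply):
    # Anything other than an explicit "yes" cancels the action.
    if user_reply.strip().lower() in {"yes", "y", "confirm"}:
        return f"Sent ${action['amount']:.2f} to {action['to']}."
    return "Action cancelled."

action = propose_transfer(50.0, "savings")
print(execute_if_confirmed(action, "yes"))    # Sent $50.00 to savings.
print(execute_if_confirmed(action, "maybe"))  # Action cancelled.
```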

    Cash App executives say Moneybot uses three different AI models, choosing the most appropriate one for the customer’s question. Some are easier to recognize, including the eager-to-please tone that often comes with ChatGPT 5.
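    Routing among multiple models is commonly a rule- or classifier-based dispatch that runs before the models themselves. A toy, keyword-based sketch (the model names are placeholders, not Cash App's actual lineup):

```python
# Toy model router: pick a model based on the kind of question.
# Model names are placeholders invented for this illustration.

def route(question):
    q = question.lower()
    if any(w in q for w in ("buy", "send", "transfer")):
        return "action-model"    # agentic, tool-using tasks
    if any(w in q for w in ("spend", "budget", "categor")):
        return "analysis-model"  # transaction analysis
    return "chat-model"          # general conversation

print(route("Buy $10 of bitcoin"))                    # action-model
print(route("What are my top spending categories?"))  # analysis-model
```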

    A Cash App employee demoing Moneybot learned, much to his chagrin, that he had spent heavily at Nordstrom last month. Moneybot kindly suggested he might want to cut back on his clothing purchases if he needs to save money.

    There are things Moneybot cannot do because of legal and privacy questions that have yet to be answered. Moneybot won’t offer you a loan, though it feels as if it easily could were that capability ever switched on.

    Because of the way the prompts are written, Cash App employees acknowledged there could be privacy and legal implications with what Moneybot suggests if appropriate guardrails are not put into place.

    Policymakers have raised concerns about how these chatbots could steer customers into one product or another, even if one product may not be in the best interest for the customer. For instance, what’s to stop a future version of Moneybot from favoring a buy now, pay later loan from AfterPay — also owned by Cash App’s parent company Block — for purchases instead of Affirm or Klarna?

    “If firms cannot manage using a new technology in a lawful way, then they should not use the technology,” said Rohit Chopra in 2024, when he was director of the Consumer Financial Protection Bureau. Chopra spent much of his tenure at the bureau raising concerns about the adoption of AI in financial services.

    In the meantime, asking for a loan inside Moneybot will transfer a customer to a human agent.

    Not surprisingly, Moneybot has the usual disclosure found at the bottom of most chatbots these days: Artificial intelligence can make mistakes. Somehow, that feels a bit more important in banking than an AI chatbot accidentally providing the wrong amount of cumin in a fajita recipe or buying the wrong size of shirt.

    __

    An earlier version of this story misspelled Moneybot.

  • One Tech Tip: iPhone users can now add US passport info to their digital wallets

    Just in time for the busy holiday travel season, iPhone users can now add their passport details to their Apple digital wallets.

    The company on Wednesday unveiled its new “Digital ID” system for users to add their U.S. passport information to Apple Wallet, which can be scanned at airport readers if travelers don’t have a Real ID.

    Digital ID acceptance “will roll out first in beta” at Transportation Security Administration checkpoints at more than 250 U.S. airports for “in-person identity verification during domestic travel.”

    The company warned that Digital ID doesn’t replace a physical passport and can’t be used for international travel and crossing borders.

    Apple already allowed people in 12 states and Puerto Rico to add their driver’s license or state ID to Apple Wallet, while TSA already accepts some form of a digital ID in at least 16 states and Puerto Rico.

    “You can breeze through more than 250 TSA checkpoints faster and more securely than ever before,” the agency’s website says.

    Here’s a guide on how to add your passport:

    Setup

    Open your iPhone’s Wallet app, tap the plus sign at the top and then tap the Digital ID option on the menu. If that doesn’t work for you, type “Digital ID” into the app’s search bar.

    Grab your passport and follow the instructions. You’ll have to use the camera to scan your passport’s photo page. Next, place your iPhone on the chip embedded on the passport’s back page to authenticate the data.

    Finally, you will need to verify your identity, first by taking a selfie and then by carrying out a series of facial and head movements, such as turning your head or closing your eyes.

    Once the verification procedures are done, the Digital ID will be added to the Wallet.

    How to use

    Using your iPhone to present your Digital ID is similar to using it to make a purchase.

    Double-click the phone’s side button, which calls up the Wallet app. On the stack of cards, tap on the Digital ID. When it’s your turn at the TSA kiosk, hold your phone or Apple Watch up to the reader.

    The machine will take your picture, and then your phone will let you review the information that’s being requested, such as name and date of birth. In order to authenticate those details, you’ll have to use the phone’s face or fingerprint scanner.
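    That review step amounts to selective disclosure: only the fields the reader requests are released, and only after the biometric check passes. A rough sketch of the idea (not Apple's actual protocol; the data and field names are invented):

```python
# Rough sketch of selective disclosure: release only the requested
# fields, and only after the device's biometric check passes.
# Data and field names are invented for illustration.

PASSPORT = {"name": "A. Traveler", "dob": "1990-01-01",
            "passport_no": "X1234567", "nationality": "USA"}

def present_id(requested_fields, biometric_ok):
    if not biometric_ok:
        return {}  # nothing is released without the holder's approval
    return {f: PASSPORT[f] for f in requested_fields if f in PASSPORT}

# A reader asks only for name and date of birth.
print(present_id(["name", "dob"], biometric_ok=True))
# {'name': 'A. Traveler', 'dob': '1990-01-01'}
```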

    What about security?

    Apple says your passport data is encrypted and stored on the device, and it can’t see when or where users present their Digital ID or the data that was shown.

    The use of a face or fingerprint scan ensures that only the person the ID belongs to can release the information.

    The company says that iPhone users don’t need to unlock, show, or hand over their device to present their Digital ID.

    Where can I use mobile IDs?

    More than a dozen states already accept some form of a mobile ID at airport checkpoints, according to TSA.

    The list includes: Arkansas, Arizona, California, Colorado, Georgia, Hawaii, Iowa, Louisiana, Maryland, Montana, New Mexico, New York, Ohio, Utah, Virginia and West Virginia, as well as Puerto Rico.

    Travelers can go to the TSA website for more details.

    ____

    Is there a tech topic that you think needs explaining? Write to us at [email protected] with your suggestions for future editions of One Tech Tip.

    ___

    AP Airlines and Travel Writer Rio Yamat contributed.

  • From guardrails to potholes, AI is becoming the new eyes on America’s roads

    As America’s aging roads fall further behind on much-needed repairs, cities and states are turning to artificial intelligence to spot the worst hazards and decide which fixes should come first.

    Hawaii officials, for example, are giving away 1,000 dashboard cameras as they try to reverse a recent spike in traffic fatalities. The cameras will use AI to automate inspections of guardrails, road signs and pavement markings, instantly discerning between minor problems and emergencies that warrant sending a maintenance crew.

    “This is not something where it’s looked at once a month and then they sit down and figure out where they’re going to put their vans,” said Richard Browning, chief commercial officer at Nextbase, which developed the dashcams and imagery platform for Hawaii.

    After San Jose, California, started mounting cameras on street sweepers, city staff confirmed the system correctly identified potholes 97% of the time. Now they’re expanding the effort to parking enforcement vehicles.
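    A figure like San Jose's 97% is simply precision measured over a manually checked sample; the numbers below are invented to show the calculation:

```python
def precision(confirmed_real, total_flagged):
    # Share of AI-flagged potholes that inspectors confirmed were real.
    return confirmed_real / total_flagged

# Invented sample: 97 of 100 flagged potholes confirmed on inspection.
print(precision(97, 100))  # 0.97
```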

    Texas, where there are more roadway lane miles than the next two states combined, is less than a year into a massive AI plan that uses cameras as well as cellphone data from drivers who enroll to improve safety.

    Other states use the technology to inspect street signs or build annual reports about road congestion.

    Hawaii drivers over the next few weeks will be able to sign up for a free dashcam valued at $499 under the “Eyes on the Road” campaign, which was piloted on service vehicles in 2021 before being paused due to wildfires.

    Roger Chen, a University of Hawaii associate professor of engineering who is helping facilitate the program, said the state faces unique challenges in maintaining its outdated roadway infrastructure.

    “Equipment has to be shipped to the island,” Chen said. “There’s a space constraint and a topography constraint they have to deal with, so it’s not an easy problem.”

    Although the program also monitors such things as street debris and faded paint on lane lines, the companies behind the technology particularly tout its ability to detect damaged guardrails.

    “They’re analyzing all guardrails in their state, every single day,” said Mark Pittman, CEO of Blyncsy, which combines the dashboard feeds with mapping software to analyze road conditions.

    Hawaii transportation officials are well aware of the risks that can stem from broken guardrails. Last year, the state reached a $3.9 million settlement with the family of a driver who was killed in 2020 after slamming into a guardrail that had been damaged in a crash 18 months earlier but never repaired.

    In October, Hawaii recorded its 106th traffic fatality of 2025 — more than all of 2024. It’s unclear how many of the deaths were related to road problems, but Chen said the grim trend underscores the timeliness of the dashcam program.

    San Jose has reported strong early success in identifying potholes and road debris just by mounting cameras on a few street sweepers and parking enforcement vehicles.

    But Mayor Matt Mahan, a Democrat who founded two tech startups before entering politics, said the effort will be much more effective if cities contribute their images to a shared AI database. The system can recognize a road problem that it has seen before — even if it happened somewhere else, Mahan said.

    “It sees, ‘Oh, that actually is a cardboard box wedged between those two parked vehicles, and that counts as debris on a roadway,’” Mahan said. “We could wait five years for that to happen here, or maybe we have it at our fingertips.”

    San Jose officials helped establish the GovAI Coalition, which went public in March 2024 for governments to share best practices and eventually data. Other local governments in California, Minnesota, Oregon, Texas and Washington, as well as the state of Colorado, are members.

    Not all AI approaches to improving road safety require cameras.

    Massachusetts-based Cambridge Mobile Telematics launched a system called StreetVision that uses cellphone data to identify risky driving behavior. The company works with state transportation departments to pinpoint where specific road conditions are fueling those dangers.

    Ryan McMahon, the company’s senior vice president of strategy & corporate development, was attending a conference in Washington, D.C., when he noticed the StreetVision software was showing a massive number of vehicles braking aggressively on a nearby road.

    The reason: a bush was obstructing a stop sign, which drivers weren’t seeing until the last second.

    “What we’re looking at is the accumulation of events,” McMahon said. “That brought me to an infrastructure problem, and the solution to the infrastructure problem was a pair of garden shears.”
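    The "accumulation of events" approach can be sketched as simple aggregation: count hard-braking reports per road segment and flag the outliers. A toy illustration with invented segment IDs and an arbitrary threshold:

```python
# Toy sketch of "accumulation of events": flag road segments where
# hard-braking reports from phones cluster well above the norm.
# Segment IDs and the threshold are invented for illustration.

from collections import Counter

brake_events = ["seg-12", "seg-12", "seg-12", "seg-12", "seg-12",
                "seg-7", "seg-3", "seg-12", "seg-7"]

counts = Counter(brake_events)
threshold = 4  # arbitrary cutoff for this illustration

hotspots = [seg for seg, n in counts.items() if n > threshold]
print(hotspots)  # ['seg-12']
```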

    Texas officials have been using StreetVision and various other AI tools to address safety concerns. The approach was particularly helpful recently when they scanned 250,000 lane miles (402,000 kilometers) to identify old street signs long overdue for replacement.

    “If something was installed 10 or 15 years ago and the work order was on paper, God help you trying to find that in the digits somewhere,” said Jim Markham, who deals with crash data for the Texas Department of Transportation. “Having AI that can go through and screen for that is a force multiplier that basically allows us to look wider and further much faster than we could just driving stuff around.”

    Experts in AI-based road safety techniques say what’s being done now is largely just a stepping stone for a time when a large proportion of vehicles on the road will be driverless.

    Pittman, the Blyncsy CEO who has worked on the Hawaii dashcam program, predicts that within eight years almost every new vehicle — with or without a driver — will come with a camera.

    “How do we see our roadways today from the perspective of grandma in a Buick but also Elon and his Tesla?” Pittman said. “This is really important nuance for departments of transportation and city agencies. They’re now building infrastructure for humans and automated drivers alike, and they need to start bridging that divide.”

  • Future data centers are driving up forecasts for energy demand. States want proof they’ll get built

    HARRISBURG, Pa. — The forecasts are eye-popping: utilities saying they’ll need two or three times more electricity within a few years to power massive new data centers that are feeding a fast-growing AI economy.

    But the challenges — some say the impossibility — of building new power plants to meet that demand so quickly has set off alarm bells for lawmakers, policymakers and regulators who wonder if those utility forecasts can be trusted.

    One burning question is whether the forecasts are based on data center projects that may never get built — eliciting concern that regular ratepayers could be stuck with the bill to build unnecessary power plants and grid infrastructure at a cost of billions of dollars.

    The scrutiny comes as analysts warn of the risk of an artificial intelligence investment bubble that’s ballooned tech stock prices and could burst.

    Meanwhile, consumer advocates are finding that ratepayers in some areas — such as the mid-Atlantic electricity grid, which encompasses all or parts of 13 states stretching from New Jersey to Illinois, as well as Washington, D.C. — are already underwriting the cost to supply power to data centers, some of them built, some not.

    “There’s speculation in there,” said Joe Bowring, who heads Monitoring Analytics, the independent market watchdog in the mid-Atlantic grid territory. “Nobody really knows. Nobody has been looking carefully enough at the forecast to know what’s speculative, what’s double-counting, what’s real, what’s not.”

    There is no standard practice across grids or for utilities to vet such massive projects, and figuring out a solution has become a hot topic, utilities and grid operators say.

    Uncertainty around forecasts is typically traced to a couple of things.

    One concerns developers seeking a grid connection, but whose plans aren’t set in stone or lack the heft — clients, financing or otherwise — to bring the project to completion, industry and regulatory officials say.

    Another is data center developers submitting grid connection requests in several separate utility territories at once, as PJM Interconnection, which operates the mid-Atlantic grid, and Texas lawmakers have found.

    Often, developers, for competitive reasons, won’t tell utilities if or where they’ve submitted other requests for electricity, PJM said. That means a single project could inflate the energy forecasts of multiple utilities.
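    The double-counting problem is easy to illustrate: if each utility sums its own interconnection queue independently, one project shopping several territories is counted several times. A toy sketch with invented project names and megawatt figures:

```python
# Toy illustration of forecast double-counting: the same data center
# project requests interconnection in three utility territories.
# All names and megawatt figures are invented.

requests = [
    {"utility": "Utility A", "project": "DC-Alpha", "mw": 500},
    {"utility": "Utility B", "project": "DC-Alpha", "mw": 500},
    {"utility": "Utility C", "project": "DC-Alpha", "mw": 500},
    {"utility": "Utility A", "project": "DC-Beta",  "mw": 300},
]

# Each utility counting its own queue triples DC-Alpha's demand.
naive_total = sum(r["mw"] for r in requests)

# Deduplicate by project: the plant will be built (at most) once.
by_project = {}
for r in requests:
    by_project[r["project"]] = r["mw"]
deduped_total = sum(by_project.values())

print(naive_total, deduped_total)  # 1800 800
```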

    The effort to improve forecasts got a high-profile boost in September, when a Federal Energy Regulatory Commission member asked the nation’s grid operators for information on how they determine that a project is not only viable, but will use the electricity it says it needs.

    “Better data, better decision-making, better and faster decisions mean we can get all these projects, all this infrastructure built,” the commissioner, David Rosner, said in an interview.

    The Edison Electric Institute, a trade association of for-profit electric utilities, said it welcomed efforts to improve demand forecasting.

    The Data Center Coalition, which represents tech giants like Google and Meta and data center developers, has urged regulators to request more information from utilities on their forecasts and to develop a set of best practices to determine the commercial viability of a data center project.

    The coalition’s vice president of energy, Aaron Tinjum, said improving the accuracy and transparency of forecasts is a “fundamental first step of really meeting this moment” of energy growth.

    “Wherever we go, the question is, ‘Is the (energy) growth real? How can we be so sure?’” Tinjum said. “And we really view commercial readiness verification as one of those important kind of low-hanging opportunities for us to be adopting at this moment.”

    Igal Feibush, the CEO of Pennsylvania Data Center Partners, a data center developer, said utilities are in a “fire drill” as they try to vet a deluge of data center projects all seeking electricity.

    The vast majority, he said, will fall off because many project backers are new to the concept and don’t know what it takes to get a data center built.

    States also are trying to do more to find out what’s in utility forecasts and weed out speculative or duplicative projects.

    In Texas, which is attracting large data center projects, lawmakers still haunted by a blackout during a deadly 2021 winter storm were shocked when told in 2024 by the grid operator, the Electric Reliability Council of Texas, that its peak demand could nearly double by 2030.

    They found that state utility regulators lacked the tools to determine whether that was realistic.

    Texas state Sen. Phil King told a hearing earlier this year that the grid operator, utility regulators and utilities weren’t sure if the power requests “are real or just speculative or somewhere in between.”

    Lawmakers passed legislation sponsored by King, now law, that requires data center developers to disclose whether they have requests for electricity elsewhere in Texas and to set standards for developers to show that they have a substantial financial commitment to a site.

    PPL Electric Utilities, which delivers power to 1.5 million customers across central and eastern Pennsylvania, projects that data centers will more than triple its peak electricity demand by 2030.

    Vincent Sorgi, president and CEO of PPL Corp., told analysts on an earnings call this month that the data center projects “are real, they are coming fast and furious” and that the “near-term risk of overbuilding generation simply does not exist.”

    The data center projects counted in the forecast are backed by contracts with financial commitments often reaching tens of millions of dollars, PPL said.

    Still, PPL’s projections helped spur a state lawmaker, Rep. Danilo Burgos, to introduce a bill to bolster the authority of state utility regulators to inspect how utilities assemble their energy demand forecasts.

    Ratepayers in Burgos’ Philadelphia district just absorbed an increase in their electricity bills — attributed by the utility, PECO, to the rising cost of wholesale electricity in the mid-Atlantic grid driven primarily by data center demand.

    That’s why ratepayers need more protection to ensure they are benefiting from the higher cost, Burgos said.

    “Once they make their buck, whatever company,” Burgos said, “you don’t see no empathy towards the ratepayers.”

    ___

    Follow Marc Levy at http://twitter.com/timelywriter.

  • Disney Reaches New Deal With YouTube TV, Ending Dayslong Blackout for Customers

    Disney and YouTube TV reached a new deal to bring channels like ABC and ESPN back to the Google-owned live streaming platform Friday, ending a blackout for customers that dragged on for about two weeks.

    “As part of the new deal, Disney’s full suite of networks and stations – including ESPN and ABC – have already begun to be restored to YouTube TV subscribers,” The Walt Disney Co. said in a statement.

    “We are pleased that our networks have been restored in time for fans to enjoy the many great programming options this weekend, including college football.”

    Disney content had gone dark on YouTube TV the night of Oct. 30, after the two sides failed to reach a new licensing deal. In the days that followed, YouTube TV subscribers were left without Disney channels on the platform — notably disrupting coverage of top U.S. college football matchups and professional sports games, among other news and entertainment offerings.

    Beyond ESPN and ABC, other Disney-owned content removed from YouTube TV during the impasse included channels like NatGeo, FX, Freeform, SEC Network, ACC Network and more.

    At the time the carriage dispute reached its boiling point, YouTube TV said that Disney was proposing terms that would be too costly, resulting in higher prices and fewer choices for its subscribers. And the platform accused Disney of using the blackout “as a negotiating tactic” — claiming that the move also benefited Disney’s own streaming products like Hulu + Live TV and Fubo.

    Disney, meanwhile, said that YouTube TV had refused to pay fair rates for its channels. The California entertainment giant also accused Google of “using its market dominance to eliminate competition.” And executives blasted the platform for pulling content “prior to the midnight expiration” of their deal last month.

    On Nov. 3, Disney also asked YouTube TV to restore ABC programming for Election Day on Nov. 4 to put “the public interest first.” But YouTube TV said this temporary reprieve would confuse customers — and instead proposed that the entertainment giant agree to restore both its ABC and ESPN channels while the two sides continue negotiations.

    The blackout marked the latest in a growing list of licensing disputes in today’s streaming world. And consumers often pay the price.

    From sports events to awards shows, live programming that was once reserved for broadcast has increasingly made its way into the streaming world over the years as more and more consumers ditch traditional cable or satellite TV subscriptions for content they can get online. But amid growing competition, renewing carriage agreements can also mean tense contract negotiations — and at times service disruptions.

    YouTube TV and Disney have been down this road before. In 2021, YouTube TV subscribers also briefly lost access to all Disney content on the platform after a similar contract breakdown between the two companies. That outage lasted less than two days, with the companies eventually reaching an agreement.

    Meanwhile, YouTube TV has removed other networks from its platform after expired agreements. Spanish-language broadcaster Univision has been unavailable on YouTube TV since Sept. 30, for example. At the time, its parent company TelevisaUnivision decried Google’s move — noting it would strip “millions of Hispanic viewers of the Spanish-language news, sports, and entertainment they rely on every day” and called on the platform to reverse course.

    YouTube TV’s base subscription plan costs $82.99 per month — which, beyond Disney content, currently includes live TV offerings from networks like NBC, CBS, Fox, PBS and more. The platform previously said it would give subscribers a $20 credit if its dispute with Disney lasted “an extended period of time” — which it reportedly allowed customers to start claiming on Nov. 9.

    Disney also doles out live TV through both traditional broadcasting and its own lineup of streaming platforms. ESPN launched its own streamer earlier this year, starting at $29.99 a month. And other Disney content can be found on platforms like Hulu, Disney+ and Fubo. Disney currently allows people to bundle ESPN along with Hulu and Disney+ for $35.99 a month — or $29.99 a month for the first year.

    Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

    Associated Press

  • Cyber Sovereignty at Risk: How Geopolitics Are Shaping Canada’s Digital Security

    From ransomware to quantum disruption, Canada must take urgent steps to defend its institutions and build long-term cyber capacity. Observer Labs

    This Q&A is part of Observer’s Expert Insights series, where industry leaders, innovators and strategists distill years of experience into direct, practical takeaways and deliver clarity on the issues shaping their industries. At a moment when cyber threats are escalating alongside geopolitical tensions, Canada finds itself at a crossroads: how to defend its digital infrastructure, protect its economy and maintain global competitiveness while preserving the values of an open, democratic society.

    Judith Borts, senior director of the Rogers Cybersecure Catalyst at Toronto Metropolitan University, sits at the intersection of policy, security and economic strategy. With a career spanning provincial economic development, national innovation policy and cross-sector collaboration, Borts has become one of Canada’s most vocal advocates for treating cybersecurity not as a niche technical specialty but as a shared societal responsibility—one that will determine the country’s digital sovereignty in the years ahead.

    Her work at the Catalyst focuses on building the talent, partnerships and operational capacity Canada needs to withstand increasingly sophisticated attacks. But it’s her policy background that gives her a panoramic view of what’s at stake. Canada, she argues, can no longer afford a reactive approach to cyber risk. Nation-state adversaries, criminal networks and A.I.-accelerated threats are moving faster than traditional governance models can respond, and the downstream costs to Canadians are already enormous.

    Borts outlines where Canada is falling behind global peers, what a truly unified national cyber strategy would require and why talent development may ultimately matter more than any single technological breakthrough. She also offers a candid look at the sectors most vulnerable today, the policies needed to strengthen resilience and how emerging technologies like A.I. and quantum computing will reshape the country’s digital future. Canada’s prosperity increasingly depends on something once viewed as purely defensive: a secure and trusted digital ecosystem.

    With global alliances shifting and the U.S. pulling back from international cooperation, how are these geopolitical tensions directly reshaping Canada’s cybersecurity priorities and its role in intelligence-sharing networks?

    Even as global alliances shift, intelligence sharing through networks like the Five Eyes, G7 and NATO remains strong. That’s not really where Canada’s biggest challenge is. What we really need to zero in on is building our own sovereign defence and resilience—including in the cyber and digital domains—so we can protect ourselves, respond quickly when threats come up and recover safely and securely.

    Cyberattacks today can come from anywhere (foreign governments, organized groups or even individuals), and they pose real risks to Canadian institutions, businesses and citizens. Our national security and defence strategies need to reflect that reality. We need to invest more in homegrown talent and innovation, from cybersecurity research to advances in A.I. and quantum technologies, so that Canada can stay ahead of the curve. It’s not about losing trust in our allies; it’s about maintaining our strong relationships while also making sure we have the strength and resilience to stand on our own when it matters most.

    Which Canadian sectors are most exposed to cyber risk, and how prepared are they to defend against the sophisticated attacks we’re seeing today?

    Every sector in Canada, as well as around the world, is exposed to cyber risk. Healthcare continues to face some of the most visible and alarming threats. Ransomware attacks have forced hospitals to cancel surgeries and even shut down emergency systems, putting patient safety directly at risk. The energy sector is another major target. And what used to be mainly about stealing data has now shifted to attempts to interfere with the systems that keep our power grid running. As our digital and physical infrastructure becomes more connected, those risks multiply and even a single successful attack can throw essential services across the country into chaos.

    Canada’s economy is powered by small and medium-sized businesses, which make up about 99 percent of all companies in the country and account for more than half of the country’s GDP. These companies are increasingly being targeted but often lack the specialized staff, training and resources to respond effectively. Plus, the impacts of a ransomware attack on an SMB’s bottom line can be massive. 

    We’re seeing progress in some areas, but these are still isolated efforts. Real national cybersecurity and resilience mean a coordinated approach, one that brings strong security standards together with real investment in education, innovation and long-term capacity building. That’s how we keep Canada’s economy secure and competitive in the years ahead.

    What specific policy mechanisms are needed to create a unified national cyber strategy that also respects Canada’s diverse regional priorities?

    A top-down approach alone won’t keep up with how fast threats evolve or be able to address the practical needs of all regions. Real resilience comes from bringing federal, provincial and local efforts together so we can build safe and secure communities, share information faster, respond in real time and build trust across sectors.

    We also need to make it easier for Canadian businesses to operate securely, both at home and abroad. That means creating a more harmonized and less fragmented set of cyber standards and compliance requirements, so companies aren’t forced to navigate a maze of conflicting rules across jurisdictions. Taking a more unified approach that integrates leading global approaches and consistent standards would help Canada stay internationally competitive while keeping our digital ecosystem strong and secure.

    In a nutshell, the federal government should set the national vision and provide the framework and tools while empowering local governments, organizations and innovators to adapt that framework to their realities. When everyone works from the same playbook, security can become part of how we do business—not a barrier to it.

    As cyber threats evolve, is Canada keeping pace with peers like the U.S. and the E.U. in building defensive capabilities, or are governance gaps holding it back?

    It’s an exciting time for cybersecurity in Canada, but the truth is we’re not yet keeping pace with our peers. The United States invests close to $800 billion or 3.5 percent of GDP annually in research and development, while Canada spends less than 2 percent of ours, and only a fraction of that goes toward cyber and defense innovation. That gap matters. The European Union, meanwhile, approaches cybersecurity not just as a security issue but as a pillar of economic resilience, seeing digital protection and competitiveness as two sides of the same coin. 

    Canada has world-leading talent in cybersecurity, A.I. and quantum. We are also building a strong foundation with proposed legislation like the Critical Cyber Systems Protection Act (Bill C-8) and a growing base of innovation, but we need to move faster—connecting our federal, provincial and municipal strategies, strengthening our talent pipeline and investing in homegrown technology. If we treat cybersecurity as both national defence and economic opportunity, we can close the gap and position Canada as a real leader in the digital future.

    What are the most critical lessons from recent high-profile cyberattacks, and how should they guide efforts to build systemic resilience?

    If there’s one thing recent cyberattacks have taught us, it’s that we need to wake up. No one is really paying attention to how serious this has become. We’re seeing massive fraud and data theft happening quietly, every day, and too often the response is weak at best. The impacts are not only felt at the victim’s level; the burden of the costs to Canadians is enormous, and we’re all paying for this. 

    And still, people aren’t changing their passwords, companies still skip basic protections like multi-factor authentication, and we’ve normalized the idea that our data will be stolen eventually. That has to change.

    There’s a common mantra in the cyber community about cyber threats: ‘it’s not if, but when.’ But the lesson isn’t that attacks are inevitable. It’s that we need to take preventative action and prepare for potential threats. Complacency is our biggest weakness.

    We can’t treat cybersecurity as background noise while we rush to adopt new technologies like A.I. A.I. can make systems smarter, but it also makes cyber threats faster, more targeted and harder to detect. At the same time, many organizations are adopting A.I. without fully addressing the very real risks that come with it. Every organization embracing A.I. should be asking: Are we doing this in a way that keeps us secure and our clients/customers safe?

    True resilience isn’t about specific actions by a cyber team; it’s about how fast and effectively we respond and how seriously we take the responsibility to protect ourselves in the first place.

    What role should partnerships between universities, public institutions, government, private industry and Canadian tech companies play in building national cyber resilience?

    No single group can solve Canada’s cybersecurity challenges on its own—the threats are too complex, the digital infrastructure too vast and diverse, and the stakes too high. True resilience depends on everyone working together: universities driving research and developing talent; government providing intelligence, guidance and coordination; industry building secure systems and helping to generate specialized talent; and Canadian tech companies pushing innovation forward.

    But collaboration can’t just happen in boardrooms or policy papers: we also have to meet Canadians where they are. Digital resilience and cyber awareness are no longer specialized skills; they are now basic workplace essentials. Everyone, regardless of their role, needs to understand how to protect information, manage digital tools responsibly, and remain vigilant to evolving threats. If we’re going to reach everyone, it means finding more creative and practical ways to weave cyber awareness and digital resilience into everyday life, whether that’s through local community programs, small business training or more accessible education. 

    When universities, public institutions, government, and industry connect directly with Canadians, cybersecurity stops being an abstract concept and becomes something everyone can take part in.

    That whole-of-society approach is no longer optional. It’s literally the foundation of our national resilience.

    How does developing a skilled and diverse cybersecurity workforce contribute to Canada’s digital sovereignty and long-term competitiveness?

    When we talk about securing Canada’s digital future, the real advantage isn’t just in technology; it’s in people. We need Canadians to protect what matters to Canada and to build robust digital infrastructure we can rely on to keep our economy and country growing in the face of mounting threats. That requires a trustworthy and capable workforce. At the Catalyst, we have no illusions about the impact of A.I. on cybersecurity work. The key question is: what does a skilled cybersecurity workforce look like in the age of A.I.?

    We are hyper-focused on creating not only skilled cybersecurity professionals, but also helping those in other organizational roles across different sectors to better understand the cybersecurity challenges they are facing while maintaining a keen eye on emerging technologies such as A.I. and quantum computing. Through our programs, we’re building job-ready professionals who can address the human, organizational and technical issues of cybersecurity. 

    But in an era where A.I. can automate certain technical functions, the real challenge—and opportunity—is in ensuring that we have an agile workforce and that we educate and support individuals in exercising judgment, creativity, critical thinking, contextual understanding and ethical reasoning that machines can’t replicate. 

    It’s like asking how you maintain a community of great writers when A.I. can draft a paragraph for you: the value shifts to insight, empathy, strategy and human perspective.

    How can Canada’s cyber strategy link security, innovation and economic growth?

    For too long, we’ve talked about cybersecurity as a purely defensive measure. Many still view it as just the cost of doing business. The truth is, in the modern economy, cybersecurity is an investment, and resilience is one of our biggest competitive advantages. It’s the bedrock of national prosperity and our ticket to maintaining our position as a serious player on the global stage.

    Think about it: when we create an environment built on digital trust, with infrastructure that is both robust and secure, everything else follows. It’s what gives international partners the confidence to invest here, and it’s what gives our own innovators in critical sectors like finance, healthcare and technology the secure launchpad they need to bring their best ideas to life. 

    So, the critical question is, how do you intentionally build that kind of environment? It doesn’t happen by accident, and it can’t rest solely on a policy or a plan. It only comes about through action.

    By combining smart government policies and strong intellectual property and patent protections with real incentives for our businesses, we stop treating cybersecurity as a problem to be solved and start seeing it for what it is: a massive opportunity to build our next generation of tech leaders and secure Canada’s role as an innovator.

    How will emerging technologies such as A.I. and quantum computing reshape Canada’s cybersecurity landscape, and what must be done now to ensure a secure, sovereign, and competitive digital ecosystem by 2030?

    A.I. is rewriting the cybersecurity landscape, and quantum computing won’t be far behind. Each one presents both huge opportunities and serious threats. As these technologies start to converge, we will see incredible new possibilities and potential, but also significant power to cause real damage if we’re not prepared.

    A.I. is now an arms race. For every advanced risk detection model we create, our adversaries are using A.I. to launch attacks. And quantum computing is on the horizon; it will threaten most of the encryption in common use today.

    This new reality demands a strategic change, including what the industry calls the “shift-left approach.” Traditionally, security testing happened at the end of a project, just before the software was released. Shift-left flips that model by pushing security earlier in the development cycle—essentially “shifting” it to the left on the project timeline. 

    For example, instead of waiting until a new system is fully built to check for vulnerabilities, developers should build security into the design on day one, and then test for risks at each step. This approach comes from modern software engineering, but it’s now essential for cybersecurity: if emerging technologies like A.I. aren’t built with security-by-design, we’re already behind. 
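    To make the idea concrete, here is a minimal, hypothetical sketch of one security check “shifted left”: a scan for hardcoded secrets that runs as a pre-commit hook or early CI step instead of at release time. It assumes a Python codebase; the patterns and function names are illustrative and no substitute for a dedicated secret scanner.

    ```python
    import re

    # Illustrative patterns for secrets that should never reach a repository.
    # In a real shift-left setup this check would run automatically on every
    # commit or push, long before release-stage security testing.
    SECRET_PATTERNS = [
        re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]+['\"]"),
        re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
        re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    ]

    def scan_source(text: str) -> list[str]:
        """Return one finding per line that matches a secret pattern."""
        findings = []
        for lineno, line in enumerate(text.splitlines(), start=1):
            for pattern in SECRET_PATTERNS:
                if pattern.search(line):
                    findings.append(f"line {lineno}: possible hardcoded secret")
                    break  # one finding per line is enough
        return findings

    if __name__ == "__main__":
        sample = 'db_password = "hunter2"\nprint("hello")\n'
        for finding in scan_source(sample):
            print(finding)
    ```

    The point is not this particular check but where it runs: catching the problem at commit time costs minutes, while catching it after deployment can cost a breach.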

    Ultimately, by investing in talent, targeting best-in-class R&D and building an innovative ecosystem, Canada can make sure we’re not just reacting to technological change but leading it.

    Cyber Sovereignty at Risk: How Geopolitics Are Shaping Canada’s Digital Security

    Judith Borts

    Source link

  • Anthropic’s Claude Expands Into Industrial A.I. Through Major Partnership

    Anthropic is founded and led by CEO Dario Amodei. JULIEN DE ROSA/AFP via Getty Images

    Anthropic, long known in Silicon Valley for its high-performance A.I. models and API ecosystem used across tech, finance and major consumer brands, is now pushing into the industrial sector. The company this week announced a partnership with IFS Nexus Black—the innovation arm of enterprise software giant IFS, whose customers include Lockheed Martin, Exelon and Quanta Services—and unveiled Resolve, an industrial A.I. platform powered by Claude.

    Anthropic already works with household names like BMW, L’Oréal, Sanofi and Panasonic. IFS is Anthropic’s first major customer in heavy industry, where split-second calls can halt production lines or shape how quickly field engineers respond to climate-driven disasters.

    “The true test of A.I. is how it performs when the stakes are high,” IFS CEO Kriti Sharma told Observer. “The partnership allows us to bring advanced models into the physical world responsibly and at scale.”

    Industrial A.I. bridges digital intelligence and real-world machinery. It predicts equipment failures, optimizes complex processes and reduces dangerous or repetitive work. Unlike consumer A.I. assistants, industrial systems must handle chaotic environments, inconsistent conditions and nonstop operational data.

    “It’s about applying A.I. to environments surrounded by sensors and machinery, with people on the ground making high-stakes decisions every minute,” Sharma said. “Industrial A.I. can listen to a turbine and warn of a fault before it happens or ‘see’ subtle changes in a pipeline that would take a human hours to detect. It connects planners, technicians and assets in real time to improve yield, reduce costs and keep frontline operations running safely.”

    Garvan Doyle, applied A.I. lead at Anthropic, said the platform also aims to demonstrate responsible A.I. in practice. “It’s about proving that frontier models (advanced, large-scale A.I. systems) can operate effectively where critical infrastructure is on the line,” he told Observer. Claude’s multimodal analysis and ability to synthesize disparate information “is a natural fit for industrial environments and what frontline workers need: an A.I. that can surface insights humans might miss.”

    Resolve uses Claude to interpret video feeds, audio from rattling machinery, thermal or pressure readings and even technical diagrams. Workers can speak directly to Resolve, which transcribes notes, connects to documentation and creates an automatic decision trail.

    The system’s goal is to reduce busywork, capture institutional knowledge and streamline the exchange between workers and the A.I. tools guiding them.

    “Claude is trained to be honest about uncertainty, avoid confabulation and reason carefully through complex problems,” said Doyle. In industrial contexts, that means technicians can see why Claude surfaces a potential fault or recommends a repair and verify it against their own expertise. “Keeping humans in the loop is key, and it’s especially impactful when A.I. works as a force multiplier for skilled workers,” he added.

    IFS says its customer, William Grant & Sons, the distiller behind Grant’s whisky and Hendrick’s gin, long struggled with fragmented data that forced 38 percent of repairs to be emergency fixes. With an early deployment of Resolve, IFS estimates the distillery will save roughly $11.05 million annually once the new workflows scale.

    Severe weather is also driving demand for industrial A.I. Last year, 27 U.S. natural disasters each caused more than $1 billion in losses. IFS says utilities using Resolve can restore power up to 40 percent faster after storms, floods or wildfires. The system analyzes weather and grid data to predict outages, route crews and coordinate mutual aid.

    “We’re solving the hard problems, not retrofitting generic A.I. into critical industries. The context, the data and the risk are completely different and we understand that at a very intimate level,” said Sharma. 

    Doyle added that Claude’s broad reasoning ability matters in environments where problems “don’t come pre-labeled and edge cases are constant. A narrowly trained system breaks when it encounters something outside its training distribution,” he said. “Claude’s broad intelligence means it can reason through novel situations and synthesize information across domains even when the specific scenario hasn’t been seen before.”

    Anthropic is entering an increasingly competitive industrial A.I. race as rivals invest heavily in infrastructure and sector-specific deployments. OpenAI is building its own network of industrial partnerships, including an alliance with Hitachi that embeds OpenAI models in energy, manufacturing and industrial data systems. Deployments like Mattel’s use of Sora 2 for toy design highlight OpenAI’s push into specialized workflows.

    The IFS partnership gives Anthropic something its competitors lack: direct access to field operations, maintenance workflows and disaster-response systems where reliability is paramount.

    In a sector where scale is often mistaken for capability, Anthropic is betting that trust, precision and resilience will matter most. If early deployments succeed, industrial A.I. could become one of Claude’s most tangible and consequential success stories.

    Victor Dey

    Source link

  • Anthropic warns of AI-driven hacking campaign linked to China

    WASHINGTON — A team of researchers has uncovered what they say is the first reported use of artificial intelligence to direct a hacking campaign in a largely automated fashion.

    The AI company Anthropic said this week that it disrupted a cyber operation that its researchers linked to the Chinese government. The operation involved the use of an artificial intelligence system to direct the hacking campaigns, which researchers called a disturbing development that could greatly expand the reach of AI-equipped hackers.

    While concerns about the use of AI to drive cyber operations are not new, what is concerning about the new operation is the degree to which AI was able to automate some of the work, the researchers said.

    “While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale,” they wrote in their report.

    The operation was modest in scope, targeting only about 30 individuals who worked at tech companies, financial institutions, chemical companies and government agencies. Anthropic noticed the operation in September and took steps to shut it down and notify the affected parties.

    The hackers only “succeeded in a small number of cases,” according to Anthropic, which noted that while AI systems are increasingly being used in a variety of settings for work and leisure, they can also be weaponized by hacking groups working for foreign adversaries. Anthropic, maker of the generative AI chatbot Claude, is one of many tech companies pitching AI “agents” that go beyond a chatbot’s capability to access computer tools and take actions on a person’s behalf.

    “Agents are valuable for everyday work and productivity — but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks,” the researchers concluded. “These attacks are likely to only grow in their effectiveness.”

    A spokesperson for China’s embassy in Washington did not immediately return a message seeking comment on the report.

    Microsoft warned earlier this year that foreign adversaries were increasingly embracing AI to make their cyber campaigns more efficient and less labor-intensive.

    America’s adversaries, as well as criminal gangs and hacking companies, have exploited AI’s potential, using it to automate and improve cyberattacks, to spread inflammatory disinformation and to penetrate sensitive systems. AI can translate poorly worded phishing emails into fluent English, for example, as well as generate digital clones of senior government officials.

    Source link

  • As AI-Artist Xania Monet Climbs the Charts, Victoria Monét’s Caught in the Uncanny Valley

    Grammy-winning singer, songwriter, and producer Victoria Monét has built her career on turning emotion into melody, writing hits for stars like Ariana Grande, Blackpink, and Coco Jones, as well as recording her own deeply personal records. Her songs are intimate, intentional, and overtly shaped by her voice, vision, and human collaboration. So when Xania Monet, an AI-powered R&B “artist” bearing a similar name, reportedly landed a $3 million record deal with Hallwood Media and started charting, the corporeal Monét felt uneasy. “Monet” also sonically evokes the name of another musician, Janelle Monáe, adding an additional layer to the confusion. (Hallwood Media did not respond to a request for comment.)

    Monét can’t definitively say that the AI artist was trained on her music, but the resemblance feels uncanny. “It’s hard to comprehend that, within a prompt, my name was not used for this artist to capitalize on,” she tells Vanity Fair. “I don’t support that. I don’t think that’s fair. When that name starts to ring bells in a certain way, it can easily be mixed up with my brand. It’s not ideal.”

    Even if the similarity is just a coincidence, that’s beside the point. Monét says when one of her friends typed a random prompt into ChatGPT, asking it to create a photo of “Victoria Monét making tacos” in a fictional setting, the image generator produced a woman who looked eerily like the emerging AI artist.

    As the first AI artist to hit a US radio airplay chart, Xania Monet has been met with heavy pushback. In an interview last Wednesday with CBS Mornings, Telisha “Nikki” Jones—the woman and lyricist who created the artificial artist and her sound—defended her practice. “Xania is an extension of me, so I look at her as a real person,” she said. “I just feel like AI…it’s the new era that we’re in. And I look at it as a tool, as an instrument, and utilize it.” (Jones has not yet responded to Vanity Fair’s request for comment.)

    The anxiety surrounding AI’s role in music isn’t new: In September, Kehlani decried Xania Monét landing a record deal. Last fall, Beyoncé told GQ an AI song mimicking her voice “scared” her; the year before that, Cher blasted the tech for using her voice. In a January BBC interview, Paul McCartney said AI isn’t all bad, but it shouldn’t “rip creative people off.” Last year, in a public show of solidarity, more than 200 musicians—including Billie Eilish, Stevie Wonder, Kacey Musgraves, and the estates of Frank Sinatra and Bob Marley—signed an open letter demanding protection against AI systems that imitate artists’ likeness, voice, and sound.

    Brea Cubit

    Source link

  • Google offers changes to satisfy EU ad-tech case but they don’t include breakup

    LONDON — Google has offered to make major changes to its business practices to resolve a European Union antitrust case targeting its ad-tech business, but they don’t include breaking up the company.

    The compliance plan Google submitted to the European Commission — the 27-nation bloc’s top antitrust enforcer — includes “immediate product changes” to end specific practices, the company said in a blog post.

    “Our proposal fully addresses the EC’s decision without a disruptive break-up that would harm the thousands of European publishers and advertisers who use Google tools to grow their business,” the company said Friday.

    Google also said it’s appealing the commission’s decision to slap the company with a 2.95 billion euro ($3.4 billion) fine in September for breaching the bloc’s competition rules by favoring its own digital advertising services. It accused Google of abusing its dominance by favoring its own online display advertising technology services to the detriment of competitors, online advertisers and publishers.

    As part of the punishment, Google was also required to come up with proposals to end what the Commission called “self-preferencing practices” and stop “conflicts of interest.”

    The Commission said it would force Google to sell off parts of its business if it wasn’t satisfied with the company’s proposed remedies.

    Google’s changes include giving publishers more pricing options on its ad management platform. To address conflicts of interest, the company is modifying its ad tools to give publishers and advertisers more choice and flexibility.

    “We will now analyse Google’s proposed measures to assess whether they effectively bring the self-preferencing practices to an end and address the situation of inherent conflicts of interest,” the Commission said in a statement.

    Source link

  • DOUBLE LAUNCH: SpaceX to send up two Falcon 9 rockets during same launch window

    KENNEDY SPACE CENTER — Hang on to your seats, because this will be a rare treat and a first: SpaceX is attempting to launch two Falcon 9 rockets at the same time on Friday night.

    What a way to kick off the weekend.


    What You Need To Know

    • If all goes well, SpaceX will also launch the Starlink 6-85 mission at the same time
    • A total of 58 Starlink satellites could be launched between the two missions

    One is the Starlink 6-89 mission, which will lift off from Kennedy Space Center’s Launch Complex 39A on the company’s Falcon 9 rocket, SpaceX stated.

    The launch window opens at 10:01 p.m. ET and closes at 2:01 a.m. ET Saturday.

    That means the California-based company needs to launch its Falcon 9 rocket during that window.

    The 45th Weather Squadron has given about a 95% chance of good liftoff conditions, with no forecast concerns.

    Find out more about the weather criteria for a Falcon 9 launch.

    If all goes well, SpaceX will also launch the Starlink 6-85 mission at the same time, with the same number of satellites: 29.

    However, it would not be uncommon for one of these missions to be pushed back to later in the launch window or a different day entirely.

    So, space lovers better cross their fingers for this one.

    Going up

    This will be the eighth mission for this Falcon 9’s first-stage booster, B1092.

    Its previous missions include the following:

    1. Starlink 12-13 mission
    2. NROL-69 mission
    3. Bandwagon-3 mission
    4. GPS III-7 mission
    5. Starlink 10-34 mission
    6. USSF-36 mission
    7. Starlink 10-61 mission

    After stage separation, the first-stage booster will land on the droneship A Shortfall of Gravitas, stationed in the Atlantic Ocean.

    About the mission

    SpaceX will send 29 of its Starlink satellites to low-Earth orbit to join the thousands already there.

    Once deployed and in their final orbit, they will provide internet service to many parts of the world.

    Dr. Jonathan McDowell of the Harvard-Smithsonian Center for Astrophysics has been tracking Starlink satellites.

    Before this launch, McDowell recorded the following:

    • 8,942 are in orbit
    • 7,716 are in operational orbit

    Anthony Leone

    Source link

  • DOUBLE LAUNCH: SpaceX to launch nearly 30 Starlink satellites

    CAPE CANAVERAL SPACE FORCE STATION — This might be an extremely rare treat for space lovers: SpaceX plans to send up two Falcon 9 rockets at the same launch time on Friday night. 


    What You Need To Know

    • If all goes well, SpaceX will also launch the Starlink 6-89 mission at the same time
    • A total of 58 Starlink satellites could be launched between the two missions

    One is the Starlink 6-85 mission, which will lift off from Space Launch Complex 40 at Cape Canaveral Space Force Station, SpaceX stated.

    The launch window opens at 10:01 p.m. ET and closes at 2:01 a.m. ET Saturday.

    That means SpaceX needs to launch its Falcon 9 rocket during that time frame.

    The 45th Weather Squadron has given about a 95% chance of good liftoff conditions, with no forecast concerns.

    Find out more about the weather criteria for a Falcon 9 launch.

    If all goes well, SpaceX will also launch the Starlink 6-89 mission at the same time, with the same number of satellites: 29.

    However, it would not be uncommon for one of these missions to be pushed back to later in the launch window or a different day entirely.

    Going up

    This will be the 24th mission for the Falcon 9 first-stage booster B1078, which has 23 flights under its belt, including a crewed one:

    1. Crew-6
    2. SES O3b mPOWER
    3. USSF-124 mission
    4. Bluebird
    5. Starlink 6-4
    6. Starlink 6-8
    7. Starlink 6-16
    8. Starlink 6-31
    9. Starlink 6-46
    10. Starlink 6-53
    11. Starlink 6-60
    12. Starlink 10-2
    13. Starlink 10-6
    14. Starlink 10-13
    15. Starlink 6-76
    16. Starlink 12-6
    17. Starlink 12-9
    18. Starlink 12-16
    19. Starlink 6-72
    20. Starlink 6-84
    21. Starlink 12-26
    22. Starlink 10-26
    23. Nusantara Lima

    After stage separation, the first-stage booster will land on the droneship Just Read the Instructions, stationed in the Atlantic Ocean.

    About the mission

    SpaceX will send the 29 Starlink satellites to low-Earth orbit to join the thousands already there.

    Once deployed and in their orbit, they will provide internet service to many parts of Earth.

    Dr. Jonathan McDowell of the Harvard-Smithsonian Center for Astrophysics has been tracking Starlink satellites.

    Before this launch, McDowell recorded the following:

    • 8,942 are in orbit
    • 7,716 are in operational orbit

    Anthony Leone

    Source link