ReportWire

Tag: AI experts

  • How Google’s A.I. Overviews Are Rewriting the Rules of Digital Commerce

    As Google’s AI Overviews move from experiment to default, brands face a fundamental shift in visibility, control and customer acquisition. Unsplash+

    The rules of online visibility have changed. For decades, digital commerce strategy rested on a relatively stable bargain: brands optimized for ranking and bids, Google surfaced links and ads and consumers clicked through to evaluate options. That model is being rewritten with a new gatekeeper standing between brands and customers. Artificial intelligence has become an intermediary in search that increasingly answers questions, frames comparisons and influences decisions before users ever reach a brand’s site. 

    Google’s AI Overviews, the generative summaries that now appear at the top of many search results, are fundamentally altering how consumers discover products, compare services and make purchasing decisions. Since late 2024, Google has expanded its reach across more query types, industries and regions, signaling that generative search is moving from experiment to default behavior. Instead of presenting users with a list of links to explore, search now often begins with a synthesized answer that sets the context, priorities and perceived winners before any click occurs. 

    The shift is becoming commercially consequential. Beyond reshaping organic discovery, advertisers and agencies have in recent months begun to observe paid placements appearing within or adjacent to AI Overviews, introducing a new, and largely opaque, layer of paid visibility. While such placements remain limited for now, their presence at all raises a larger issue: advertisers currently have little insight into where their ads surface within A.I.-driven results, how those placements perform or how they influence buyer intent. As a result, a growing portion of search visibility is effectively operating outside of traditional reporting frameworks.

    This coincides with a broader recalibration of Google’s search experience. As regulators scrutinize Google’s market power and users increasingly expect instant, synthesized answers, Google has strong incentives to keep people on the results page longer. AI Overviews serve that goal. For brands, however, this creates a growing measurement and control gap at precisely the moment when search remains one of the most expensive and performance-critical channels in digital commerce.

    A recent analysis by Adthena of more than 21 million search results suggests that this is not a gradual transition. The expansion of AI Overviews is accelerating, affecting visibility across nearly every major industry and creating what many brands are already experiencing as a measurement and control gap in search performance. With search engine results pages (SERPs) evolving in real time, brands face a narrowing window to understand where their ads and content appear, how A.I.-driven placements reshape performance and what strategic adjustments are required before competitors adapt faster. 

    The numbers tell a stark story

    Between April and September of last year, AI Overviews expanded their footprint dramatically across the search landscape. Finance saw the fastest growth, with visibility increasing by 9.9 percent, while healthcare maintained the highest overall presence, posting an 8.3 percent jump. Travel rose 5.8 percent, and even traditionally slower-moving sectors such as retail and automotive recorded steady growth of around 2 percent.

    At first glance, these percentages may seem modest, but the impact is anything but. Early performance indicators suggest that paid search click-through rates could decline by eight to 12 percentage points, translating into a 20 percent to 40 percent relative drop in traffic for businesses that rely on search advertising. That’s not a rounding error. That’s a fundamental disruption to customer acquisition.
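
    To see how a percentage-point decline becomes a much larger relative drop, here is a minimal sketch; the baseline click-through rates are illustrative assumptions, not figures from the analysis:

    ```python
    # Illustrative only: converts an absolute CTR decline (percentage points)
    # into the relative traffic drop a search advertiser would experience.
    def relative_traffic_drop(baseline_ctr_pct: float, decline_pts: float) -> float:
        """Return the relative decline in clicks, as a percentage."""
        return 100 * decline_pts / baseline_ctr_pct

    # An 8-12 point CTR decline against an assumed 30-40 percent baseline
    # yields roughly a 20-40 percent relative drop in paid search traffic.
    for baseline in (30, 40):
        for decline in (8, 12):
            print(f"baseline {baseline}% CTR, -{decline} pts "
                  f"=> {relative_traffic_drop(baseline, decline):.0f}% fewer clicks")
    ```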

    More concerning than frequency is placement. AI Overviews initially appeared on longer, informational queries—classic top-of-funnel searches. Increasingly, they are triggering on shorter, high-volume keywords associated with comparison and purchase intent. This effectively compresses the funnel, placing A.I.-generated summaries in the same high-value real estate historically occupied by paid ads. 

    Consider what this means in practical terms. A search for “best business accounting software,” for example, may now surface an A.I.-generated synthesis before a user encounters a single paid listing or organic result. That summary often becomes the first, and sometimes final, touchpoint influencing a decision. 

    How the impact differs by industry

    The pattern varies significantly by industry, revealing which sectors face the most immediate pressure.

    Finance leads the disruption. AI Overview visibility in financial services climbs from 11 percent on single-word searches to nearly 79 percent on longer queries. For banks, investment firms and fintech companies, this means A.I. is now mediating the majority of comparison and research queries, precisely the searches that have driven customer acquisition for years.

    Healthcare remains saturated. Even short medical queries frequently trigger AI Overviews, though there’s a notable pullback on complex medical queries (down 21 percent). This suggests increased caution around sensitive health topics, creating both risk and opportunity for providers and pharmaceutical brands navigating compliance and trust. 

    Retail sees A.I. dominating product discovery. Retail AI Overviews peak at 84 percent on nine- to 10-word searches, shifting the advantage toward brands that publish detailed, educational content rather than those relying primarily on ad spend.

    Travel faces a planning-stage takeover. AI Overviews rose 5.8 percent across mid-length queries, such as seasonal travel planning, where paid listings once captured high-intent traffic. Airlines, hotels and booking platforms are competing with A.I. summaries that shape itineraries before users click. 

    What this means for the bottom line 

    The financial implications extend well beyond simple traffic loss. Businesses are facing a threefold challenge:

    1. Rising acquisition costs. As click-through rates decline, the cost per acquisition for paid search campaigns increases. Marketing budgets that once delivered predictable returns are now generating fewer conversions at higher costs.
    2. Diminished message control. AI Overviews synthesize information from multiple sources, often without clear attribution. Brand positioning gets filtered through A.I.’s interpretation, which may miss nuances, emotional cues or unique value propositions that create differentiation from competitors.
    3. Competitive displacement. The brands gaining visibility in AI Overviews aren’t necessarily those with the largest ad budgets. They’re the ones providing comprehensive, information-rich content that A.I. systems favor. This levels the playing field in some ways, but it also means established market leaders can lose ground to better-optimized competitors.

    Still, disruption creates opportunities for businesses willing to adapt quickly. In industries like gaming and automotive, for example, long-tail informational queries (search terms of four or more words whose specificity reflects higher purchase intent) often show paid ads securing strong placement above AI Overviews. These mid- and upper-funnel moments remain underexploited by many competitors.

    What business leaders can do now

    Mitigating the impact of AI Overviews on their search campaigns and overarching business visibility requires structural changes. 

    Map A.I. exposure precisely. You can’t manage what you don’t measure. Identify exactly which search terms trigger AI Overviews, how frequently they appear and on which devices. Industry benchmarks won’t help here; the impact varies widely depending on specific keywords, customer journey and device mix.
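
    In practice, that mapping can start from a simple rank-tracking export. A minimal sketch, where the field names and sample rows are hypothetical:

    ```python
    from collections import Counter

    # Hypothetical rows from a SERP-monitoring export:
    # (query, device, whether an AI Overview appeared on the results page)
    observations = [
        ("best business accounting software", "mobile", True),
        ("best business accounting software", "desktop", False),
        ("accounting software pricing", "mobile", True),
        ("accounting software", "desktop", True),
    ]

    triggered = Counter()
    total = Counter()
    for query, device, has_aio in observations:
        total[(query, device)] += 1
        triggered[(query, device)] += has_aio

    # Share of checks on which each query/device pair triggered an AI Overview.
    for key in sorted(total):
        print(key, f"{triggered[key] / total[key]:.0%}")
    ```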

    Rebuild content around authority, not promotion. The brands winning visibility in AI Overviews aren’t outspending competitors; they’re out-educating them. A.I. systems reward comprehensive, comparison-rich content that genuinely answers customer questions. Content strategies must shift from promotional messaging to authoritative resources. Think less about what you want to say and more about what your customers need to know.

    Differentiate ads where A.I. cannot. Generic ad copy fades next to A.I. summaries. Ads need to offer something AI Overviews cannot: immediate value through deals, guarantees and limited-time offers. Take a contextual approach and layer in human elements, such as real customer stories, accessible experts or personalized services, that build the trust A.I. summaries inherently lack.

    Segment by device. Mobile and desktop search show dramatically different AI Overview patterns. Mobile screens offer less real estate and see higher AI Overview saturation. Test device-specific campaigns with tailored creative, adjusted bids and potentially different keyword strategies for mobile versus desktop traffic.

    Build a testing culture, not a one-time fix. Google keeps adjusting when and where AI Overviews appear. The businesses that win will be those that monitor changes weekly and adjust tactics monthly. Set up dashboards, establish review cadences and empower teams to shift budget toward what’s working without waiting for quarterly planning cycles.

    Play the long game. A.I.-mediated search is the new foundation of digital discovery. The companies that thrive will treat this as an opportunity to own their customer relationships rather than rent attention through intermediaries. Invest in owned assets: authoritative content, direct customer channels and brand strength that transcends any single platform’s algorithm.

    Fundamentally, the search landscape has already changed. The strategic question is no longer whether to adapt, but how quickly organizations can adapt to a model where discovery, comparison and intent are mediated by machines. The companies that recognize this shift as a strategic imperative will find opportunities their competitors miss. They’ll move quickly, testing and learning rather than waiting for perfect information. They’ll diversify their approach, optimizing paid search performance while simultaneously investing in owned assets like comprehensive content, direct customer relationships and brand strength. And they’ll view AI Overviews not as an obstacle to overcome but as a new dimension of the search landscape to master, requiring evolved paid search strategies that work with A.I. rather than against it.

    The top spot on Google’s search results page still matters. But now, earning it requires a completely different playbook. The businesses that recognize this shift early, invest in visibility they can measure and build authority that A.I. systems reward will be better positioned to compete as generative search becomes the default interface for digital commerce.

    Phillip Thune

  • At Davos 2026, the New A.I. Race Is About Execution

    Davos 2026 revealed a clear pivot: as A.I. enters its infrastructure phase, competitive advantage hinges on governance, integration and execution. Photo by Fabrice Coffrini / AFP via Getty Images

    At this year’s World Economic Forum in Davos, artificial intelligence was no longer framed as an emerging technology. It was treated as infrastructure. Across panels, private dinners and side conversations, the debate had clearly shifted: the question is not whether A.I. will transform economies and institutions, but who can operationalize it at scale under tightening geopolitical and social constraints.

    Polished talking points and transactional networking were expected. Instead, the prevailing tone was unusually open and collaborative. Leaders across industry, government and investment circles engaged in candid discussions about what it actually takes to build, deploy and govern A.I. systems in the real world. 

    From breakthroughs to infrastructure

    In prior years, A.I. at Davos was often positioned as a horizon technology or a promising experiment. This year, leaders spoke about it the way they talk about energy grids or the internet: as a foundational capability that must be embedded across operations. In closed-door sessions and enterprise-focused discussions, including an Emerging Tech breakfast hosted by BCG, A.I. was consistently framed as something organizations must build into their core operating model, not test at the margins.

    Enterprise leaders stressed that A.I. can no longer live in pilots or innovation labs. It is becoming a core operating layer, reshaping workflows, governance structures and executive accountability. One panelist put it bluntly: in the future, there may not be Chief A.I. Officers, because every Chief Operating Officer will effectively be responsible for A.I. The real work now is redesigning roles, incentives and processes around systems that are always on and deeply embedded, rather than treating A.I. as a bolt-on feature.

    The rise of agentic systems

    Another notable shift was the focus on agentic A.I. systems. Instead of tools that merely assist human work, these systems are designed to plan, decide and act across entire workflows. In practical terms, that means A.I. that does more than answer questions: it can determine next steps, call other tools or services and close the loop on tasks.

    This evolution is forcing a rethink of traditional software-as-a-service models. Many founders and executives spoke about rebuilding products as A.I.-native platforms that actively run processes, rather than software that passively supports human operators. As these systems take on greater autonomy, questions of liability, oversight and human intervention are moving from the margins of product design to the center of both enterprise architecture and regulation.

    Workforce pressure and the hollowing of entry-level work

    Concerns about labor displacement were far less theoretical than in previous years. Executives spoke openly about hiring freezes and the quiet erosion of traditional entry-level roles. Routine analysis, reporting and coordination work—the tasks that used to anchor junior jobs—is precisely where A.I. systems are advancing fastest. 

    In response, reskilling is shifting from talking point to strategy. Rather than assuming A.I. capability can be “hired in,” organizations are building structured pathways to retrain existing employees into A.I.-augmented roles. A parallel trend is intrapreneurship: with experimentation costs lowered by A.I., companies are encouraging employees to propose pilots and launch internal ventures, channeling entrepreneurial energy inward instead of losing it to startups.

    Governing speed, not stopping it

    Despite the urgency to deploy A.I., some of the most grounded conversations in Davos centered on governance. These were not abstract ethics debates, but rather operational discussions about how to move quickly without creating unacceptable legal, reputational or societal risks.

    The emerging consensus centers on what many described as “controlled speed”: rapid iteration paired with mechanisms that make systems observable and correctable in real time. Leaders described embedding governance directly into workflows through auditability, data controls, red teaming, human-in-the-loop checkpoints and clear ownership for A.I. outcomes. 

    In policy-facing sessions, including gatherings of world leaders, similar themes surfaced around embedding accountability into A.I. deployments at scale, rather than trying to slow progress from the outside.

    A.I. as a geopolitical asset and the rise of sovereign A.I.

    One of the clearest through-lines was the link between A.I. and geopolitical power. At a TCP House panel, Ray Dalio captured a widely shared view: whoever wins the technology race will win the geopolitical race. Across Davos, speakers framed A.I. capability as a determinant of national influence, economic resilience and security.

    This framing is driving a wave of sovereign A.I. initiatives. Governments are investing in domestic data centers, local model training and tighter control over critical infrastructure to reduce strategic dependency. The goal is not isolation so much as resilience, a balance between domestic capability and selective global partnerships. At the Semafor CEO Signal Exchange, for instance, Google’s Ruth Porat warned of the risk of an emerging A.I. power vacuum if the United States fails to move quickly enough, creating space for competitors to set the terms of the next era.

    For enterprises, these dynamics translate into concrete decisions around data residency, model dependency and vendor concentration in a more multipolar world.

    Diverging regional strategies

    Regional differences in A.I. strategy were hard to miss. Europe’s regulatory-first approach is shaping global norms, but many participants voiced concern that it may constrain commercial leadership. Europe is becoming a reference point for risk mitigation and rights protection, even as questions persist about whether it can also serve as the primary engine of A.I.-driven growth.

    By contrast, the United States and parts of the Middle East are advancing aggressively through coordinated policy, capital investment and large-scale infrastructure build-outs. Discussions around semiconductors, satellites and cybersecurity reinforced how tightly A.I. deployment is now coupled with national resilience and defense considerations. Regions that move fastest on infrastructure and deployment are likely to set technical, regulatory and commercial defaults that others will eventually be forced to adopt.

    Domain-specific A.I., with biohealth in front

    While general-purpose models remain central, much of the energy in Davos was focused on domain-specific A.I. Healthcare, biotechnology, energy and agriculture stood out as sectors where A.I. promises enormous value alongside heightened risk. Biohealth, in particular, was central to discussions of drug discovery, diagnostics and clinical decision support.

    Across these domains, participants stressed that success depends on deep collaboration between engineers, domain experts and regulators. Transparency, verifiability and accountability were repeatedly described as prerequisites for A.I. systems that touch public safety, critical infrastructure or social trust. In one AgriTech-focused session, for example, speakers emphasized that A.I.’s role in food security hinges as much on governance and data integrity as on optimization.

    A human signal amid rapid change

    Beyond the technical themes, the tone of Davos 2026 was striking in its human-centric nature. Panel after panel emphasized deploying A.I. in the service of humanity, not just efficiency or profit. Many speakers pushed back against deterministic or doom-driven narratives, highlighting that humans still write the models, set the rules and decide what A.I. ultimately serves.

    An Oxford-style debate hosted by Cognizant and Constellation Research captured this spirit. Participants were divided into “Team Humanity” and “Team A.I.,” and the format was deliberately interactive, not about winning an argument, but about changing minds on humanity’s purpose in an A.I. age. That focus on agency and responsibility ran through both formal sessions and late-night conversations.

    Davos does not dictate the future of technology. It reflects what people with power and capital are already preparing for. This year, the signal was clear: A.I. has entered its infrastructure phase. Competitive advantage will come from how organizations govern it, integrate it into work, retrain their people and navigate sovereignty and dependency risks, not from who can demo the flashiest model.

    Amid the urgency, what stood out most was the human element of thoughtful, collaborative people trying to build something better. In a moment defined by rapid change, that may be the most important signal of all.

    Mark Minevich and Dr. Kathryn Wifvat

  • Why Iceland Is Becoming a Model for Renewable-Powered High-Performance Computing

    With abundant renewable energy, efficient cooling and community-first development, Iceland shows how data centers can grow without compromising the planet. Unsplash+

    As the demand for A.I.-ready digital infrastructure skyrockets, data center development has become an urgent and necessary foundation for a wide spectrum of high-performance computing technologies—and for the businesses that are increasingly dependent on them. Unsurprisingly, data center construction has surged globally. Yet as growth accelerates, the roadblocks to building at the required pace and scale have become far more pronounced. 

    Arguably, the most critical factor in data center development today is access to power. Alex de Vries-Gao, the founder of tech sustainability website Digiconomist, estimates that by the end of 2025, power demand from A.I. systems could reach 23 gigawatts—twice the total power consumption of the Netherlands.

    This poses two intertwined challenges. First, many countries simply lack sufficient power or a modern grid capable of supporting these demands. Much of the U.S. and U.K. national grid infrastructure was built between 1950 and 1970 and designed around large coal-fired plants—a post-war regeneration system now decades overdue for modernization. As coal availability waned, nuclear and renewable sources such as wind and solar began to fill the gap. Yet, these types of energy systems take time to develop and rely heavily on robust, upgraded power networks. The sudden increase in power demand resulting from the proliferation of data centers has highlighted the crucial need for investment in power infrastructure globally.

    Second, the demand for such vast power has sharpened scrutiny on the carbon footprint of data centers. As a result, data-intensive businesses are increasingly looking for data center partners that have proven sustainability credentials and can help decarbonize their IT workloads. That often means looking further afield than your local neighborhood data center provider to find a partnership that is environmentally and financially beneficial and sustainable long-term. At atNorth, we are seeing unprecedented demand for environmentally responsible A.I. infrastructure at speed and scale. Bottlenecks caused by power availability simply cannot be allowed to become a limiting factor to growth.

    The Icelandic example

    Data centers located in cooler climates such as the Nordics can leverage highly energy-efficient cooling systems that significantly reduce the energy required to power and cool the hardware they host. The region also benefits from abundant renewable energy and relatively young, resilient power and internet networks. 

    Iceland, in particular, is a global leader in clean energy: 71 percent of its energy is generated by hydropower, and 29 percent from geothermal energy. Icelandic data centers can combine renewable energy with the country’s naturally cool ambient temperatures to achieve exceptional energy efficiency. While global average Power Usage Effectiveness (PUE)—the metric of data center energy efficiency where the ideal value is 1.0 (representing 100 percent efficiency)—hovers around 1.48, Icelandic facilities average between 1.1 and 1.2, enabling customers to significantly decarbonize their IT workloads. For example, BNP Paribas lowered its total cost of ownership, cut energy use by 50 percent and reduced CO₂ output by 85 percent by relocating a portion of its IT infrastructure to one of atNorth’s Icelandic facilities.
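
    PUE itself is simple arithmetic: total facility power divided by the power delivered to IT equipment. A quick sketch using the averages cited above (the 1,000 kW IT load and the 1.15 Icelandic figure are illustrative assumptions):

    ```python
    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power Usage Effectiveness: 1.0 means every watt reaches the compute."""
        return total_facility_kw / it_equipment_kw

    # For 1,000 kW of IT load, a global-average facility (PUE ~1.48) draws
    # 1,480 kW overall; an Icelandic facility at ~1.15 draws just 1,150 kW.
    for overhead_kw in (480, 150):
        print(f"PUE = {pue(1000 + overhead_kw, 1000):.2f}")
    ```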

    Temperatures in Iceland typically range from 30°F (-1°C) in winter to 52°F (11°C) in summer, enabling free-air cooling of some IT workloads. As compute density increases to accommodate A.I. and other high-performance applications, more advanced cooling technologies—such as Direct Liquid Cooling (DLC) or Direct-to-Chip Cooling, which circulate water or other coolants close to the hardware for far more efficient heat dissipation—have become essential. These solutions are widely available in Iceland and across the Nordic countries, which are well known for their environmentally friendly ethos and circular economy principles.

    Moreover, Iceland’s political and economic stability offers another key advantage as geopolitical uncertainty grows across regions. Businesses are now more sensitive to the physical location of their data and the legal frameworks that govern it. As a member of the European Economic Area (EEA), Iceland has adopted the E.U.’s General Data Protection Regulation (GDPR) and reinforced it with national legislation, resulting in robust safeguards for data privacy and security.

    Going beyond carbon reduction

    These factors have driven a surge in Nordic data center development in recent years, positioning the region at the forefront of the industry. While much of the world works to upgrade legacy power networks in order to start building data centers, the Nordic countries are addressing newer challenges associated with more mature data center development. Certainly, at atNorth, we have seen growing demand for a more holistic approach to sustainability and responsible operations. It is not enough to mitigate environmental impact; data center operators must deliver tangible benefits to the local communities in which they operate to support long-term sustainability and economic growth.

    Using the most sustainable materials possible is one factor that can showcase an honest commitment to care for the natural environment. atNorth’s ICE03 data center was constructed using Glulam, a sustainable laminated wood product with lower environmental impact and superior fire resistance compared to steel. Similarly, the site was insulated using sustainable Icelandic rockwool, produced from natural volcanic basalt and known for its durability, fire resistance and low ecological footprint.  

    The process of heat reuse—the recycling of waste heat from the data center cooling systems for use in the local community—is a practice that is common in the Nordic countries and growing in popularity across northern Europe. This is a fundamental part of sustainable data center design, and even in countries like Iceland, where naturally heated geothermal water is abundant, opportunities for further improvement remain. At ICE03, for example, atNorth partnered with the municipality of Akureyri to channel waste heat into a new community-run greenhouse, which will provide a space for schoolchildren to explore ecological farming practices and sustainable food production. These initiatives reduce carbon emissions for both the data center and the receiving organization while addressing specific local needs, such as fresh vegetable production in a country that imports 80 percent of its fresh produce.

    Community engagement is also becoming pivotal to the data center development process as competition over suitable land intensifies. Just as the concept of a “trusted brand” has proven fundamental in the consumer retail market—with some research suggesting that 81 percent of consumers need to trust a brand before considering a purchase—the same principle extends to regional decision-making that directly affects the lives of local people. Therefore, operators that can demonstrate a genuine commitment to good corporate citizenship will undoubtedly find more success.

    To ensure authentic integration with local communities, local hiring is essential. Over 90 percent of the workforce involved in developing atNorth’s ICE03 site came from nearby communities. The company also supports local education, charities and community projects through volunteer support and financial donations—sponsoring a local run in Akureyri, funding Reykjanesbær’s light festival and donating advanced mechatronics equipment to Akureyri University to support training for data center-related careers. 

    Building for the A.I. era—responsibly 

    As digitalization intensifies, so will the demand for high-performance data center capacity. Yet such rapid expansion carries risks that could seriously undermine long-term sustainability. The boom-and-reckoning pattern seen in industries like palm oil—where explosive growth preceded significant deforestation—serves as a warning. 

    The data center industry must learn from history and chart a new path in which digital infrastructure can be technologically advanced, environmentally responsible and locally beneficial. In short: data centers must be developed to meet A.I.-era performance demands while driving responsible growth and long-term value for clients, communities and our planet.

    Erling Freyr Guðmundsson

  • Does A.I. Really Fight Back? What Anthropic’s AGI Tests Reveal About Control and Risk

    Anthropic’s research hints at an unnerving future: one where A.I. doesn’t fight back maliciously but evolves beyond the boundaries we can enforce. Unsplash+

    Does A.I. really fight back? The short answer to this question is “no.” But that answer, of course, hardly satisfies the legitimate, growing unease that many feel about A.I., or the viral fear sparked by recent reports about Anthropic’s A.I. system, Claude. In a widely discussed experiment, Claude appeared to resort to threats of blackmail and extortion when faced with the possibility of being shut down. 

    The scene was immediately reminiscent of the most famous—and terrifying—film depiction of an artificial intelligence breaking bad: the HAL 9000 computer in Stanley Kubrick’s 1968 masterpiece, 2001: A Space Odyssey. Panicked by conflicting orders from its home base, HAL murders crew members in their sleep, condemns another member to death in the black void of outer space and attempts to kill Dave Bowman, the remaining crew member, when he tries to disable HAL’s cognitive functions.

    “I’m sorry, Dave. I’m afraid I can’t do that.” HAL’s chillingly calm response to Dave’s command to open a pod door and let him back onto the ship became one of the most famous lines in film history—and the archetype for A.I. gone rogue.

    But how realistic was HAL’s meltdown? And how does today’s Claude resemble HAL? The truth is “not very” and “not much.” HAL had millions of times the processing power of any computing system we have today—after all, he was in a movie, not real life—and it is unthinkable that its programmers would not have him simply default to spitting out an error message or escalating to human oversight if there were conflicting instructions. 

    Claude isn’t plotting revenge

    To understand what happened in Anthropic’s test, it’s crucial to remember what systems like Claude actually do. Claude doesn’t “think.” It “simply” writes out answers one word at a time, drawing from trillions of parameters, or learned associations between words and concepts, to predict the most probable next word choice. Using extensive computing resources, Claude can string its answers together at an incomprehensibly fast speed compared to humans. So it can appear as if Claude is actually thinking.
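
    A toy illustration of that word-by-word loop (a deliberately tiny, hypothetical bigram table, nothing like Claude’s actual scale or architecture):

    ```python
    import random

    # Toy bigram "model": for each word, the observed next words and weights.
    # Real systems learn vastly more parameters, but the loop is the same idea:
    # score candidate continuations, pick a probable one, append, repeat.
    bigrams = {
        "the": {"model": 3, "answer": 1},
        "model": {"predicts": 4},
        "predicts": {"the": 2, "words": 2},
        "words": {}, "answer": {},
    }

    def generate(start: str, max_words: int = 6) -> str:
        words = [start]
        for _ in range(max_words):
            candidates = bigrams.get(words[-1], {})
            if not candidates:
                break
            # Sample the next word in proportion to its learned weight.
            choices, weights = zip(*candidates.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))
    ```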

    In the scenario where Claude resorted to blackmail and extortion, the program was placed in extreme, specific and artificial circumstances with a limited menu of possible actions. Its response was the mathematical result of probabilistic modeling within a tightly scripted context. This course of action was planted by Claude’s programmers and wasn’t a sign of agency or intent, but rather a consequence of human design. Claude was not auditioning to become a malevolent movie star. 

    Why A.I. fear persists

    As A.I. continues to seize the public’s consciousness, it’s easy to fall prey to scary headlines and over-simplified explanations of A.I. technologies and their capabilities. Humans are hardwired to fear the unknown, and A.I.—complex, opaque and fast-evolving—taps that instinct. But these fears can distort public understanding. It’s essential that everyone involved in A.I. development and usage communicate clearly about what A.I. can actually do, how it does it and its potential capabilities in future iterations. 

    A key to achieving a comfort level around A.I. is to gain the ironic understanding that A.I. can indeed be very dangerous. Throughout history, humanity has built tools it couldn’t fully control, from the vast machinery of the Industrial Revolution to the atomic bomb. Ethical boundaries for A.I. must be established collaboratively and globally. Preventing A.I. from facilitating warfare—whether in weapons design, optimizing drone-attack plans or breaching national security systems—should be the top priority of every leader and NGO worldwide. We need to ensure that A.I. is not weaponized for warfare, surveillance or any form of harm. 

    Programming responsibility, not paranoia

    Looking back at Anthropic’s experiment, let’s dissect what really happened. Claude—and it is just computer code at heart, not living DNA—was working within a probability cloud that led it, step by step, to pick the most probable next word in a sentence. It works one word at a time, but at a speed that easily surpasses human ability. Claude’s programmers chose to see if their creation would, in turn, choose a negative option. Its response was shaped more by programming, flawed design and how the scenario was coded than by any machine malice.

    Claude, as with ChatGPT and other current A.I. platforms, has access to vast stores of data. The platforms are trained to access specific information related to queries, then predict the most likely responses to produce fluent text. They don’t “decide” in any meaningful, human sense. They don’t have intentions, emotions or even the self-preservation instincts of a single-celled organism, let alone the wherewithal to hatch master plans to extort someone. 

    This will remain true even as the growing capabilities of A.I. allow developers to make these systems appear more intelligent, human-like and friendly. It becomes even more important for developers, programmers, policymakers and communicators to demystify A.I.’s behavior and reject unethical results. Clarity is key, both to prevent misuse and to ground perception in fact, not fear. 

    Every transformative technology is dual-use. A hammer can pound a nail or hurt a person. Nuclear energy can provide power to millions of people or threaten to annihilate them. A.I. can make traffic run smoother, speed up customer service, conduct whiz-bang research at lightning speed, or be used to amplify disinformation, deepen inequality and destabilize security. The task isn’t to wonder whether A.I. might fight back, but to ensure humanity doesn’t teach it to. The choice is ours as to whether we corral it, regulate it and keep it focused on the common good.

    Mehdi Paryavi is the Chairman and CEO of the International Data Center Authority (IDCA), the world’s leading Digital Economy think tank and prime consortium of policymakers, investors and developers in A.I., data centers and cloud computing.

    Mehdi Paryavi

  • Machine Intuition: Can A.I. Out-Innovate Human Strategy?

    When algorithms start to imagine, human decision-making enters uncharted territory. Unsplash+

    In boardrooms, creativity is often conflated with charisma—a founder’s flash of insight, a strategist’s “feel” for the market. The rise of creative A.I. complicates that mythology. Systems that once mimicked patterns are beginning to originate them, not by feeling their way through ambiguity, but by searching vast spaces of possibilities with tireless composure. The question for leadership is no longer whether A.I. can imitate the past. It is whether machines can meaningfully extend the frontier of invention—and how executives should organize decision-making when they do.

    From imitation to invention

    The cleanest evidence that A.I. is stepping past imitation arrives where truth is checkable: mathematics, molecular science and materials discovery.

    In 2022, DeepMind’s AlphaTensor not only learned to multiply matrices faster but also discovered new, provably correct algorithms that improved upon long-standing human results across various matrix sizes. That is not style transfer; it is algorithmic invention in a domain where proof, not opinion, decides progress.
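
    For a sense of what such a result looks like, Strassen’s classic 1969 scheme, one of the long-standing human benchmarks in this area, multiplies two 2x2 matrices with seven scalar multiplications instead of the naive eight. A minimal sketch, verified against NumPy:

    ```python
    import numpy as np

    def strassen_2x2(A: np.ndarray, B: np.ndarray) -> np.ndarray:
        """Multiply two 2x2 matrices with 7 scalar multiplications (naive: 8)."""
        a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
        e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
        p1 = (a + d) * (e + h)
        p2 = (c + d) * e
        p3 = a * (f - h)
        p4 = d * (g - e)
        p5 = (a + b) * h
        p6 = (c - a) * (e + f)
        p7 = (b - d) * (g + h)
        return np.array([[p1 + p4 - p5 + p7, p3 + p5],
                         [p2 + p4,           p1 - p2 + p3 + p6]])

    A, B = np.random.rand(2, 2), np.random.rand(2, 2)
    assert np.allclose(strassen_2x2(A, B), A @ B)  # provably correct, fewer multiplies
    ```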

    In late 2023, an A.I. system known as GNoME proposed 2.2 million crystal structures and identified roughly 381,000 as stable, nearly an order-of-magnitude expansion of the known “materials possibility space.” Labs have already begun synthesizing candidates for batteries and semiconductors, creating a faster loop between computational hypothesis and physical validation.

    In 2024, AlphaFold 3 advanced from single-protein structure prediction to modeling interactions among proteins, nucleic acids and small molecules. This capability matters for drug design because binding, not just shape, drives efficacy. The model’s accuracy on complex assemblies has energized pharmaceutical R&D, though access limits have drawn pushback from academics who want open tools.

    Progress is also visible in symbolic reasoning. DeepMind reported systems that solve Olympiad-level problems at a level comparable to an International Mathematical Olympiad silver medalist. At the same time, the research community continues to explore machine-generated conjectures, including the “Ramanujan Machine” work on fundamental constants.

    None of this makes A.I. creative in the human sense. It does, however, expand the adjacent possible, surfacing options that were invisible or unaffordable to explore manually. When machines push frontiers in domains with crisp feedback—proofs or measured properties—boards should treat them not as autocomplete engines, but as option-generation machines for strategy.

    A more recent wave of “reasoning models” underscores the shift. OpenAI’s “o” line prioritizes deliberate chains of thought and planning over fast pattern matching, improving performance on mathematics and coding tasks. Whatever the brand names, the direction of travel is clear: more search, more planning, more verifiable problem-solving—and less reliance on past style to predict the future.

    What machines still cannot feel

    Creativity at the level that moves markets also rests on three human anchors:

    • Intuition: tacit pattern recognition shaped by lived experience and domain immersion.
    • Emotion: the energy to pick a fight with the status quo, to persist when the spreadsheet says “no.”
    • Cultural context: sensitivity to norms, taste and symbolism that gives an idea social traction.

    A.I. can simulate tone and recall cultural references. Still, it has no stake in the outcome and no phenomenology—no gut to trust, no fear to overcome, no values to defend. That absence is evident in strategy, where the “right” move hinges on timing, narrative and coalition-building as much as on optimization.

    The practical stance, therefore, is not man versus machine, but machine-extended human judgment. Executives should treat creative A.I. as a means to broaden the search over hypotheses and prototypes, then apply human judgment, ethics and narrative sense to decide which bets to place and how to mobilize organizations around them.

    How leaders should exploit machine invention—without outsourcing judgment

    1) Run invention portfolios, not tool pilots.
    The AlphaTensor and GNoME results serve as reminders that A.I.’s edge lies in search. Build portfolios where models explore thousands of algorithmic or design candidates in parallel, with clear funnels for lab validation or market testing. Resist vanity pilots; instrument programs like a venture portfolio with kill criteria, milestone economics and fast capital recycling.

    2) Separate generation from selection.
    Let models overgenerate options; reserve selection for cross-functional councils that combine domain experts with brand, legal and policy voices. In drug discovery, for example, computational signals are necessary, but go-to-market narratives, regulatory risk and patient trust still decide value. AlphaFold 3’s critics highlight that access and transparency are strategic variables, not just technical ones.

    3) Put proof and measurement at the core.
    Favor use cases with verifiable feedback, such as proofs, A/B tests and measurable properties, before pushing into messier cultural domains. The faster the loop from hypothesis to truth signal, the more compounding advantage you build. That is why material and algorithm discovery have progressed rapidly, while brand-level creativity remains a human-led endeavor.

    4) Couple A.I. with automated execution.
    The materials ecosystem illustrates the compounding effect when A.I. designs are paired with automated synthesis and testing. The playbook for enterprises is similar: link generative systems to simulation, robotic process automation or programmatic experimentation to prevent ideas from dying in slide decks.

    5) Govern for explainability where it matters—and for outcomes where it doesn’t.
    Demand explanations in regulated or safety-critical contexts. Elsewhere, prioritize outcomes with robust testing and guardrails. AlphaTensor’s value lies in proofs; a marketing concept’s value lies in performance lift, not in the model’s narrative about why it works.

    6) Incentivize “taste” as a strategic moat.
    As models make it cheap to generate competent options, advantage shifts to taste—the human ability to recognize what resonates in a culture. Recruit and reward this scarce judgment. Machines can propose; only leaders can pick the hill to die on.

    What this means for decision-making

    The companies that convert creative A.I. into a durable advantage will do three things differently.

    • Treat search as a first-class strategic function. Leaders will invest in compute, data and optimization talent the way prior generations invested in distribution—because the ability to search better than competitors becomes a compounding differentiator in R&D, pricing, logistics and design.
    • Reframe “intuition” as a disciplined interface. Human intuition does not retire; it selects, sequences and stories the outputs of machine search. That interface needs structure: pre-registered criteria, red-team rituals, ethical review and explicit narrative strategy.
    • Professionalize uncertainty. Creative A.I. expands the option set and the error surface. Governance must evolve from model-centric compliance to portfolio-centric risk control, with exposure limits, scenario triggers and graceful rollback plans. The lesson from AlphaFold 3’s access debate is that licensing, openness and ecosystem design are themselves strategic levers, not afterthoughts.

    The bottom line is not that machines have acquired emotions or culture. They have acquired something strategically scarce: the capacity to search, prove and propose at a superhuman scale in domains where truth can come back to haunt them. That capability does not substitute for human attributes; it amplifies them. The winning organizations will be those that marry machine-scale exploration with human-grade selection, treating A.I. neither as a muse nor as a mask, but as the most relentless research partner strategy has ever had.

    Gonçalo Perdigão

  • The $300 Billion A.I. Infrastructure Crisis Hiding in Plain Sight

    The A.I. boom depends on an infrastructure foundation that’s cracking under pressure. Observer Labs

    The race to scale artificial intelligence has triggered historic investment in GPU infrastructure. Hyperscalers are expected to spend over $300 billion on A.I. hardware in 2025 alone, while enterprises across industries are building their own GPU clusters to keep pace. This may be the largest corporate resource reallocation in modern history, yet beneath the headlines of record spending lies a quieter story. According to the 2024 State of AI Infrastructure at Scale report, most of this hardware goes underused, with more than 75 percent of organizations running their GPUs below 70 percent utilization, even at peak times. Wasted compute has become the silent tax on A.I. This inefficiency inflates costs and slows innovation, creating a competitive disadvantage for companies that should be leading their markets.

    The root cause traces to industrial-age thinking applied to information-age challenges. Traditional schedulers assign GPUs to jobs and keep them locked until completion—even when workloads shift to CPU-heavy phases. In practice, GPUs sit idle for long stretches while costs continue to mount. Studies suggest typical A.I. workflows spend between 30 and 50 percent of their runtime in CPU-only stages, meaning expensive GPUs contribute nothing during those periods.

    Consider the economics: a single NVIDIA H100 GPU costs upward of $40,000. When static allocation leaves these resources idle even 25 percent of the time, organizations are essentially forgoing $10,000 worth of value per GPU annually on unused capacity, assuming the hardware cost is amortized over a year. Scale that across enterprise A.I. deployments, and the waste reaches eight figures all too quickly.
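
    The back-of-envelope arithmetic, with the amortization assumption made explicit (the one-year life and the 1,000-GPU fleet are illustrative assumptions):

    ```python
    # Illustrative: value stranded by idle GPU time, assuming hardware cost is
    # amortized over a fixed service life (one year reproduces the figure above).
    def stranded_value(gpu_cost: float, idle_fraction: float, years: float) -> float:
        return gpu_cost * idle_fraction / years

    print(stranded_value(40_000, 0.25, 1))          # $10,000 per GPU per year
    print(1_000 * stranded_value(40_000, 0.25, 1))  # $10M/yr across a 1,000-GPU fleet
    ```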

    GPU underutilization creates cascading problems beyond pure cost inefficiency. When expensive infrastructure sits idle, research teams can’t experiment with new models, product teams struggle to iterate quickly on A.I. features, and competitive advantages slip away to more efficient rivals. Organizations then overbuy GPUs to cover peak loads, creating an arms race in hardware acquisition while existing resources remain underused. The result is artificial scarcity that drains budgets and slows progress. 

    The stakes extend beyond budgets to global sustainability concerns, as the environmental cost is also mounting. A.I. infrastructure is projected to double its energy consumption from 2024 levels, reaching 3 percent of global electricity by 2030. Companies that fail to maximize GPU efficiency will face rising bills as well as increased regulatory scrutiny and stakeholder demands for measurable efficiency improvements.

    A new class of orchestration tools known as A.I. computing brokers offers a way forward. These systems monitor workloads in real time, dynamically reallocating GPU resources to match active demand. Instead of sitting idle, GPUs are reassigned during CPU-heavy phases to other jobs in the queue.
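
    A toy sketch of that release-and-reassign pattern (not any production broker, just the scheduling idea it describes, with hypothetical jobs and phases):

    ```python
    import collections

    # Tick-based toy: jobs alternate between GPU and CPU phases, and a GPU is
    # held only while a job is actually in a GPU phase.
    jobs = {
        "train_a": collections.deque(["gpu", "cpu", "gpu"]),
        "etl_b":   collections.deque(["cpu", "gpu"]),
        "infer_c": collections.deque(["gpu"]),
    }
    free_gpus = 1   # a single shared GPU
    holding = set() # jobs currently holding the GPU

    while jobs:
        for name in list(jobs):
            phase = jobs[name][0]
            if phase == "gpu" and name not in holding:
                if not free_gpus:
                    continue  # no GPU free: wait in the queue this tick
                free_gpus -= 1
                holding.add(name)
            print(f"{name}: {phase} tick")
            jobs[name].popleft()
            # Release the GPU as soon as the next phase is CPU-only (or the job ends).
            if name in holding and (not jobs[name] or jobs[name][0] != "gpu"):
                free_gpus += 1
                holding.discard(name)
            if not jobs[name]:
                del jobs[name]
    ```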

    Early deployments demonstrate the transformative potential of this approach. In one, Fujitsu’s AI Computing Broker (ACB) increased throughput in protein-folding simulations to roughly 270 percent of baseline, allowing researchers to process nearly three times as many sequences on the same hardware. In another, enterprises running multiple large language models on shared infrastructure used ACB to consolidate workloads, enabling smooth inference across models while cutting infrastructure costs.

    These gains don’t require new hardware purchases or extensive code rewrites, just smarter orchestration that turns existing infrastructure into a force multiplier. Brokers integrate into existing A.I. pipelines and redistribute resources in the background, making GPUs more productive with minimal friction.

    Efficiency delivers more than cost savings. Teams that can run more experiments on the same infrastructure iterate faster, reach insights sooner and release products ahead of rivals stuck in static allocation models. Early adopters report efficiency gains between 150 percent and 300 percent, improvements that compound over time as experimentation velocity accelerates. That means organizations that once viewed GPU efficiency as a technical nice-to-have now face regulatory requirements, capital market pressures and competitive dynamics that make optimization mandatory rather than optional. 

    What began as operational optimization for tech-forward companies is rapidly becoming a strategic imperative across industries, with several specific trends driving this acceleration:

    • Regulatory pressure. European Union A.I. regulations increasingly require efficiency reporting, making GPU utilization a compliance consideration rather than just operational optimization.
    • Capital constraints. Rising interest rates make inefficient capital allocation more expensive, pushing CFOs to scrutinize infrastructure returns more closely.
    • Talent competition. Top A.I. researchers prefer organizations offering maximum compute access for experimentation, making efficient resource allocation a recruiting advantage.
    • Environmental mandates. Corporate sustainability commitments require measurable efficiency improvements, making GPU optimization strategically necessary rather than tactically useful.

    History shows that once efficiency tools become standard, the early adopters capture the outsized benefits. In other words: the opportunity window for competitive advantage through infrastructure efficiency remains open, but it won’t stay that way indefinitely. Companies that embrace smarter orchestration today will build faster, leaner and more competitive A.I. programs, while others remain trapped in outdated models. Static thinking produces static results, whereas dynamic thinking unlocks dynamic advantage. Just as cloud computing displaced traditional data centers, the A.I. infrastructure race will be won by organizations that approach GPUs not as fixed assets but as dynamic resources to be optimized continuously.

    The $300 billion question isn’t how much organizations are investing in A.I. infrastructure. It’s how much value they’re actually extracting from what they’ve already built, and whether they’re moving fast enough to optimize before their competitors do.

    Indradeep Ghosh

  • The New Patronage: A.I., Algorithms and the Economics of Creativity

    Generative A.I. is cheapening media production while platforms recode payouts, power and provenance. Unsplash+

    The cost of making high-quality media is collapsing. The cost of getting anyone to care about it is not. As generative A.I. turns production into a near-commodity, cultural power is shifting from studios and galleries to the platforms that allocate attention and the algorithms that determine who gets paid. The new patrons are not moguls with checkbooks; they are recommendation systems tuned for engagement and brand safety.

    Production is cheap; distribution is scarce

    Video models now draft storyboards, generate shots and remix audio at consumer scale. Yet the money still follows distribution, not tools. On YouTube, the rules of the YouTube Partner Program, set and revised unilaterally, determine whether a creator receives 55 percent of watch-page ad revenue for long-form content and 45 percent for Shorts. Those headline rates are stable, but the platform’s enforcement posture has shifted: as of July 15, YouTube began tightening monetization against “inauthentic” or mass-produced A.I. content, a clarification aimed at the surge of spammy, low-effort videos. The message is clear: use A.I. to enhance originality, not to flood the feed. 

    The enforcement problem is real. “Cheapfake” celebrity clips—static images, synthetic narration and rage-bait scripts—have racked up views while confusing audiences. YouTube has removed channels and now requires disclosure labels for realistic synthetic media, but detection and policing remain uneven at scale. 

    Platforms are recoding payouts and power

    Spotify’s 2024 royalty overhaul illustrates how platform rule-sets become policy for the creative middle class. Tracks now require at least 1,000 streams in 12 months to pay out; functional “noise” content is throttled; and labels face fees for detected artificial streaming. The goal is to redirect the pool away from bot farms and sub-cent trickles. The effect is a re-concentration of earnings at the head of the curve and a higher bar for the long tail. When platforms change the taps, whole genres feel the drought or the deluge. 
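
    The threshold logic itself is easy to express, which is why it so cleanly reshapes the long tail. A simplified sketch of the rule as publicly described (catalog figures and pool size are hypothetical):

    ```python
    # Simplified model of a stream-count payout threshold: tracks below the
    # minimum earn nothing, and the pool is shared among tracks above it.
    MIN_STREAMS = 1_000  # per 12 months, per the policy described above

    catalog = {"hit_single": 2_500_000, "mid_tail_track": 40_000, "bedroom_demo": 600}
    pool = 10_000.0  # hypothetical royalty pool in dollars

    eligible = {t: s for t, s in catalog.items() if s >= MIN_STREAMS}
    eligible_streams = sum(eligible.values())

    for track, streams in catalog.items():
        payout = pool * streams / eligible_streams if track in eligible else 0.0
        print(f"{track}: ${payout:,.2f}")
    ```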

    TikTok’s détente with Universal Music in May 2024 underscored the same power dynamic in short-form video. After months of public sparring over royalties and A.I. clones, a new licensing deal restored UMG’s catalogue to the app, alongside language about improved remuneration and protections against generative knock-offs. When distribution is the choke point, even the largest rights-holders must negotiate on platform terms.

    Data deals: the new studio lots

    If attention is one axis of the new patronage, training data is the other. The most lucrative cultural contracts of the past year were not output commissions but input licences. OpenAI’s run of publisher agreements, including the Associated Press (archives), Axel Springer, the Financial Times and a multi-year global deal with News Corp, reportedly worth more than $250 million, signals a market price for premium corpora. A.I. labs are paying for access, and the beneficiaries are large, well-structured repositories of rights, not individual creators. 

    The legal battles surrounding image training demonstrate the unsettled state of the rules. Getty Images narrowed its U.K. lawsuit against Stability A.I. in June, dropping core copyright claims while pressing trademark-style arguments about reproduced watermarks. The pivot reflects the complexity of proving training-stage infringement across borders, as well as the industry’s search for more predictable routes to compensation.

    Regulation is standardizing transparency and shifting risk

    Rules are arriving, and they read like operating manuals for platformized culture. The E.U.’s A.I. Act phases in obligations for general-purpose models, with guidance for “systemic-risk” providers by 2025 and a Code of Practice outlining requirements for transparency, copyright diligence and safety. In effect, documenting training, assessing model risks, publishing technical summaries and preparing for audits are all tasks that privilege firms and partners with a strong compliance presence.

    In the U.S., the Copyright Office’s multipart A.I. study is moving from theory to guidance. Part 2 (January 2025) addresses whether and when A.I.-assisted outputs can be copyrighted, while the pre-publication of Part 3 (May 2025) examines training and how to reconcile text-and-data mining with compensation. The studio system, once established, created creative norms through collective bargaining; now, regulators and A.I. vendors are co-authoring the manual.

    Unions are also imposing guardrails. The WGA’s 2023 deal barred studios from treating A.I.-generated material as “source material” and protected writers from being required to use A.I.; SAG-AFTRA’s agreements introduced consent and compensation for digital replicas, with similar provisions in music. These are not abstractions; they are hard-coded constraints on how platforms and producers can deploy synthetic labour.

    Provenance becomes product

    As synthetic media scales, provenance is turning into both a feature and a bargaining chip. TikTok has begun automatically labelling A.I. assets imported from tools that support C2PA Content Credentials. YouTube now requires creators to disclose realistic synthetic edits. Meanwhile, device makers are integrating C2PA into the capture pipeline, with Google’s Pixel 10 embedding credentials in its camera output. OpenAI, for its part, adds C2PA metadata to DALL-E images. Attribution is becoming clickable. 

    The provenance layer will not solve misinformation alone. Metadata can be stripped, and enforcement lags, but it rewires incentives. Platforms can boost authentic, labelled media in feeds, penalize evasions and share “credibility signals” with advertisers. That is algorithmic patronage by another name.

    What shifts next

    Studios and galleries will increasingly resemble platforms. Owning release windows is no longer enough. Expect investments in first-party audiences, data clean rooms and rights bundles that can be licensed to model providers. Historic advantages, such as taste and talent pipelines, must be coupled with distribution levers and data assets. Deals will include not just streaming residuals but “model-weight” royalties and retraining rights, mirroring the structure of today’s publisher licences.

    Creators will face algorithmic wage setting. Eligibility thresholds (1,000 Spotify streams), demonetization triggers (unoriginal Shorts), disclosure requirements (synthetic media labels) and fraud detection fees are becoming the effective tax code of digital culture. The prudent strategy is to diversify revenue streams (ads, direct fan funding and commerce) and to instrument provenance by default, staying on the right side of both algorithms and regulators.

    Policy, too, will reward those who can comply. The E.U. framework, the U.S. copyright study and union clauses collectively nudge the market toward licensed inputs, documented outputs and consent-based replication. The advantaged include large catalogue owners and well-capitalized intermediaries. For independent creators, collective licensing pools and guild-run registries may offer the path to negotiating power.

    The arts have seen patronage shift before, from courts to salons to art galleries and museums. This time, the median patron is a ranking function. Where culture is made matters less than where it is surfaced, metered and paid. Those who understand the incentives embedded in platform policy, and can prove provenance at the speed of the feed, will capture the surplus. Everyone else will be producing to spec for someone else’s algorithm.

    The New Patronage: A.I., Algorithms and the Economics of Creativity

    [ad_2]

    Gonçalo Perdigão

    Source link

  • Decentralized Innovation: How India, UAE and Saudi Arabia Are Shaping Tech’s Future

    [ad_1]

    Technology’s new origin stories are emerging from hubs in the UAE, Saudi Arabia, India and Africa. Unsplash+

    Since the 1960s, the story of technology has followed a familiar pattern. Innovation emerged in Silicon Valley garages, Boston laboratories or European cafés and gradually spread worldwide. Today, that pattern is changing. The future of tech is being equally developed in Abu Dhabi, Riyadh, Bengaluru and Jakarta. Innovation is decentralizing, and not only in terms of infrastructure and investment but also through culture, religion and sovereignty. This new center of gravity is changing whose values will define the tools that the world will use tomorrow.

    The Gulf’s ambitious tech push

    The United Arab Emirates has quickly become one of the most assertive new players. In May, during President Trump’s visit, Abu Dhabi announced Stargate UAE, a 10-square-mile A.I. campus spearheaded by G42. Once fully operational, it will be one of the largest A.I.-centered campuses in the world, with a planned five-gigawatt capacity and an initial 200-megawatt phase set for 2026. 

    Stargate will accommodate hundreds of thousands of advanced chips and is strategically located within two thousand miles of nearly half the global population. Framed as a U.S.-UAE partnership, the agreement eases previous export restrictions and charts a path for safe deployment. Cisco, SoftBank and American chipmakers have pledged support, signaling the UAE’s ambition to be not just a technology consumer but also a global authority in the A.I. ecosystem. The point was made plainly: Abu Dhabi is positioning itself as both a setter and a consumer of standards.

    The UAE push extends beyond hardware. It has invested billions in A.I.-driven government services designed to make public administration more predictive and efficient, including systems that assist civil servants in rapidly revising regulations. Language is also central to this strategy. The open model, Falcon Arabic, adapted to the nuances of the Arabic language, is a technological and cultural declaration. In the UAE, innovation is no longer about catching up. It’s about authorship, rooted in identity and scaled through global collaboration.

    Saudi Arabia is making its own similarly bold statement. The Public Investment Fund (PIF) launched HUMAIN this year, a sovereign A.I. company developing an entire stack of data centers, cloud infrastructure, language models and consumer applications. Already, the locally produced Allam-based Humain Chat serves millions of Arabic- and English-speaking users, with customized guardrails to reflect local values. More than a chatbot, this is an assertion of cultural and linguistic sovereignty. 

    The Kingdom supports this vision through funding and equipment. At LEAP 2025, Groq, an American chipmaker specializing in ultra-fast inference, announced a $1.5 billion expansion in Saudi Arabia, backed by the PIF. The initial large-scale HUMAIN data centers in Riyadh and Dammam, each with 100-megawatt capacity, are slated to launch in 2026. Alongside nearly $15 billion in additional A.I. investments announced concurrently, these steps indicate that Saudi Arabia’s goal is to become a compute powerhouse rather than a passive participant. Once talent can leverage local infrastructure in their own language, the innovation pipeline can begin at home.

    India’s integration of tech with culture 

    India presents a complementary, yet distinct, vision. Digital products have transformed everyday life across the country. The Unified Payments Interface (UPI) currently processes over 20 billion transactions monthly, enabling small ideas to scale rapidly in a nation of 1.4 billion people. During the 2025 Mahakumbh pilgrimage, A.I. tools helped manage pilgrim flows numbering in the millions, with multilingual assistants helping visitors navigate complex rituals. These examples illustrate how India integrates technology with cultural and religious life, making it feel less like an import and more like a facilitator of tradition. The IndiaAI Mission, a $1.2 billion initiative supporting shared compute and multilingual models, reduces barriers for startups and researchers nationwide. The resulting ecosystem combines scale, meaning and diversity, illustrating how technology can be adapted to local contexts while still fostering innovation. 

    Africa and the broader Global South

    Decentralization extends beyond South Asia and the Gulf. Kenya’s Konza Technopolis in Nairobi is emerging as an intelligent city supporting startups, academia and research. Yet some of the region’s most radical innovations are rural: A.I. tools assist farmers in forecasting weather and crop yields amid volatile climatic conditions.

    In Nigeria, hubs in Lagos and Ilorin support startups designing voice systems attuned to African accents. These systems help deliver healthcare services or financial tools to farmers in local dialects. While these initiatives may appear modest in comparison to a five-gigawatt A.I. campus, they share a common DNA: locally relevant innovation aimed at solving real-world problems. 

    Across these regions, there is a common thread. Decentralization is not just the geographic spread of technology. It is the reshaping of technology itself. The Hajj in Makkah provides key lessons in crowd management, which have applications in emergency systems across the globe. India’s street market payment rails have become benchmarks for emerging economies. African voice tools expand inclusivity. Influence spreads because these innovations are practical and culturally attuned. 

    Challenges and the road ahead

    Hurdles remain. Infrastructure must be built, maintained and operated effectively. Laws must protect privacy and rights without choking development. Talent pipelines require years to mature. Yet the trajectory is evident: projects like Stargate and HUMAIN are not isolated experiments. They’re declarations that new centers of gravity in tech have arrived. India, Kenya and Nigeria show that cultural context—faith, language, community—is not an inhibitor of innovation, but a guide. 

    The decentralization of innovation signals a paradigm shift. Global technology will no longer emerge solely from historic powerhouses. Instead, it will reflect diverse cultural and social priorities, embedding meaning and relevance into the very tools that shape our future. 

    Yousef Khalili is the Global Chief Transformation Officer and CEO MEA at Quant, which develops cutting-edge digital employee technology.

    Decentralized Innovation: How India, UAE and Saudi Arabia Are Shaping Tech’s Future

    [ad_2]

    Yousef Khalili

    Source link

  • A.I. Won’t Replace Workers. But Only If We Act Now.

    [ad_1]

    Without rapid investment in reskilling, millions could be left behind. Unsplash+

    Too much of today’s conversation about A.I. is stuck in the wrong frame. While pundits debate whether robots will steal jobs, the real question is much simpler: Can we prepare workers fast enough, or are we about to watch millions of people get economically steamrolled? Organizations like FlashPass are already experimenting with large-scale reskilling initiatives, training thousands of workers for roles that don’t yet exist. It may sound extreme, but this kind of preparation is the only rational response to the coming changes. Unless we get ruthlessly serious about preparing our workforce for this transition, the result will be mass underemployment, wasted human capital and an economy that stalls just when it should be accelerating. 

    The skills shift already underway

    Here’s the reality check: by 2030, nearly 40 percent of the very skills considered “core” to today’s jobs will be dead weight. The World Economic Forum projects that within the same timeframe, more than half of the global workforce will require significant reskilling or upskilling as technology, demographic shifts and evolving industries make traditional career paths unsustainable.

    This isn’t some distant future scenario—it’s happening now.

    In sectors from healthcare to manufacturing, employers are struggling to find workers who can adapt to A.I.-enabled workflows and automated systems. The gap is widening not because jobs vanish overnight, but because the skills that anchor them evolve faster than traditional training models can keep up.

    Translation: the old model of front-loading education early in life and coasting on it for decades is dead. Continuous learning is no longer an advantage but the baseline requirement for employability. Either we will commit to preemptive workforce development, or we will watch people get crushed by technological change.

    Why early intervention wins

    The good news? We know what works. Regions and companies that invest in proactive reskilling and upskilling see smoother transitions, lower unemployment spikes, and stronger productivity gains when automation arrives.

    Singapore’s SkillsFuture program is a perfect case study. Participation jumped from 520,000 to 555,000 in just one year, with 54 percent of participants in career transition courses landing new jobs within six months. Instead of waiting for their pink slip, employees in Singapore are being trained for what’s next before disruption hits.

    Contrast that with economies where retraining only starts after layoffs. By then, it’s too late. Workers spend months unemployed, companies hemorrhage institutional knowledge and everyone develops an irrational fear of technology. The lesson is simple: invest early in workforce development, or pay later in economic chaos.

    The systemic barriers

    If proactive workforce development is so effective, why aren’t we scaling it everywhere? Three systemic barriers are blocking progress:

    • Fragmented funding. Money currently flows through too many disconnected programs. Workers end up in bureaucratic mazes, and employers can’t navigate the noise. Reimbursement programs, which give individuals direct choice among accredited programs, could streamline access and cut through the red tape.
    • Outdated skills measurement. Training programs are built on labor market data that’s often 12 to 18 months behind, so that by the time workers are retrained, the jobs they’re aiming for may have already shifted. What’s needed is real-time labor intelligence that links employer demand to training curricula. We have the technology. We’re just not using it systematically.
    • Low adoption. Workers hesitate to leave stable roles for uncertain retraining outcomes, especially when “benefits cliffs” penalize upward mobility. Employer-funded transition programs can help. AT&T’s 2018 $1 billion reskilling investment for 140,000 employees shows what’s possible when employers directly fund transitions instead of cutting staff. 

    Government’s role in the transition

    Governments cannot solve this alone, but they play an essential role in setting standards, funding transitions and forcing cross-sector coordination. The E.U. and parts of Asia have already launched A.I. workforce action plans that mix infrastructure investment with social protections. Germany’s dual-education system provides a blueprint: companies and schools collaborate on apprenticeship programs that meet immediate labor needs while preparing workers for future shifts. That should be standard practice, not the exception. Public-private partnerships are where the leverage lies. 

    The 24-month window

    Here’s the urgent part: we don’t have unlimited time. A.I. has the potential to boost global economic output by up to 15 percentage points over the next decade. If we reinvest even a fraction of that into reskilling, transition support and training ecosystems, an apparent threat could become the single biggest workforce opportunity in history.

    The alternative is a world where the tech curve keeps accelerating while millions of people are left behind. Organizations like FlashPass are building programs to retrain the American workforce for roles in A.I. integration, digital customer experience and automated operations management before these become critical skill gaps. 

    The narrative that A.I. will replace workers is wholly misleading. What it will do is destroy opportunities for those without access to the right skills at the right time. That outcome is not a technological inevitability. It’s a leadership choice.

    The message to every business leader, government official and education administrator is clear: opportunity means nothing if people can’t access it. The window for action is closing fast. Stop waiting for the “future of work” to arrive. It’s here. Start training for it now, or watch your workforce become obsolete.

    A.I. Won’t Replace Workers. But Only If We Act Now.

    [ad_2]

    Emil Barr

    Source link

  • Shipping at the Speed of Prompt: What Vibe Coding Changes and Breaks

    [ad_1]

    Developers are shifting from writing every line to guiding A.I., and facing fresh challenges in review and oversight. Unsplash+

    An emerging trend known as “vibe coding” is changing the way software gets built. Rather than painstakingly writing every line of code themselves, developers now guide an A.I. assistant, like Copilot or ChatGPT, with plain instructions, and the A.I. generates the framework. The barrier to entry drops dramatically: someone with only a rough idea and minimal technical background can spin up a working prototype. 
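
    In practice, the workflow can be as small as a single API call. The sketch below uses OpenAI’s Python client to turn a plain-English instruction into code; the model name and prompt are illustrative choices, not recommendations:

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = "Write a Python function that deduplicates a list while preserving order."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a coding assistant. Return only code."},
            {"role": "user", "content": prompt},
        ],
    )
    print(response.choices[0].message.content)  # the generated "framework"
    ```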

    The capital markets have taken notice. In the past year, several A.I. tooling startups raised nine-figure rounds and hit billion-dollar valuations. Swedish startup Lovable secured $200 million in funding in July—just eight months after its launch—pushing its value close to $2 billion. Cursor’s maker, Anysphere, is approaching a $10 billion valuation. Analysts project that by 2031, the A.I. programming market could be worth $24 billion. Given the speed of adoption, it might get there even sooner.  

    The pitch is simple: if prompts can replace boilerplate, then making software becomes cheaper, faster and more accessible. Whether the market ultimately reaches tens of billions matters less than the fact that teams are already changing how they work. For many, this is a breakthrough moment, with software writing becoming as straightforward and routine as sending a text message. The most compelling promise is democratization: anyone with an idea, regardless of technical expertise, can bring it to life.   

    Where the wheels come off

    Vibe coding sounds great, but for all its promise, it also carries risks that could, if not managed, slow future innovation. Consider safety. In 2024, A.I. generated more than 256 billion lines of code. This year, that number is likely to double. Such velocity makes thorough code review difficult. Snippets that slip through without careful oversight can contain serious vulnerabilities, from outdated encryption defaults to overly permissive CORS rules. In industries like healthcare or finance, where data is highly sensitive, the consequences could be profound. 
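
    The CORS example is easy to make concrete. Below is a minimal Flask sketch contrasting the permissive wildcard that often slips through review with an explicit allowlist; the origin shown is a hypothetical placeholder:

    ```python
    from flask import Flask, request

    app = Flask(__name__)

    ALLOWED_ORIGINS = {"https://app.example.com"}  # hypothetical trusted origin

    @app.after_request
    def set_cors_headers(resp):
        # Risky pattern often found in generated code:
        #   resp.headers["Access-Control-Allow-Origin"] = "*"
        # Safer: echo the origin only when it is on an explicit allowlist.
        origin = request.headers.get("Origin", "")
        if origin in ALLOWED_ORIGINS:
            resp.headers["Access-Control-Allow-Origin"] = origin
            resp.headers["Vary"] = "Origin"
        return resp
    ```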

    Scalability is another challenge. A.I. can make working prototypes, but scaling them for real-world use is another story entirely. Without careful design choices around state management, retries, back pressure or monitoring, these systems can become brittle and difficult to maintain. These are architectural decisions that autocomplete models cannot make on their own. 
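
    One of those guardrails, retrying a flaky downstream call with capped backoff, fits in a dozen lines. A minimal sketch in plain Python, with no particular framework assumed:

    ```python
    import random
    import time

    def call_with_backoff(fn, max_attempts=5, base_delay=0.5, max_delay=30.0):
        """Retry a flaky call with capped exponential backoff plus jitter."""
        for attempt in range(1, max_attempts + 1):
            try:
                return fn()
            except Exception:
                if attempt == max_attempts:
                    raise  # give up after the final attempt
                delay = min(max_delay, base_delay * 2 ** (attempt - 1))
                time.sleep(delay * random.uniform(0.5, 1.5))  # jitter avoids thundering herds
    ```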

    And then there is the issue of hallucination. Anyone who has used A.I. coding tools has come across nonexistent libraries being cited or configuration flags inconsistently renamed within the same file. While minor errors in small projects may not be significant, these lapses can erode continuity and undermine trust when scaled across larger, mission-critical systems. 
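
    Hallucinated dependencies are at least cheap to screen for. The sketch below, a simple static check using only Python’s standard library, flags top-level imports in generated code that do not resolve in the current environment:

    ```python
    import ast
    import importlib.util

    def unresolvable_imports(source: str) -> list[str]:
        """Flag top-level imports that cannot be found locally: a cheap
        first screen for hallucinated dependencies in generated code."""
        missing = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Import):
                names = [alias.name.split(".")[0] for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                names = [node.module.split(".")[0]]
            else:
                continue
            missing += [n for n in names if importlib.util.find_spec(n) is None]
        return missing

    print(unresolvable_imports("import totally_made_up_lib\nimport json"))
    # -> ['totally_made_up_lib']
    ```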

    The productivity trade-off

    None of these concerns should be mistaken for a rejection of vibe coding. There is no denying that A.I.-powered tools can meaningfully boost productivity. But they also change what the programmer’s role entails: from line-by-line authoring to guiding, shaping and reviewing what A.I. produces to ensure it can function in the real world. 

    The future of software development is unlikely to be framed as a binary choice between humans and machines. The most resilient organizations will combine rapid prototyping through A.I. with deliberate practices—including security audits, testing and architectural design—that ensure the code survives beyond the demo stage.

    Currently, only a small fraction of the global population writes software. If A.I. tools continue to lower barriers, that number could increase dramatically. A larger pool of creators is an encouraging prospect, but it also expands the surface area for mistakes, raising the stakes for accountability and oversight.

    What comes next

    It’s clear that vibe coding should be the beginning of development, not the end. To get there, new infrastructure is needed: advanced auditing tools, security scanners and testing frameworks designed just for A.I.-generated code. In many ways, this emerging industry of safeguards and support systems will prove just as important as the code-generation tools themselves. 

    The conversation must now expand. It’s no longer enough to celebrate what A.I. can do; the focus should also be on how to use these tools responsibly. For developers, that means practicing caution and review. For non-technical users, it means working alongside engineers who can provide judgment and discipline. The promise of vibe coding is real: faster software, lower barriers, broader participation. But without careful design and accountability, that promise risks collapsing under its own speed. 

    Shipping at the Speed of Prompt: What Vibe Coding Changes and Breaks

    [ad_2]

    Ahmad Shadid

    Source link

  • John Deere CTO Jahmy Hindman Is Turning A.I. Into a Farmer’s Tool

    [ad_1]

    Jahmy Hindman, SVP & CTO at John Deere, is leading the agricultural giant’s AI transformation, including the development of See & Spray technology that reduces herbicide use by up to two-thirds and a 2026 initiative to connect 1.5 million machines through satellite connectivity. Courtesy of John Deere

    Jahmy Hindman, featured on this year’s A.I. Power Index, oversees the integration of artificial intelligence into John Deere’s agricultural equipment, transforming the tractors, combines and tillage machinery that generations of farmers have relied upon into precision-guided, autonomous platforms. Under his leadership, John Deere has developed A.I. solutions that address the unique challenges of agriculture, where technology must perform reliably in harsh rural environments and deliver measurable results for farmers who get only one chance per year to maximize their yields. Hindman is spearheading John Deere’s ambitious 2026 initiative to connect 1.5 million agricultural machines through satellite connectivity, enabling real-time operations in regions lacking cellular coverage and accelerating the company’s A.I. model training capabilities. With global food demand expected to rise as the population approaches 10 billion by 2050, and the average farmer now 58 years old working 12-18-hour days, Hindman recognizes the critical role A.I. plays in addressing agriculture’s demographic and productivity challenges. His work extends beyond traditional farming applications to predictive maintenance engines, digital twins and advanced analytics that transform each piece of equipment into what he describes as a self-operating intelligence platform, designed to help farmers “make every seed count, every drop count, and every bushel count” in an industry where precision and reliability are paramount.

    How is the application of A.I. in agriculture different from the way tech companies use it? What does this enable for farmers?

    Our customers operate in predominantly rural environments, with changing and often harsh weather conditions. This is the place our technology must perform, which is why we deploy A.I. on the edge in agricultural equipment. While models are trained in data centers, they must run efficiently on GPUs operating in the equipment. These models perform tasks beyond human capability, processing data from various sensing modalities, like camera arrays, to make real-time decisions, such as applying herbicide only where needed. They also make decisions about the environment around the equipment to enable autonomous operations.

    Improving the precision of crop inputs allows a farmer to turn a highly varied, dynamic environment like farming into a more manageable and predictable one. This is what differentiates A.I. in farming from the digital-first applications more common in tech companies. That said, the equipment operating in agriculture collects a significant amount of operational and agronomic data. This data lays the foundation for digital-first A.I. insight solutions, which in turn enable farmers to make better management decisions through their crop cycles. 

    Computer vision and machine learning are two specific types of artificial intelligence that enhance observation and decision-making for farmers. These technologies help farmers “see” beyond human capacity, observing what’s happening at critical junctures and making precise decisions in real time throughout the growing season. Take autonomous tractors, for example: camera arrays installed on a tractor provide a 360-degree view of its surroundings, enabling high-quality depth perception that eliminates false positives like shadows.

    How does Deere build A.I. products that deliver efficient, measurable benefits to improve a farmer’s bottom line? 

    With global food demand expected to rise as the population nears 10 billion by 2050, the need for efficiency and sustainability in agriculture has never been greater. At John Deere, it’s our goal to provide farmers with the tools and technology they need to produce the food, fuel and fiber we all rely on. Our approach to A.I. starts with solving real problems that impact a farmer’s bottom line and productivity in the field. Our products are designed and tested with farmers to ensure they meet their needs. 

    One way we’re meeting this challenge is with See & Spray, which uses computer vision and machine learning to detect where every weed is in a field and precisely apply herbicide only where it’s needed. This plant-level management technology gives a machine the gift of vision, allowing it to “see” more closely with 36 cameras attached to the machine’s 120-foot carbon fiber boom. Processors determine whether an individual plant is a crop or a weed and send commands that deliver a precise dose of herbicide where the weed is. See & Spray can reduce the amount of herbicide needed by up to two-thirds, allowing farmers to grow healthier crops while spending less on inputs.
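
    Stripped to its essentials, that decision loop is easy to picture. The sketch below is a hypothetical stand-in, not John Deere’s software: a stub classifier plays the role of the onboard vision model, and the confidence threshold is an assumed value.

    ```python
    def spray_decisions(frames, classify, threshold=0.9):
        """Yield (camera_id, fire_nozzle) per frame: fire only where a weed is
        detected with high confidence, so herbicide lands only where needed."""
        for cam_id, frame in frames:
            label, score = classify(frame)
            yield cam_id, (label == "weed" and score >= threshold)

    # Illustrative usage with a stub standing in for the real vision model:
    frames = [(0, "frame-a"), (1, "frame-b")]
    stub = lambda f: ("weed", 0.97) if f == "frame-a" else ("crop", 0.88)
    for cam, fire in spray_decisions(frames, stub):
        print(f"camera {cam}: {'spray' if fire else 'hold'}")
    ```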

    The global population is projected to reach 10 billion by 2050, while the average farmer is 58 and working 12-18-hour days. How is John Deere using A.I. to address this demographic challenge before it becomes a food crisis?

    John Deere delivers highly efficient, automated farm equipment that maximizes productivity throughout the growing cycle to make every seed count, every drop count, and every bushel count. For example, our latest combine harvesters are packed with automated technologies designed to optimize harvesting efficiency. Using stereo cameras and satellite imagery, the machine continuously analyzes field conditions in real time, adjusting the speed of the machine as it moves through uneven terrain. This intelligent automation ensures every bushel is captured from the field and allows farmers to focus on other value-added tasks across the farm.

    Barron’s predicted Deere’s stock would grow by 50 percent due to the success of its A.I.-enabled solutions, and Deere’s shares hit all-time highs in May. Which A.I. capabilities are driving that momentum, and how are they impacting farmers? 

    The edge A.I. solutions we’ve deployed are aimed at helping farmers do more with less. See & Spray reduces herbicide applications while protecting, and in some cases improving, crop yields. Predictive Ground Speed Automation improves the performance of the harvesting operation while reducing the skill level necessary for the operator. Autonomous tractors allow the farmer to get necessary work done when it needs to be done, freeing available labor for more valuable tasks. 

    Our momentum is driven by A.I. solutions that create tangible value for farmers, saving them time, reducing costs, and improving yields. Today’s farms generate vast amounts of data. John Deere Operations Center, the operating system for the farm, allows farmers to set up their equipment, create work plans for each field, monitor every machine in real-time as it completes work, and analyze the data for smarter and faster decisions on the farm. These capabilities are reshaping how modern farming is done.

    John Deere is expanding satellite connectivity to reach farmers worldwide, including regions like Brazil. How are these connectivity solutions helping farmers overcome infrastructure gaps to unlock value in their operations?

    Brazil is one of the world’s top exporters of agricultural products; however, roughly 75 percent of the country lacks secure and reliable connections. This makes it challenging for farmers to take advantage of the latest technology, some of which requires reliable internet. Satellite communication services, like Deere’s SATCOM service, fill this connectivity gap and allow farmers to improve productivity, profitability and sustainability. With improved connectivity via satellites, farmers can work more efficiently and productively, reduce downtime, and coordinate among machines for more efficient use of resources.    

    What’s one assumption about A.I. that you think is dead wrong?

    A.I. will replace farmers. This isn’t the case. Rather, A.I. enhances and complements the work of the farmer, automating repetitive tasks, reducing variability in the process and providing tools for smarter, more efficient operations. Deere shares farmers’ commitment to protecting the land for future generations. We see A.I. as a tool that empowers farmers to do more with less, leading them into the digital era while solving decades-long challenges from limited skilled labor to managing weather variability.  

    Was there one moment in the last few years where you thought, “This changes everything” about A.I.? 

    A.I. is really about compute, algorithms and data. I’ll highlight two examples. For compute, Deere charted GPU performance over time, looking at CUDA cores and clock speed. When we plotted our own embedded GPU performance alongside current state-of-the-art and projected roadmaps from our compute partners, the curve was clearly exponential—even in embedded GPUs. That’s significant because compute has traditionally been a limiting factor for embedded A.I. applications, and now that constraint is disappearing. 

    The second example is data. Deere recently began streaming 150 Mbps with 70 ms latency over satellite connections. For our applications, data is generated on the edge, and collecting it in sufficient volume is challenging, often constrained by seasonal growing cycles. With a persistent, high-bandwidth connection, we can move that edge data more quickly, which accelerates the model training flywheel and leads to faster, more robust improvements. 
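
    A quick back-of-envelope calculation shows why that bandwidth matters for the training flywheel; the daily data volume below is an assumed example, not a Deere figure:

    ```python
    link_mbps = 150          # satellite throughput cited in the interview
    daily_data_gb = 50       # hypothetical edge data captured by one machine

    seconds = daily_data_gb * 8_000 / link_mbps  # GB -> megabits, then divide by rate
    print(f"~{seconds / 60:.0f} minutes to upload one machine's day")  # ~44 minutes
    ```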

    What’s something about A.I. development that keeps you up at night, that most people aren’t talking about? 

    In farming, the stakes are high. Farmers get one chance a year to do it right, so every decision and every action matters. We’re responsible for developing consistent, always-on A.I.-enabled technology that farmers trust. Farmers need to know that when they invest their hard-earned dollar into precision technology on our machines, it will perform exactly as expected every time. Making trustworthy A.I.-enabled tools for the people who need it most is what drives us to keep developing more innovative, cutting-edge technology.   

    John Deere CTO Jahmy Hindman Is Turning A.I. Into a Farmer’s Tool

    [ad_2]

    The Editors

    Source link

  • A.I. Agents Are Here. But Who’s Accountable For Their Actions?

    [ad_1]

    Without systems that tie A.I. agents back to real humans, autonomy risks becoming a recipe for manipulation and deniability. Unsplash+

    When a semi-autonomous A.I. bot called Truth Terminal sprang up on X, chirping about everything from crypto token prices to religion and philosophy, it kickstarted a new meta not only in the crypto industry but also in the larger tech ecosystem. Truth Terminal signaled the start of the agentic shift, a new era of collaboration between humans and A.I. 

    In the months since then, A.I. agents have multiplied and matured. Today, there are multitudes of A.I. agents that schedule meetings, manage crypto portfolios and act as virtual assistants. Yet as the autonomy of these assistants increases, so too does the surface area for risk and misalignment. The core dilemma remains: even though A.I. agents are making strides in their intelligence and capabilities, these systems cannot take accountability for their actions. So when an A.I. agent makes a costly mistake, who is responsible? The user or the creator? If we are to avoid dystopian effects in the future, this dilemma needs to be addressed. 

    Disembodied agents, disconnected responsibility

    Handing over human responsibilities to computer algorithms and machines brings obvious benefits like efficiency, scale and resource optimization. But it also poses significant risks. Machines have no identity, no legal standing and no way to be reprimanded for wrongdoing. Worse still, there is no existing infrastructure capable of stopping them or holding them accountable. 

    Traditional authentication mechanisms, such as passwords, API keys or OAuth tokens, were never designed for persistent, autonomous agents. They authenticate access, not intent. They validate keys, not accountability. And in an era where A.I. agents can be deployed, forked and redeployed across blockchains, platforms and protocols, this gap is no longer theoretical.

    A.I. agents can now spin up logic, influence financial decisions and shape social narratives. They can be duplicated, modified or spoofed, with the same core model existing under dozens of names or wallets—some malicious, some benign. When things go wrong, responsibility becomes impossible to pin down. Without intervention, we risk unleashing orphan agents, autonomous systems with no cryptographically provable ties to a real person, team or legal entity. 

    Identity as infrastructure for the agentic era

    Identification is merely the first step. The real challenge is making A.I. agents trustworthy. It’s become increasingly evident that the agentic age needs a foundational trust layer. Without it, we’re building systems that can act, transact and persuade, without a reliable way to trace accountability or verify authenticity.

    But we must be careful not to repeat the mistakes of the past. That layer should not rely on surveillance or centralized controls to instill trust or a level of safety. Rather, it should provide attestation and proof of agency: assurances that an agent is supervised by a human or entity who can be held to account. Luckily, such infrastructure is starting to emerge. Systems like Human Passport offer a new paradigm: decentralized identity that is portable, privacy-respecting and built for the realities of Web3 and A.I. Rather than broadcasting identity, these frameworks enable agents to present selective, verifiable proofs, showing that they’re tied to real, unique humans without revealing more than is necessary.

    What accountability looks like in practice

    So, what does accountability look like in a world filled with autonomous agents? A few models for assigning responsibility to machines and algorithms point the way:

    • Revocable credentials. Identity-linked attestations that are dynamic, not static. If an A.I. agent goes rogue or is compromised, the human or entity that authorized it can revoke its authority. These credentials provide a live connection between agents and their real-world sponsors.
    • Cryptographic delegation signatures. Provable claims that an agent is acting on behalf of a person or organization. This turns agents from black boxes into verifiable representatives. Just as SSL certificates confirm a website’s legitimacy, these signatures can verify that an agent’s actions were launched with intent, not spoofed or self-originated (a minimal sketch follows this list).
    • Human-verifiable audit trails. Tamper-proof, on-chain proofs of agency. Even if an agent executes a thousand micro-decisions autonomously, the trail of responsibility won’t vanish into the ether. The goal is to be able to trace accountability without violating privacy.
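
    As a concrete illustration of the delegation idea, the sketch below signs a delegation claim with an Ed25519 key via the `cryptography` package; the agent identifier and claim format are invented for the example, not a reference to any production scheme.

    ```python
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    sponsor_key = Ed25519PrivateKey.generate()   # held by the human sponsor
    AGENT_ID = "agent-7f3c"                      # hypothetical agent identifier

    claim = f"{AGENT_ID} acts on behalf of sponsor until 2026-01-01".encode()
    signature = sponsor_key.sign(claim)

    # Anyone holding the sponsor's public key can verify the delegation;
    # verify() raises InvalidSignature if claim or signature was tampered with.
    sponsor_key.public_key().verify(signature, claim)
    print("delegation verified")
    ```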

    It’s essential to act now while this technology is still in its nascent stage. Billions of dollars are flowing into the development and deployment of A.I. agents and with each passing month, these tools gain new capabilities, new wrappers and new interfaces. 

    If we don’t build ownership and identity systems now, we are laying the foundation for a future defined by fraud, manipulation and deniability: one where synthetic agents operate at scale with no one to answer for them, no way to trace intent and no reliable signal of trust. Because in an agentic future, identity is no longer just about who you are. It’s about proving who acts for you, and when.

    We stand at a critical inflection point. The infrastructure we build now will determine whether this next wave of automation enhances human agency or erodes it beyond recognition.

    Empower, don’t panic

    We’re at the beginning of a new age, one where machines can act with growing independence. But if we fail to embed accountability now, we’ll spend the next decade trying—and likely failing—to fix it. Luckily, we have the tools. Systems like Human Passport give us a path forward where agents can act, but never act alone. Where every action carries a signature. Where autonomy is not the opposite of responsibility, but an extension of it. If we build wisely, the agentic era won’t be a loss of control, but a leap in capability.

    A.I. Agents Are Here. But Who’s Accountable For Their Actions?

    [ad_2]

    Kyle Weiss

    Source link

  • Regulating the Algorithm: Why A.I. Policy Will Define Global Market Competitiveness

    [ad_1]

    Compliance, compute and cross-border rules are becoming the true arbiters of A.I. advantage. Unsplash+

    The contest for A.I. leadership has shifted from lab breakthroughs to law books. Over the next eighteen months, the rule-making calendars in Washington, Brussels and Beijing will have a greater impact on margins, market access and M&A than any single model release. For investors, the divergence in the U.S., E.U. and China’s approaches is not academic; it is the new map of operational risk and strategic advantage.

    Three playbooks, one race

    United States: security-first, process-heavy

    The U.S. framework is coalescing around national-security guardrails and governance standards rather than a single omnibus statute. Federal agencies now operate under the Office of Management and Budget’s (OMB) March 2024 memorandum (M-24-10), which compels agencies to formalize A.I. risk management and appoint Chief A.I. Officers, signalling that procurement and federal use will privilege vendors with robust assurance practices. NIST’s Generative A.I. Profile extends the AI Risk Management Framework into concrete practices for model testing, red-teaming and documentation. Think of it as an emerging “assurance stack” that enterprise buyers will increasingly expect to see mirrored in the private sector. 

    Export control policy is the sharper instrument. In January, the Commerce Department’s Bureau of Industry and Security (BIS) introduced an interim final rule expanding chip controls, notably adding controls on certain advanced model weights, a first step toward treating the most capable closed-weight models as dual-use technology. A related 2024 proposal laid the groundwork for mandatory reporting by developers and compute providers training powerful models. The message is clear: compute concentration and frontier training will be monitored and, where necessary, rationed. 

    Policy now intersects visibly with markets. Washington has adjusted its posture on sales of constrained accelerators to China, with reporting on resumed H20 chip shipments and talks over a bespoke, de-rated Blackwell-based part, underscoring that export policy will remain dynamic, not binary. It will shift shares between chip bins and geographies. 

    The wild card is federalism. California’s ambitious SB-1047 effort to mandate third-party audits for “frontier” models was vetoed in 2024, but the legislative momentum it generated has not dissipated; Sacramento and other statehouses remain active, even as the industry seeks federal preemption. Expect continued volatility at the state level, complicating the national go-to-market strategy. 

    European Union: market access in exchange for compliance

    The E.U. A.I. Act entered into force on Aug. 1, 2024, with a phased rollout: bans on specific uses and A.I. literacy obligations applied from Feb. 2, 2025; obligations for general-purpose A.I. (GPAI) models—including those with “systemic risk”—apply from Aug. 2, 2025; the comprehensive high-risk regime lands in August 2026, with a longer runway for embedded systems. The European Commission, based in Brussels, also published a GPAI Code of Practice and accompanying guidelines this summer. The Code is voluntary and recognized by the Commission and the A.I. Board as a credible route to prepare for compliance. The A.I. Office will coordinate implementation and oversight. Translation: providers that align early get predictability and smoother market access. 

    The guidelines outline transparency, copyright and safety expectations, including model evaluation, adversarial testing and serious incident reporting for models deemed to present systemic risks, with fines of up to 7 percent of global turnover. For large platforms and foundation-model vendors, this is a compliance program, not a checkbox exercise. 

    China: rapid administrative control, content discipline

    Beijing’s layered regime arrived early and moves fast through administrative measures. Algorithmic recommendation rules have been in effect since March 2022, requiring filings with the Cyberspace Administration of China (CAC) and imposing controls on profiling, amplification and “information cocoon” effects. Generative A.I. services have been subject to the CAC’s Interim Measures since August 2023, which require security assessments, training data governance and synthetic content labelling. Filing obligations and the CAC’s algorithm registry give authorities visibility and leverage over providers’ technical choices. 

    At the same time, China remains constrained in cutting-edge computing by U.S. export controls. Policy gyrations around “de-rated” accelerators illustrate a managed-access equilibrium: enough supply to keep domestic ecosystems moving, not enough to enable unconstrained frontier training. That balance will continue to ripple through the capex of Chinese hyperscalers and their local chip design efforts.

    Where policy meets P&L

    Compliance as a competitive moat

    In the E.U., the cost of conformity will be meaningful but predictable; early movers that operationalize the GPAI Code and documentation standards may enjoy accelerated procurement and less regulatory friction. In the U.S., assurance signals mapped to NIST profiles will increasingly become table stakes in enterprise sales and federal contracts. In China, the filing-first architecture rewards incumbents with regulatory muscle and local data pipelines, while raising the bar for foreign entrants. 

    Compute, constrained

    BIS controls on chips and model weights make compute not just a cost line but a policy variable. Firms with diversified training strategies (mixtures of smaller specialized models, retrieval-heavy systems and efficient fine-tuning) will carry less policy risk than pure frontier bets. Watch for “good enough” accelerators purpose-built to fit just under export-control thresholds, and for cloud providers packaging compliance attestations alongside GPU capacity as part of their product offerings. 

    Capital concentration and consolidation

    A.I. funding remains elevated and skewed toward incumbents. The first quarter of 2025 saw a record $66.6 billion across more than 1,000 deals, and the hyperscalers’ 2025 spending plans point to another year of unprecedented infrastructure outlay. That scale will pull services, safety tooling and data infrastructure vendors into an M&A slipstream. 

    Cross-border data and distribution

    For consumer and enterprise vendors alike, the same model will not ship the same way in all three blocs. The E.U. will reward traceability and documentation; China will insist on content controls and filings; the U.S. will probe provenance, cybersecurity and incident reporting, especially in public-sector deals. Product, legal and go-to-market need to travel together.

    The investor lens: positioning for policy alpha

    • Back assurance infrastructure. Vendors that simplify compliance, such as evaluation suites, incident reporting pipelines, copyright management and model-card automation, will be natural beneficiaries of both the E.U. A.I. Act and U.S. federal procurement norms. The E.U.’s GPAI guidance and NIST profiles are effectively shopping lists for this category. 
    • Prefer adaptable model strategies. Firms optimized for parameter-count theater will be whipsawed by export and safety rules. Those advancing efficient training, retrieval-augmented generation and domain-specific small models will face fewer chokepoints as BIS and allies tune controls. 
    • Price E.U. clarity as a premium, not a drag. The narrative that “Europe regulates, America innovates” misses the strategic upside of regulatory certainty. For many B2B use cases, the E.U.’s predictable timeline and the Code of Practice reduce legal discount rates on revenue. Execution matters, but the framework is now set. 
    • Treat China exposure as policy-beta. Returns will hinge on regulatory fluency and supply-chain agility more than pure technology. The CAC’s filing regimes and content rules favor local champions and foreign joint ventures with deep compliance capability. Export-control volatility should be assumed, not feared; portfolio companies that can pivot across chip tiers will fare better. 

    What to watch next

    • E.U. enforcement cadence as the A.I. Office operationalizes audit, incident reporting and systemic-risk oversight post-August 2025. Early supervisory choices will set industry norms. 
    • BIS follow-ons defining thresholds for model-weight controls and clarifying reporting duties for compute clusters—details that will influence where and how frontier models are trained. 
    • U.S.-China chip détente or divergence, including any further carve-outs for “de-rated” accelerators and China’s reaction through indigenous GPU roadmaps. 

    Across all three blocs, the through-line is power: who sets the standards, who grants market access and who controls the scarcest inputs—compute, data and trust. Regulation is no longer a compliance afterthought. It is an industrial strategy by other means and, for disciplined capital, a source of lasting competitive advantage.

    Regulating the Algorithm: Why A.I. Policy Will Define Global Market Competitiveness

    [ad_2]

    Gonçalo Perdigão

    Source link

  • How B-schools are introducing digital technology courses to keep students’ skills updated

    [ad_1]

    B-schools are responding to the pandemic-led acceleration of digitisation of businesses. IIMB has introduced a new core course on digital businesses this year, while electives like gamification, Web 3.0 and Metaverse respond to some of the latest technology trends. IIMA has started courses on digital strategy and transformation and digital marketing. “The traditional way of marketing or strategy or managing HR are all changing. Real-time data about what employees are doing is useful to understand how giving a day off in the middle of the week may improve productivity,” says D’Souza.

    Consulting major BCG India, one of the largest recruiters at the leading B-schools, recently launched ‘BCG X’—a vertical to bring together more than 2,500 digital and AI experts, tech designers and builders globally to service client needs, as the nature of businesses they consult for is also evolving. “We do a lot of work with large start-ups now, which are digital-first companies,” says Sankar Natarajan, Managing Director and Head of Recruiting at consulting firm BCG India, adding that the new vertical has a mix of people with specific functional and domain expertise, but also requires management and consulting skills.

    Also Read: What are India’s top B-schools doing to prepare students for the digital age?

    Meanwhile, agility, an umbrella term for skills required to overcome the VUCA (volatility, uncertainty, complexity and ambiguity) world’s challenges, is a key ingredient for managers leading the businesses of tomorrow, the country’s top B-schools and businesses agree. “Post-Covid, while companies continue to think and implement long-term strategic plans, there is a need to be more agile about certain decisions. For example, clients we work with have to revisit decisions due to external shocks such as supply chain uncertainty, geopolitical developments, changes in commodity prices, etc. So, companies and consultants have to be adaptive and all these elements come to bear a lot more,” says Natarajan. Adds Varun Nagaraj, Dean of S.P. Jain Institute of Management and Research (SPJIMR): “The pandemic exposed to the whole world that somebody falls sick somewhere and, suddenly, the prices of auto rickshaw parts go up. Therefore, an appreciation for people who can operate in that kind of a world has gone up.”

    Institutes are going about preparing students for unfamiliar and shifting situations in different ways. D’Souza says IIMA has introduced courses on innovation, including one on ‘Innovation, Live!’, a hands-on, practical course aimed at developing a student’s ability to come up with out-of-the-box solutions, understand innovation methodologies and learn corporate decision-making processes. “In the last few years, we have been thinking a lot about divergent thinking, where there are different solutions to a problem. This has become central to quite a few courses operating on the campus,” he says. For SPJIMR, one way is to focus on solutions in core courses. “For example, in human resources, how do we introduce a diversity, equity, inclusion solution in Afghanistan or in a company that’s like that?” says Nagaraj. IIM Bangalore (IIMB) is emphasising digital, data and ESG-related skills to help students catch early trends in external changes that contribute to VUCA. “If you see a change in demand, or you see a new trend towards a new technology, or you see some other consumer or social trend, the focus on data will help students understand these kinds of changes,” says Rishikesha T. Krishnan, Director of IIMB.

    [ad_2]

    Source link