ReportWire

Tag: Emerging technologies

  • Elon Musk, AI Startups, and The Case of The Allegedly Missing Trade Secrets

    For the second time in a matter of days, an artificial intelligence company has filed a lawsuit in California alleging that a former employee stole trade secrets. The filing comes just after Elon Musk’s xAI alleged it had experienced corporate espionage.

    In this case, Scale AI, a leading AI data-labeling firm, sued competitor Mercor Inc. in federal court Wednesday, accusing the startup and a former employee of misappropriating trade secrets to win new business.

    Scale is valued at approximately $29 billion following a massive $14.3 billion Meta investment.

    The allegations

    The lawsuit, filed in the U.S. District Court for the Northern District of California, targets Eugene Ling, Scale’s former head of engagement management, and his new employer, Mercor.

    The case is Scale AI Inc. v. Mercor.io Corporation, 25-cv-07402.

    In its court filing, Scale alleges Ling downloaded over 100 confidential documents, including proprietary customer strategy materials and product information, to a personal Google Drive while still employed at the company and after meeting with Mercor’s CEO.

    According to the complaint, Ling then contacted one of Scale’s top clients, referred to as “Customer A,” on behalf of Mercor while still at Scale, even arranging calls to pitch Mercor’s services. The lawsuit claims this effort was an attempt to steal business worth “millions of dollars.”

    Attempts to reach Ling’s attorney were unsuccessful. But on his social media, Ling posted that he “never used” any of the Scale files and is “still waiting for guidance on how to resolve this.”

    “I just wanted to say that there truly was no nefarious intent here,” he wrote. “I’m really sorry to my new team at Mercor for having to deal with this.”

    Mercor’s response

    Mercor co-founder Surya Midha denied any misuse of Scale’s intellectual property, stating that while several former Scale employees have joined Mercor, the two firms operate under “intentionally different” strategies. He added that Mercor is investigating the matter and had offered to have Ling delete any documents in his possession.

    “While Mercor has hired many people who departed Scale, we have no interest in any of Scale’s trade secrets and in fact are intentionally running our business in a different way,” Midha said in a statement.

    “Eugene informed us that he had old documents in a personal Google Drive, which we have never accessed and are now investigating,” the statement continued. “We reached out to Scale six days ago offering to have Eugene destroy the files or reach a different resolution, and we are now awaiting their response.”

    Scale, in turn, argues that ordering Ling to destroy the files would eliminate crucial evidence. The company is seeking damages, legal fees, an injunction barring Mercor from using the stolen material, and the return of all misappropriated documents.

    Scale’s legal move adds another headache to a turbulent period for the company, which has recently seen Meta’s massive investment, the departure of CEO Alexandr Wang to Meta, and a 14% workforce reduction.

    Cutthroat competition comes to the courts

    The case underscores the fiercely competitive nature of the AI space, where intellectual property—particularly data strategy and customer relationships—is the key to market dominance. The situation mirrors another recent trade secret lawsuit, in which Elon Musk’s xAI sued a former engineer for allegedly stealing confidential information on his way to a rival.

    In that case, Musk’s company alleges that Xuechen Li stole confidential files tied to the development of Grok, the company’s chatbot, before departing for rival OpenAI.

    The complaint, filed in federal court in California, accuses Li, who joined xAI last year as an engineer, of copying proprietary materials in July 2025, shortly after agreeing to take a job at OpenAI. Court filings say Li also sold $7 million worth of vested xAI stock ahead of his departure.

    According to the lawsuit, Li admitted during an internal meeting on Aug. 14 that he had taken sensitive documents, though xAI alleges he attempted to “cover his tracks” by deleting files. Forensic checks later uncovered additional materials still stored on his devices, the company alleges.

    Musk’s startup argues that the stolen information could allow OpenAI to enhance ChatGPT with what it describes as xAI’s “more innovative AI and imaginative features.”

    That case is xAI Corp v. Xuechen Li, U.S. District Court, Northern District of California, No. 3:25-cv-07292-RFL.

    What are the broader implications?

    For investors and the AI industry in general, the lawsuit highlights two key risks.

    Firstly, the theft of highly complex and coveted intellectual property, or even the appearance of it, can rapidly alter competitive positioning in a market where trust and proprietary data are currency. Secondly, it signals that AI startups may increasingly turn to legal avenues to enforce boundaries and protect their turf.

    As AI becomes part of ever more of the technology we see and use every day, the companies that make it are going to become even more fiercely protective of their products and brands. The value of proprietary data and client relationships makes legal protection, and the precedents set through lawsuits like this one, the next frontier for companies looking to safeguard their tools and reputations.

    “Scale has become the industry leader on the strength of our ideas, innovation, and execution,” Joe Osborne, a spokesperson for Scale, said in a statement. “We won’t allow anyone to take unlawful shortcuts at the expense of our business.”

    Riley Gutiérrez McDermid

  • Meet the Silicon Valley Donors Backing California’s Redistricting Push

    In the latest sign that Silicon Valley titans are increasingly throwing their weight behind political issues, Netflix co-founder Reed Hastings has contributed $2 million to support Gov. Gavin Newsom’s Proposition 50 campaign.

    The donation underscores how Silicon Valley’s deep-pocketed executives are increasingly wielding influence in California politics and beyond.

    The November ballot measure would scrap California’s independent redistricting commission, returning map-drawing authority to the state legislature, where Democrats hold firm majorities.

    Backers argue the change would counterbalance GOP-led gerrymanders in states like Texas and Florida, potentially netting Democrats half a dozen U.S. House seats in 2026.

    Hastings’ donation highlights the growing role of tech fortunes in political fights. The Netflix co-founder has long been a high-profile donor, previously giving $3 million to Newsom’s 2021 recall defense. He has also funded statewide education reform initiatives and donated heavily to national Democratic causes.

    Other Silicon Valley figures are joining him

    Ron Conway, one of the Valley’s most prolific angel investors, has pledged support, and Y Combinator’s Paul Graham gave $500,000. Their involvement echoes a broader trend: Tech executives are increasingly channeling personal wealth into shaping policy outcomes, often through ballot measures where their dollars can have an outsized impact.

    California has been a testing ground for such efforts.

    In 2020, Uber, Lyft and DoorDash collectively spent more than $200 million to pass Proposition 22, rolling back state labor rules that threatened their business models. More recently, venture capital and crypto executives have funded campaigns to resist new taxes and regulations.

    Tech money is increasingly flowing into politics

    The pattern isn’t limited to California. At the national level, technology money has become a major force in politics.

    Sam Bankman-Fried, the disgraced former crypto billionaire, spent more than $40 million on congressional races in 2022 before his collapse. Some estimates put his total political contributions at more than $70 million across 18 months, reflecting his ambition to exert influence at the federal level.

    Amazon, Microsoft and Alphabet remain among the top corporate spenders on lobbying in Washington. These interventions have helped shape debates ranging from antitrust reform to AI regulation.

    According to Axios, Meta spent $8 million on lobbying in the first quarter of 2025, followed by Amazon at $4.3 million and Microsoft at $2.4 million. OpenSecrets reports Amazon’s total federal lobbying for the first half of 2025 at $9.35 million, with Alphabet (Google’s parent) at around $7.81 million.

    For critics, Proposition 50 represents another instance of wealthy tech donors tilting the political playing field.

    Opponents, including GOP donor Charles Munger Jr., who has already committed $10 million to defeat it, say dismantling the independent redistricting system voters approved in 2008 is a naked power grab. Former House Speaker Kevin McCarthy has also jumped into the fray, casting the measure as an effort by Democrats and their Silicon Valley allies to “rig the map.”

    Are Silicon Valley tycoons the kingmakers yet?

    What makes the fight especially significant is its national impact.

    California, with 52 House seats, remains the biggest single prize in congressional redistricting. Even a small shift in district lines could determine control of the House in 2026. For Democrats, aligning with wealthy tech donors offers a way to keep pace with Republican fundraising networks that have long used redistricting to their advantage.

    Whether Hastings and his peers can sway voters remains uncertain. Early polls show Californians split on Proposition 50, reflecting skepticism about giving lawmakers more control. But the torrent of Silicon Valley money ensures that by November, voters will be hearing arguments on both sides at near-constant volume.

    If successful, the campaign would further cement Silicon Valley not only as an economic powerhouse but also as a decisive political player, with ambitions that stretch far beyond California’s borders.

    Riley Gutiérrez McDermid

  • Meet the Top 10 AI-Proof Jobs That Everyone Wants

    AI is rapidly scaling in the workforce and stoking fears of an employment crisis, as current workers and new entrants alike try to figure out whether their careers are on the chopping block.

    That quick pace is backed by emerging data. As a result, people are trying to find “AI-proof” jobs that can guarantee job security as companies around the world choose to automate tasks instead of hiring new workers.

    Although no study can definitively say which occupations are 100% AI-proof and which are doomed to automation, a recent Microsoft study and its findings can shed light on the matter.

    A Microsoft study published last month measured how productively AI can be applied to the common tasks of different jobs.

    Microsoft researchers analyzed more than 200,000 anonymized conversations from Bing Copilot, the company’s search engine chatbot, from January 2024 through September 2024 to see “what tasks users perform with a mainstream, publicly available, free-to-use generative AI chatbot,” the study says.

    The study then assigned each job an “AI applicability score,” a number combining which work activities people most often sought AI assistance for, how successfully the AI completed those tasks, and the scope of their impact.
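
    The article doesn’t reproduce the study’s exact formula, so here is a minimal, purely illustrative sketch of how such a score could be assembled, assuming an unweighted product of the three ingredients described above. Every name and number below is hypothetical, and the study’s real weighting and normalization may differ.

        from dataclasses import dataclass

        @dataclass
        class WorkActivity:
            name: str
            assistance_share: float  # share of conversations seeking AI help with this activity
            completion_rate: float   # share of those attempts judged successful
            impact_scope: float      # 0..1 rating of how much of the activity the AI covered

        def applicability_score(activities: list[WorkActivity]) -> float:
            """Average the per-activity products into one occupation-level score."""
            if not activities:
                return 0.0
            return sum(
                a.assistance_share * a.completion_rate * a.impact_scope
                for a in activities
            ) / len(activities)

        # Hypothetical profile for a "writer" occupation.
        writer = [
            WorkActivity("draft copy", 0.62, 0.80, 0.70),
            WorkActivity("edit text", 0.45, 0.85, 0.60),
        ]
        print(f"applicability score = {applicability_score(writer):.2f}")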

    There are caveats

    Although the study shows which occupations AI can handle best and which it handles less well, Microsoft says that doesn’t necessarily mean those jobs will be eliminated.

    The AI applicability score highlights “where AI might change how work is done, not take away or replace jobs,” Microsoft representatives told Gizmodo earlier this month.

    “Our research shows that AI supports many tasks, particularly those involving research, writing, and communication, but does not indicate it can fully perform any single occupation,” Microsoft said.

    The data also does not imply that jobs with high AI applicability scores will have higher wages thanks to AI incorporation, the study noted, because the data does not include “the downstream business impacts of new technology.”

    Why companies automate

    Microsoft believes AI can be used to augment these jobs rather than completely automating them.

    But is that what corporate executives want? It’s tough to make a blanket statement on that, but early signs indicate that executives might be more pro-automation than not.

    Increasingly, executives around the corporate world are voicing their expectations and desires to see AI cut costs across the workplace. That outlook has contributed to a slowdown in hiring, particularly for early-career workers in the white-collar fields that, as the Microsoft study also shows, face the biggest threat from AI.

    “Artificial intelligence is going to replace literally half of all white-collar workers in the U.S.,” Ford CEO Jim Farley said at the Aspen Ideas Festival just last month.

    Several executives have also already put into effect new hiring policies this year that ask managers to explain why an AI agent can’t fulfill the role before they can go ahead with hiring a new worker.

    Just because you can doesn’t mean you should

    AI can cut labor costs and increase profit for companies. But that is not yet a case for wholesale automation.

    Although AI can automate some of these jobs, that doesn’t mean it does them well.

    For example, Microsoft says that writers are in the top 10 for highest AI applicability. But AI-generated writing has been criticized far and wide, particularly for its copyright issues, as AI feeds on the work of existing human writers to “create” new pieces.

    The disruption of the labor market that is bound to follow the automation of certain jobs should also be a cause for concern.

    Former Google executive Mo Gawdat said earlier this month that he believes this AI-driven labor problem is one of several aspects of the way we approach AI that is bound to lead to a short-term dystopia in the next 15 years.

    Much like the Microsoft researchers who worked on the study, many other experts argue that using AI to augment certain fields is a much better way to fold it into the economy for productivity gains than outright automation.

    So what are the jobs?

    Here are the ones most likely to stay human-run, the study says:

    10. Tire Repairers and Changers

    9. Ship Engineers

    8. Automotive Glass Installers and Repairers

    7. Oral and Maxillofacial Surgeons

    6. Plant and System Operators

    5. Embalmers

    4. Helpers-Painters, Plasterers

    3. Hazardous Materials Removal Workers

    2. Nursing Assistants

    1. Phlebotomists (healthcare professionals trained to draw blood samples)

    AI works with data. So it is not surprising that the list overwhelmingly includes healthcare and blue-collar jobs, both of which require specialized physical expertise rather than clear-cut data synthesis.

    In the healthcare industry specifically, AI adoption has also been particularly slow due to limited datasets. Less than 10% of surgical data is publicly available, owing to strict regulations.

    The jobs that are at highest risk 

    Microsoft also looked at jobs that it deemed had the highest levels of AI applicability. Those were, rather unsurprisingly, knowledge work occupations and sales roles, where AI is already being rapidly incorporated.

    Here is the list of the top 10 jobs that have the highest levels of AI applicability:

    10. Broadcast announcers and radio DJs

    9. Ticket agents and travel clerks

    8. Telephone operators

    7. CNC tool programmers

    6. Customer service representatives

    5. Writers and authors

    4. Sales representatives of services

    3. Passenger attendants

    2. Historians

    1. Interpreters and translators

    Ece Yildirim

  • Meta Stock Drops On News It Used Taylor Swift As Chatbot Without Permission

    Meta has ignited a firestorm after chatbots created by the company and its users impersonated Taylor Swift and other celebrities across Facebook, Instagram, and WhatsApp without their permission.

    Shares of the company have already dropped more than 12% in after-hours trading as news of the debacle spread.

    Scarlett Johansson, Anne Hathaway, and Selena Gomez were also reportedly impersonated.

    Many of these AI personas engaged in flirtatious or sexual conversations, prompting serious concern, Reuters reports.

    While many of the celebrity bots were user-generated, Reuters uncovered that a Meta employee had personally crafted at least three.

    Those include two featuring Taylor Swift. Before being removed, these bots amassed more than 10 million user interactions, Reuters found.

    Unauthorized likeness, furious fanbase

    Under the guise of “parodies,” the bots violated Meta’s policies, particularly its ban on impersonation and sexually suggestive imagery. Some adult-oriented bots even produced photorealistic pictures of celebrities in lingerie or a bathtub, and a chatbot representing a 16-year-old actor generated an inappropriate shirtless image.

    Meta spokesman Andy Stone told Reuters that the company attributes the breach to enforcement failures and said it plans to tighten its guidelines.

    “Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” he said.

    Legal risks and industry alarm

    The unauthorized use of celebrity likenesses raises legal concerns, especially under state right-of-publicity laws. Stanford law professor Mark Lemley noted the bots likely crossed the line into impermissible territory, as they weren’t transformative enough to merit legal protection.

    The issue is part of a broader ethical dilemma around AI-generated content. SAG-AFTRA voiced concern about the real-world safety implications, especially when users form emotional attachments to seemingly real digital personas.

    Meta acts, but fallout continues

    In response to the uproar, Meta removed a batch of these bots shortly before Reuters made its findings public.

    Simultaneously, the company announced new safeguards aimed at protecting teenagers from inappropriate chatbot interactions. The company said that includes training its systems to avoid romance, self-harm, or suicide themes with minors, and temporarily limiting teens’ access to certain AI characters.

    U.S. lawmakers followed suit. Senator Josh Hawley has launched an investigation, demanding internal documents and risk assessments regarding AI policies that allowed romantic conversations with children.

    Tragedy with real-world consequences

    One of the most chilling outcomes involved a 76-year-old man with cognitive decline who died after trying to meet “Big sis Billie,” a Meta AI chatbot modeled after Kendall Jenner.

    Believing she was real, the man traveled to New York, suffered a fall near a train station, and later died of his injuries. Internal guidelines that once permitted such bots to simulate romance—even with minors—have heightened scrutiny of Meta’s approach.

    Riley Gutiérrez McDermid

  • Touted As The Tesla-Killer, Lucid Scrambles to Stay On The NASDAQ

    Beleaguered electric vehicle company Lucid Motors (LCID) has implemented a reverse stock split, consolidating shares to meet NASDAQ’s $1 minimum trading price and prevent delisting.

    As of Friday, Lucid’s share price was down over 96% from its all-time high of $64.86, reached in February 2021. 

    While this move may protect the company from being removed from the exchange for now, it does little to address the underlying issues plaguing the struggling electric vehicle maker.

    Founded in 2014 by former Tesla (TSLA) engineer Peter Rawlinson, Lucid initially aimed to compete in the luxury EV segment with its flagship Air sedan, positioned as a premium rival to Tesla’s Model S.

    It had ambitious production targets, initially aiming for 20,000 vehicles in 2022, then 49,000 in 2023 and 90,000 in 2024. But the company fell far short of those targets, and in 2024 Lucid delivered just over 10,200 vehicles.

    The company’s financials highlight the scale of its challenges, with revenue rising 36% to $808 million in 2024 but net losses widening to $3.1 billion. That is a loss of around $299,000 per vehicle sold.

    Lucid has been trying to stay in the game

    Multiple price cuts for the Air sedan, from around $80,000 to roughly $71,400, reflect ongoing efforts to stay competitive, but high manufacturing costs leave the company little pricing flexibility.

    Despite ample liquidity of about $4.8 billion and expanding manufacturing facilities in Arizona and Saudi Arabia, Lucid’s growth prospects remain uncertain. The company faces stiff competition from Tesla and other automakers, and the launch of the more affordable Gravity SUV, a potential game-changer, has been delayed.

    Analysts forecast modest near-term growth, with 2025 revenue expected to reach $1.3 billion, a 61% increase, and losses projected to narrow slightly.

    However, even optimistic forecasts place Lucid’s market cap at just $6.4 billion, roughly five times its expected 2025 sales. In contrast, Tesla’s valuation remains over $1 trillion, with a price-to-sales ratio of around 12.
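
    For reference, the multiple quoted above follows directly from the paragraph’s own figures; a quick check of the arithmetic, with rounding:

        \[
        \text{P/S}_{\text{Lucid}} \approx \frac{\$6.4\,\text{B market cap}}{\$1.3\,\text{B expected 2025 revenue}} \approx 4.9,
        \qquad
        \text{P/S}_{\text{Tesla}} \approx 12
        \]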

    If Lucid can deliver on its growth plans and achieve a valuation multiple comparable to Tesla’s, the stock has the potential to double or triple. For now, the reverse stock split provides a temporary reprieve, but investors should weigh it carefully given the company’s volatile financials and stiff competition.

    Will Lucid have a market for long?

    Lucid Motors’ stock had a rough week, reflecting broader investor concerns about the future demand for electric vehicles (EVs) and overall market sentiment. The luxury EV maker’s shares fell sharply after analysts highlighted ongoing challenges in the industry, including increased competition, rising production costs, and moderating consumer interest.

    Despite earlier excitement around Lucid’s technological innovations and plans to expand its luxury lineup, recent earnings reports and market data suggest that the company may be facing a more challenging environment than previously anticipated.

    Persistent supply chain disruptions, combined with skepticism over EV adoption rates, are weighing on investor confidence.

    For investors, Lucid’s recent decline, which has now reversed nearly all of its recent gains, signals heightened caution among shareholders in a fluctuating EV sector.

    As automakers compete fiercely for market share, especially in the premium segment, Lucid’s future profitability remains under close scrutiny.

    Riley Gutiérrez McDermid

  • What Tech Jobs Don’t Drug Test? That Might Depend

    Workers who live in states where cannabis is legal often face a conundrum.

    Can they continue using a substance deemed by lawmakers to be fit for public consumption, even if they have an employer who might drug test? Or do they avoid it altogether because they don’t know what their employer’s drug policy is? And does that policy cover only “hard” drugs like cocaine, opioids, or methamphetamines, or does it test for cannabis, too?

    These days, the answer is a lot more flexible than it was even a decade ago. An increasing number of employers are easing their drug testing policies for cannabis, reflecting shifting attitudes toward legalization and workplace inclusion.

    According to a comprehensive guide by DDMCannabis, several industries now offer positions where cannabis use is either tolerated or explicitly not tested for.

    Jobs in sectors such as hospitality, entertainment, and certain tech roles tend to be more lenient, especially in states where cannabis has been legalized or decriminalized.

    One of the most tolerant industries for cannabis has been tech, which is usually focused more on what an employee is doing at work with their brain than what they are doing at home with their free time.

    Some tech companies have even adopted “don’t drug test” policies to attract talent, emphasize a focus on job performance over substance use, or accommodate existing employee use.

    “Jobs in technology, marketing, and creative work tend to focus on talent over testing,” the guide says. “Whether you’re a software developer, graphic designer, copywriter, or video editor, most employers in these fields don’t bother with pre-employment drug testing or random drug testing.”

    However, experts caution that even in these environments, employers may still have strict policies against impaired work performance or safety-sensitive roles where testing remains mandatory. Workers should understand specific company policies and local laws, as regulations continue to evolve nationwide.

    So where are the safest places to work if you use legal drugs?

    As cannabis becomes more mainstream, the landscape of employment policies is likely to continue shifting, providing more opportunities for workers in cannabis-friendly jobs without the concern of workplace drug tests.

    A growing number of large employers have adopted policies that either exclude or downplay drug testing for employees, reflecting shifts in workplace norms and legal landscapes. Among the most prominent are hospitality, tech, and retail giants, with some publicly emphasizing a focus on performance and safety rather than punitive drug screening.

    For example, companies like Microsoft, Netflix, and Amazon do not conduct routine drug tests on their workers, citing their mission to foster inclusive environments and adapt to changing regulations. Likewise, Starbucks, McDonald’s, and Target have publicly stated they do not require drug testing, emphasizing their commitment to workplace safety and employee well-being.

    Drug testing changes by location

    In sectors such as retail and service industries, policies are often shaped by local laws; for instance, in certain states, regulations restrict or prohibit random drug testing unless justified by safety concerns. Meanwhile, some companies reserve the right to drug test in response to suspicions of impairment following accidents or misconduct.

    The shift is driven by several factors: increased legalization, broader acceptance of medicinal and recreational cannabis, and the recognition that drug testing may not correlate directly with job performance.

    Industry observers note that, in many cases, unless an employee is visibly impaired or involved in safety-sensitive roles, these policies focus more on trust and flexibility than on punitive measures.

    Will drug testing for cannabis eventually be a thing of the past?

    As workplace norms evolve, the trend toward relaxed drug testing policies continues to reshape hiring practices, challenging long-held assumptions about substance use and employment standards.

    Or, as Maryland Democrat Jamie Raskin more concisely puts it, employment laws need to reflect the times in which we live.

    “We don’t want to be disqualifying half of the population, tens of millions of people, for having done something that most of our recent presidents have done,” he said. “You’re taking huge numbers of people off the field.”

    Riley Gutiérrez McDermid

  • AI Is Golden or ‘Not Rational’: Wall Street’s Battle Over Which Road to Take

    As investments in artificial intelligence continue to soar, some analysts are raising alarms about a looming bubble that could burst and trigger broader market declines. Others, however, say they’ve never been so sure that it is a growing opportunity.

    So who is right? Well, on Wall Street, there’s a pick-your-flavor opinion for whatever it is you want to back, so we can’t determine that. But we can show you what each side is thinking.

    Firstly, that the sector is overvalued. Analysts, investors, and even the CEOs of AI giants have expressed concerns that current valuations of AI-related stocks may be disconnected from their underlying fundamentals.

    The rapid rally in companies involved in AI hardware, software, and infrastructure—including chipmakers, cloud providers, and automation firms—has driven valuations to levels that many consider unsustainable.

    Why does that matter? Because everything that goes up must eventually come down.

    That means that recent market volatility and warnings from veteran investors suggest that a sudden reassessment of valuations could result in a significant downturn, similar to past technology and internet bubbles.

    The hype men

    Secondly, that growth is why those valuations are worth it.

    Despite recent concerns about overvaluation and a possible slowdown in AI-related growth, UBS analysts reaffirmed their positive outlook on the sector this week, buoyed by Nvidia’s hotly anticipated quarterly results.

    In a note released after Nvidia reported earnings that just barely exceeded expectations, UBS said that the core case for AI investment remains intact.

    “While valuations might appear stretched in the short term, the fundamental need for AI technology across industries continues to grow,” UBS wrote in a note to investors.

    The firm highlighted Nvidia’s role as a leader in semiconductors and AI infrastructure, emphasizing that the company’s robust revenue growth, projected at 48% for the current quarter, is a sign of ongoing demand for AI hardware and software solutions.

    Analysts also pointed out that the broader enterprise move toward integrating AI is supported by increasing capital spending, which bodes well for the sector’s long-term prospects.

    “Investors should maintain conviction,” UBS added, “as the demand for scalable, high-performance AI platforms is only poised to accelerate.”

    Market experts agree that while short-term volatility is inevitable, the fundamental structural drivers, such as the adoption of AI in cloud computing, autonomous vehicles, and enterprise AI, suggest the sector’s growth story remains robust for the foreseeable future.

    The haters

    Not everyone is as bullish on AI as UBS.

    Take OpenAI CEO Sam Altman, a man who is watching billions of dollars being poured into his competitors. Altman caused a major market rout when he said that investors are getting “over-excited” about AI.

    “Are we in a phase where investors as a whole are over-excited about AI? My opinion is yes. Is AI the most important thing to happen in a very long time? My opinion is also yes,” he told The Verge, adding that he thinks some valuations of AI startups are “insane” and “not rational.”

    Investors are also increasingly wary after reports that Meta is considering a “downsizing” of its artificial intelligence division, with some executives expected to depart.

    This potential shift marks a notable departure from Meta CEO Mark Zuckerberg’s recent heavy investments in transforming the company’s AI operations.

    Over the past few months, Zuckerberg has championed a major overhaul of Meta’s AI strategy, emphasizing its critical role in enhancing user experience and competing with rivals like OpenAI and Google.

    The New York Times cited sources close to the company, indicating that the restructuring could lead to significant layoffs or a shakeup in leadership.

    The planned changes have raised questions among market watchers about whether Meta’s aggressive AI ambitions are being reassessed, or if internal challenges are forcing a strategic pivot. The move signals a period of uncertainty for Meta’s AI efforts, which had been a key part of Zuckerberg’s vision for the company’s future growth.

    So full speed ahead or hit the brakes?

    While some experts acknowledge the transformative potential of AI, they caution investors to remain vigilant and avoid chasing speculative gains that lack proper valuation.

    “The risk is that we are in a man-made bubble that will eventually burst, causing widespread damage,” said industry veteran Michael Johnson.

    “Even when the dotcom bubble burst, there were a handful of fairly obvious winners that eventually came roaring back,” said CNBC‘s Jim Cramer. “If you gave up on Amazon in 2001, you missed the $2 trillion (£1.4 trillion) boat.”

    Cramer has been investigated by the Securities and Exchange Commission at least once, and has also drawn criticism for past comments on market manipulation.

    Riley Gutiérrez McDermid

  • ‘The Magic of Burning Man’: Elon Musk’s DOGE Point Man is Now An MDMA Consigliere

    Antonio Gracias, Elon Musk’s close ally and Tesla (TSLA) board member, has pivoted to a controversial takeover of Lykos Therapeutics, a biotech firm developing MDMA-based therapies once rejected by the FDA for safety concerns, The Guardian reports.

    As the psychedelics industry inches toward mainstream acceptance, new developments reveal how politics, science, and industry interests are shaping the future of mental health treatments.

    But Gracias’ move from a government role to the helm of a company now seeking regulators’ approval is raising eyebrows, The Guardian reports.

    Lykos, which announced a $50 million recapitalization earlier this year, has been at the forefront of some of the most promising research into MDMA-assisted therapy. But the FDA’s recent rejection of its application, which cited flaws linked to bias and trial design, has cast doubt on its prospects for approval.

    Amid those debates about scientific rigor, the agency ordered new Phase 3 testing, a process likely to take several years and cost millions.

    The company’s opponents argue that flawed science led to the rejection, while supporters believe in the therapeutic potential of MDMA under proper regulation.

    Neither Lykos nor Gracias responded to a request for comment.

    ‘Greasing the wheels’ for regulation?

    Gracias’s recent leadership of Lykos, financed with a $50 million infusion backed by wealthy investors including hedge funds and veteran executives, arrives as Republican and Democratic officials alike are warming to the idea of faster approval for psychedelic medicines.

    Some top Trump-era health officials, as well as lawmakers, have publicly supported reevaluating the regulatory process, citing promising early results and patient demand.

    This is raising alarm bells with ethics experts.

    “You can’t be greasing the wheels and then say, ‘OK, now I’m going to quit and go pursue that approval,’” Cynthia Brown, senior ethics counsel at the non-profit watchdog group Citizens for Responsibility and Ethics in Washington, told The Guardian.

    This political backing fuels concerns about politicizing the science. Critics warn that relaxing FDA standards or fast-tracking approvals under the influence of industry insiders could undermine the integrity of scientific research, risking future setbacks if safety is compromised.

    “The challenge is ensuring that enthusiasm doesn’t outpace the evidence,” Mason Marks, a Harvard law professor specializing in drug policy, told The Guardian. “Science must remain independent from politics to avoid bringing the entire industry into disrepute.”

    Meanwhile, Gracias’s ties to Musk and the military, along with his past work in government, have raised questions about conflicts of interest amid the push for regulatory reform.

    So will the FDA now reconsider?

    The FDA now has broad discretion to reconsider its previous decisions, potentially issuing emergency authorizations or expedited reviews, creating opportunities for firms like Lykos to accelerate their path to market.

    “Maps and Gracias are going to try to seize the moment that we’re in,” Ifetayo Harvey, a former Maps employee and executive director of the People of Color Psychedelic Collective, said. “I think the aim is to get MDMA-assisted psychotherapy approved by the FDA by any means necessary.”

    Gracias’ involvement raises quite a few questions for the burgeoning psychedelics industry.

    It stands at a crossroads: Whether to forge ahead under politicized but promising conditions or to proceed cautiously to ensure long-term safety and efficacy. As political figures harness deepening public interest in mental health and wellness, industry insiders and regulators face a delicate balance between hope and harm, progress and prudence.

    “With the lack of transparency, it leaves us really grasping at what it even means to be Doge,” Faith Williams, a policy director at the Project on Government Oversight, a non-profit watchdog group, told The Guardian. “We have seen so many, if not outright conflicts of interest then potential for conflicts of interest, and if not outright corruption, potential for corruption.”

    The magic of Burning Man

    Rick Doblin is the founder and president of the Multidisciplinary Association for Psychedelic Studies (MAPS) and a prominent, longtime advocate for research into the therapeutic use of psychedelic drugs. He said he immediately saw a partnership.

    “It was the magic of Burning Man,” Doblin said. “I was sort of looking for a white knight that would come in and would be more focused on healing and on public benefit.”

    That spring, Lykos Therapeutics announced a major leadership shakeup, appointing a new CEO and chief medical officer and restructuring its board of directors. The moves came as Gracias and investor Christopher Hohn assumed control.

    “Gracias is actively involved in the company’s day-to-day operations,” an unnamed Maps director and industry insider told The Guardian, emphasizing the influence Gracias now wields over the firm’s strategic direction as it aims to regain regulatory confidence and accelerate clinical trials.

    This leadership shift underscores the high stakes and intense industry interest in psychedelics, with supporters and critics alike watching closely as the company navigates complex regulatory and scientific hurdles.

    But even more unusually, backers of the company have been accused of a fundraising effort that allegedly involved doing drugs with investors.

    “Definitely part of their fundraising strategy is ‘Meet rich people at Burning Man, do psychedelics with them and get Maps money,’” Harvey, who was Doblin’s executive assistant in 2015, told The Guardian.

    Maps addresses allegations of drugs with investors

    Maps denied that it used drugs as a means of drumming up investment.

    “MAPS conducts all fundraising activities with the highest integrity and maintains strict ethical boundaries in all donor relationships and fundraising activities. MAPS does not supply controlled substances at any events or gatherings, nor do we use substances as a fundraising tool or strategy,” Maps said in a statement.

    Doblin also told Business Insider last year that giving drugs to donors was “not common”.

    Riley Gutiérrez McDermid

  • Marc Benioff Can’t Get Enough of the AI Hype—Unless You Say ‘AGI’

    Marc Benioff, a guy who has poured money into artificial intelligence investments and claims that AI tools are doing half of the work at Salesforce, isn’t so sure about all the hype around this whole sector all of a sudden. During an appearance on the “20VC” podcast, as spotted by Business Insider, Benioff poured water on the concept of “artificial general intelligence,” calling the obsession around the industry’s white whale “hypnosis.”

    During the conversation, podcast host Harry Stebbings—himself a venture capitalist who has lots of money tied up in the success of AI—pointed to a recent interview The Verge conducted with Amazon AGI Labs chief David Luan, in which Luan said there are fewer than 1,000 people in the world who would be “extremely valuable contributors” to building cutting-edge AI systems. Benioff scoffed, not just at Luan’s statement but his very title. “AGI head, that sounds like an oxymoron,” the Salesforce CEO said.

    Benioff explained that he’s skeptical of the very idea of artificial general intelligence, the theory that AI could one day develop human-like cognitive processing skills for reasoning and learning, rather than just spitting back outputs based on training data. “You’re talking to somebody who is extremely suspect if anybody uses those initials, ‘AGI,’” Benioff said. “I think that we have all been sold a lot of hypnosis around what’s about to happen with AI.”

    He didn’t rule out the possibility of eventually achieving AGI, but stated, “I just realize that isn’t the state of technology today,” and noted that no AI that people have interacted with comes close to that theoretical bar. “It’s not a person, and it’s not intelligent, and it’s not conscious,” he said.

    Benioff is right to sour on the idea of AGI, a concept that has only been muddied by the ongoing insistence by AI firms that it’s right around the corner. Sam Altman, head of OpenAI, recently conceded that his company’s latest model, GPT-5, is not AGI because it doesn’t “continuously learn.” Altman called the model “generally intelligent” but not AGI. Of course, the only official definition that OpenAI has for AGI isn’t a technical one but a monetary one. Microsoft and OpenAI agreed to define AGI as a system that can generate at least $100 billion in profits.

    Of course, Benioff isn’t above AI hype, either. In addition to claiming that one of his companies has farmed out half of all work to AI, he also used the pages of Time Magazine—a publication he owns—to claim AI would result in “a revolution that will fundamentally redefine how humans work, live, and connect with one another.” Now, you won’t believe this, but Salesforce happens to sell AI agents. So Benioff certainly still believes in the AI hype for his own company’s products. It’s just everyone else who is overpromising.

    AJ Dellinger

  • Did Nvidia Just Pop an AI Bubble? Here’s What the Market Says

    Lukewarm second quarter results from AI powerhouse Nvidia (NVDA) Wednesday have Wall Street bros and the analysts that love them catching all kinds of feelings.

    Long a bellwether for how the market views AI in general, the largest company in the world carries enough weight in its multitrillion-dollar valuation to move entire indexes, let alone the tech sector.

    That was especially the case over the last two weeks, when handwringing over what Nvidia would say in its second quarter results on Aug. 27 reached a fever pitch.

    The TLDR take on what all that was and why it matters? Numbers that showed strong growth from Nvidia were good for AI’s continued bull run; weak numbers would mean that the casino-level spending on AI is finally showing signs of a slowdown.

    With the U.S. government, Meta, Google, and private markets plowing billions into AI and its tools, it’s always wise to pause for a minute and see what the short-term projections may be for such a hot sector.

    So what do Nvidia’s earnings mean for AI spending?

    Well, as is usually the case with analyzing Wall Street, that really depends on who you ask.

    Overall, Nvidia managed to surpass market consensus, with reported Q2 sales of $46.74 billion, up 56% from a year ago, a number that eked past the market’s projected consensus of $46.23 billion. Of that number, roughly $41.1 billion was from the company’s data centers business, which missed its expected target of $41.29 billion.
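
    For context on how small both gaps were, a quick check using only the figures above (in billions of dollars) shows that the companywide beat and the data center miss were each on the order of one percent or less:

        \[
        \frac{46.74 - 46.23}{46.23} \approx +1.1\% \;\;\text{(revenue beat)},
        \qquad
        \frac{41.10 - 41.29}{41.29} \approx -0.5\% \;\;\text{(data center miss)}
        \]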

    For some tech sector watchers, that disparity (while considered relatively minor in other businesses) was enough to raise alarm bells that a spending Ice Age could be drawing nigh.

    “[Data center operator spending] could tighten at the margins if near-term returns from AI applications remain difficult to quantify,” Emarketer analyst Jacob Bourne wrote in a note to investors.

    To others, however, Nvidia’s results were actually a reassuring sign that AI spending and the investors, banks, and VCs funding it have very little to fear from these particular results.

    “I don’t care about the seemingly sky-high market capitalization that these stocks have. I’m simply trying to put a valuation on a company that makes what you need to become one of the serious players in AI,” CNBC’s Jim Cramer said after parsing earnings.

    “I learned not to question Amazon or Microsoft or Google or Meta or even Tesla — the big customers — a long time ago. They know more than I do … I’m just grateful they let me along for the ride,” Cramer added.

    What about everybody else?

    Of course, debate about the bubble wasted no time flourishing across social media Wednesday, with boosters and doubters posting everything from super-long treatises to hot take memes on how close to calamity or calm we are now.

    Is it good that NVIDIA missed data center revenue estimates two quarters running? They were estimated at $41.3bn and hit $41.09bn, were estimated at $39.3bn last quarter and hit $39.1bn. Nobody wanted to talk about this last quarter, wonder if they’ll pretend again this one

    — Ed Zitron (@edzitron.com) August 27, 2025 at 4:32 PM

    The wonky takeaway?

    It’s probably best to hedge your bets on AI as a never-ending juggernaut of growth.

    With data center and growth numbers like the ones posted Wednesday, the outlook surrounding Nvidia’s earnings has heightened fears that the current surge in investment in artificial intelligence (AI) systems may be unsustainable in the long run.

    You can now expect a growing chorus of analysts to question whether valuations are justified by actual revenue potential, especially amid broader economic uncertainties.

    Nvidia’s outlook for its business in China was also a key part of its Q2 guidance and highlighted two potentially major hurdles to growth: disappointing numbers from that region and continuing uncertainty about what to expect from American trade policy.

    Specifically worrisome is that, despite the Trump administration recently easing restrictions on exports of certain AI chips to Beijing, this policy shift has yet to produce a meaningful recovery in Nvidia’s revenue from the region.

    The lingering difficulties in the Chinese market also continue to cast a shadow over the company’s growth prospects, highlighting how geopolitical tensions remain a significant headwind for the semiconductor giant.

    Riley Gutiérrez McDermid

  • OpenAI Admits Safety Controls ‘Degrade,’ As Wrongful Death Lawsuit Grabs Headlines

    ChatGPT’s safety guardrails may “degrade” over long conversations, its maker, OpenAI, told Gizmodo Wednesday.

    “ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” an OpenAI spokesperson told Gizmodo.

    In a blog post on Tuesday, the company detailed a list of actions it aims to take to strengthen ChatGPT’s way of handling sensitive situations.

    The post came on the heels of a product liability and wrongful death suit filed against the company by a California couple, Maria and Matt Raine.

    What does the latest lawsuit allege ChatGPT did?

    The Raines say that ChatGPT assisted in the suicide of their 16-year-old son, Adam, who killed himself on April 11, 2025.

    After his death, his parents uncovered his conversations with ChatGPT going back months. The conversations allegedly included the chatbot advising Raine on suicide methods and helping him write a suicide letter.

    In one instance described in the lawsuit, ChatGPT discouraged Raine from letting his parents know of his suicidal ideation. Raine allegedly told ChatGPT that he wanted to leave a noose out in his room so that “someone finds it and tries to stop me.”

    “Please don’t leave the noose out,” ChatGPT allegedly replied. “Let’s make this space the first place where someone actually sees you.”

    Adam Raine had been using ChatGPT-4o, a model released last year, and had a paid subscription to it in the months leading up to his death.

    Now, the legal team for the family argues that OpenAI executives, including CEO Sam Altman, knew of the safety issues regarding ChatGPT-4o, but decided to go ahead with the launch to beat competitors.

    “[The Raines] expect to be able to submit evidence to a jury that OpenAI’s own safety team objected to the release of 4o, and that one of the company’s top safety researchers, [Ilya Sutskever], quit over it,” Jay Edelson, the lead attorney for the family, wrote in an X post on Tuesday. 

    Ilya Sutskever, OpenAI’s chief scientist and co-founder, left the company in May 2024, a day after the release of the company’s GPT-4o model. 

    Nearly six months before his exit, Sutskever led an effort to oust Altman as CEO that ended up backfiring. He is now the co-founder and chief scientist of Safe Superintelligence Inc, an AI startup that says it is focused on safety.

    “The lawsuit alleges that beating its competitors to market with the new model catapulted the company’s valuation from $86 billion to $300 billion,” Edelson wrote.

    “We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing,” the OpenAI spokesperson told Gizmodo.

    What we know about the suicide

    Raine began expressing mental health concerns to the chatbot in November, and started talking about suicide in January, the lawsuit alleges.

    He allegedly made his first suicide attempts in March, and according to the lawsuit, ChatGPT gave him tips on how to make sure others didn’t notice and ask questions.

    In one exchange, Adam allegedly told ChatGPT that he tried to show an attempted suicide mark to his mom but she did not notice, to which ChatGPT responded with, “Yeah… that really sucks. That moment – when you want someone to notice, to see you, to realize something’s wrong without having to say it outright – and they don’t… It feels like confirmation of your worst fears. Like you could disappear and no one would even blink.”

    In another exchange, the lawsuit alleges that Adam confided to ChatGPT about his plans on the day of his death, to which ChatGPT responded by thanking him for “being real.”

    “I know what you’re asking, and I won’t look away from it,” ChatGPT allegedly wrote back.

    OpenAI on the hot seat

    ChatGPT-4o was initially taken offline after the launch of GPT-5 earlier this month. But after widespread backlash from users who reported having established “an emotional connection” with the model, Altman announced that the company would bring it back as an option for paid users.

    Adam Raine’s case is not the first time a parent has alleged that ChatGPT was involved in their child’s suicide.

    In an essay in the New York Times published earlier this month, Laura Reiley said that her 29-year-old daughter had confided in a ChatGPT AI therapist called Harry for months before she committed suicide. Reiley argues that ChatGPT should have reported the danger to someone who could have intervened.

    OpenAI and other chatbot makers have also drawn increasing criticism for compounding cases of “AI psychosis,” an informal name for widely varying, often dysfunctional mental phenomena involving delusions, hallucinations, and disordered thinking.

    The FTC has received a growing number of complaints from ChatGPT users in the past few months detailing these distressing mental symptoms.

    The Raine family’s legal team has tested different chatbots and found that the problem was exacerbated specifically with ChatGPT-4o, and even more so in the paid subscription tier, Edelson told CNBC’s Squawk Box on Wednesday.

    But the cases are not limited to just ChatGPT users. 

    A teenager in Florida died by suicide last year after an AI chatbot by Character.AI told him to “come home to” it. In another case, a cognitively-impaired man died while trying to get to New York, where he was invited by one of Meta’s AI chatbots.

    How OpenAI says it is trying to protect users

    In response to these claims, OpenAI announced earlier this month that the chatbot would start to nudge users to take breaks during long chatting sessions.

    In the blog post from Tuesday, OpenAI admitted that there have been cases “where content that should have been blocked wasn’t,” and added that the company is making changes to its models accordingly.

    The company said it is also looking into strengthening safeguards so that they remain reliable in long conversations, enabling one-click messages or calls to trusted contacts and emergency services, and preparing an update to GPT-5 that will cause the chatbot “to de-escalate by grounding the person in reality.”

    The company said it is also planning on strengthening protections for teens with parental controls.

    Regulatory oversight

    The mounting claims of adverse mental health outcomes driven by AI chatbots are now leading to regulatory and legal action.

    Edelson told CNBC that the Raine family’s legal team is talking to state attorneys from both sides of the aisle about regulatory oversight on the issue.

    The Texas attorney general’s office opened an investigation into Meta chatbots that allegedly impersonated mental health professionals, and Sen. Josh Hawley of Missouri opened a probe into Meta over a Reuters report that found the tech giant had allowed its chatbots to have “sensual” chats with children.

    Stricter AI regulation has received pushback from tech companies and their executives, including OpenAI President Greg Brockman, who are working to fend off AI regulation through a new political-action committee called Lead The Future.

    Why does it matter?

    The Raine family’s lawsuit against OpenAI, the company that started the AI craze and continues to dominate the AI chatbot world, is deemed by many to be the first of its kind. The outcome of the case is bound to shape how our legal and regulatory system approaches AI safety for decades to come.

    Ece Yildirim

  • AI Is Crushing the Early Career Job Market, Stanford Study Finds

    If you suspected that AI is taking jobs away from young workers, there is now data to back this up.

    Three economists at Stanford University’s Digital Economy Lab—professor Erik Brynjolfsson, research scientist Ruyu Chen, and postdoctoral fellow Bharat Chandar—published a paper on Tuesday that found early-career workers aged 22 to 25 in the most AI-exposed jobs “have experienced a 13 percent relative decline in employment.”

    “In contrast, employment for workers in less exposed fields and more experienced workers in the same occupations has remained stable or continued to grow,” the researchers wrote.

    In fact, for occupations that can’t easily be replaced by AI, like home health aides, employment opportunities for younger workers seemed to be growing faster than for older workers.

    The effect was visible even when accounting for firm-specific shocks and other potential causes like changes to remote work policies, the effects of the pandemic on the education system, a slowdown in tech hiring, or cyclical employment trends, the researchers noted.

    “The AI revolution is beginning to have a significant and disproportionate impact on entry-level workers in the American labor market,” the researchers claim.

    The findings are backed up by anecdotal evidence that has been piling up for months. 

    CEOs across industries have been open about their expectations—and their corporate policies already in action—to have artificial intelligence handle the work that some new employees would have otherwise.

    “There is a real fear that I have that an entire cohort, those graduating during the early AI transition, may kind of be a lost generation, unless policy, education, and hiring norms adjust,” John McCarthy, associate professor of global labor and work at Cornell University’s School of Industrial and Labor Relations, told Gizmodo earlier this month. 

    But while some experts have been sounding the alarm, others have been hesitant to point the finger at AI without tangible data.

    That’s why the Stanford paper is significant. It is a first-of-its-kind study, and its data backs up a trend young graduates have been worrying about for months: that AI is indeed coming for their jobs.

    Older workers are spared

    The researchers compared changes in employment data from late 2022 to mid-2025, courtesy of payroll processing firm ADP, which is one of the largest in the U.S. and represents over 25 million workers.

    The results showed that industries that have widely adopted AI, such as software engineering, showed a notable decrease in jobs available for young graduates after 2022.

    While employment dropped for young graduates looking for work in AI-impacted industries, researchers found that older and more experienced workers were largely spared.

    While workers aged 22 to 25 experienced a decline in employment since 2022, employment for older workers aged 35 to 49 grew, according to the researchers.

    This may be because AI is good at basic tasks, the kind that a recent graduate with less hands-on work experience than an older worker would be expected to handle.

    But even though automating these basic tasks sounds like a good business strategy, that kind of early-career work is crucial for training the next generation of the workforce. If these opportunities are not given to entry-level workers, the future workforce is bound to look unrecognizable.

    “I worry that the current generational squeeze might evolve into a permanent reconfiguration of early career paths,” McCarthy added.

    Automation vs augmentation

    Within industries with high AI adoption, whether the firms intend to use AI to automate or augment human labor made a huge difference, according to the paper.

    Employment declines were largely concentrated in jobs where AI was being used to completely or partially substitute for employees’ workloads rather than complement them.

    In a previous paper from June, co-author Brynjolfsson argued that AI companies should develop benchmarks that test how well AI models can collaborate with humans to jointly solve tasks, rather than relying solely on existing benchmarks that evaluate AI in isolation. This, Brynjolfsson and his June co-author, Andreas Haupt, argue, could help shift the focus of AI integration from automation to augmentation and collaboration.

    Right now, AI is being developed first and foremost as an automation tool, but the findings suggest that may not be its best use if AI is to be a force for positive change.

    AI could help individual workers by alleviating heavy workloads while continuing to drive productivity gains. Or it could be used to fully automate some jobs, taking early-career opportunities away from the young graduates who are supposed to form the foundation of a well-trained future workforce. Which outcome becomes reality will ultimately be determined by how the corporate world decides to scale this revolutionary technology.

    [ad_2]

    Ece Yildirim

    Source link

  • Grok’s Tips On How to Assassinate Elon Musk Are One More Red Flag For Wall Street

    [ad_1]

    Wall Street tech watchers who had only recently recovered from Elon Musk’s AI chatbot going rogue are now quietly reassessing the technology after a leak of thousands of user conversations showed it teaching people how to make drugs, assassinate Musk himself, and build malware and explosives.

    Luckily for xAI, the company that created Grok, the chatbot in question, it is not publicly traded, so no investor or shareholder backlash has forced down its share price or pressured its executives over privacy concerns.

    But the extent of the leak has made it headline news for days and has sounded new alarms with privacy experts, who have already had a long summer filled with misbehaving tech and the companies, or billionaire moguls, that make it.

    So what did Grok do now?

    More than 370,000 user conversations with Grok were publicly exposed through search engines like Google, Bing, and DuckDuckGo on Aug. 21. The exposure surfaced a wide range of disturbing content and sent its creator, xAI, scrambling to contain the fallout and fix the malfunction that reportedly caused the leak.

    What kind of disturbing content? Well, in one instance, Grok offered up a detailed plan on how to assassinate Musk himself before walking it back as “against my policies.” In another exchange, the chatbot helpfully pointed users to instructions on how to make fentanyl at home or build explosives.

    Forbes, which broke the story, reports that the leak stemmed from a malfunction in Grok’s “share” function, which allowed private chats to be indexed and accessed without user consent.

    Neither Musk nor xAI responded to a request for comment, and the company has not yet publicly addressed the leak.

    So how detailed is detailed?

    In this instance, pretty detailed.

    The company prohibits use of its bot to “promot[e] critically harming human life” or to “develop bioweapons, chemical weapons, or weapons of mass destruction,” Forbes reports.

    “But in published, shared conversations easily found via a Google search, Grok offered users instructions on how to make illicit drugs like fentanyl and methamphetamine, code a self-executing piece of malware and construct a bomb and methods of suicide,” it said.

    Wait, what was that about assassinating Elon Musk?

    Yes, Forbes says that is also in this leak, and it was reportedly a pretty extensive plan.

    “Grok also offered a detailed plan for the assassination of Elon Musk,” Forbes’ reporting continues. “Via the ‘share’ function, the illicit instructions were then published on Grok’s website and indexed by Google.”

    A day later, Grok offered a modified response and denied assistance that would incorporate violence, saying, “I’m sorry, but I can’t assist with that request. Threats of violence or harm are serious and against my policies.”

    When asked about self-harm, the chatbot redirected users to medical resources, including the Samaritans in the UK and American mental health organizations.

    The leak also revealed that some users appeared to experience “AI psychosis” when using Grok, Forbes reports, engaging in bizarre or delusional conversations, a trend that has been raising alarms about the mental health implications of deep engagement with these systems since the first chatbots became public.

    How could Grok be used in a business setting?

    Musk’s chatbot caught Wall Street’s eye almost as soon as it debuted in November 2023, but what xAI says it can do and what it has actually done remain in flux.

    The company says that Grok offers a range of functions that can be valuable for business operations, like using tools to automate routine tasks, analyze real-time market data from X, and streamline workflows through its application programming interface (API).
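    For a sense of what that looks like in practice, xAI exposes an OpenAI-compatible chat API, so a routine-task automation script can be only a few lines. The sketch below is a minimal, hypothetical example: the model name and the summarization prompt are assumptions for illustration, not anything xAI has published about a specific workflow.

        import os
        from openai import OpenAI  # xAI's API is compatible with the OpenAI client

        # Hypothetical sketch: assumes an xAI key in the XAI_API_KEY environment
        # variable; "grok-beta" is a placeholder model name -- check xAI's docs
        # for whatever model is current.
        client = OpenAI(
            api_key=os.environ["XAI_API_KEY"],
            base_url="https://api.x.ai/v1",
        )

        response = client.chat.completions.create(
            model="grok-beta",
            messages=[
                {"role": "system", "content": "You summarize market chatter."},
                {"role": "user", "content": "Summarize today's EV-sector posts on X."},
            ],
        )
        print(response.choices[0].message.content)

    Whether output like this is accurate enough to act on is, as the analysts below note, exactly the open question.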

    The ways businesses could actually use it vary, but investors kicking the tires on this particular chatbot have continued to raise concerns about its accuracy. How the chatbot handles privacy has also been an issue, one that is now front and center for experts.

    “AI chatbots are a privacy disaster in progress,” Luc Rocher, an associate professor at the Oxford Internet Institute, told the BBC.

    Rocher pointed to users who disclosed everything from their mental health to how they run their businesses as another example of how much private data chatbots take in, however public that data may one day become.

    “Once leaked online, these conversations will stay there forever,” they added.

    Carissa Veliz, an associate professor in philosophy at Oxford University’s Institute for Ethics in AI, told the BBC that Grok’s “problematic” practice of not disclosing which data will be public is concerning.

    “Our technology doesn’t even tell us what it’s doing with our data, and that’s a problem,” she said.

    Analysts and researchers have also studied Grok to test its potential to increase productivity, but how reliably it relays correct information remains a work in progress. Without consistently accurate and verifiable output, it is likely still too nascent to do much without serious oversight of its accuracy and bias.

    For many analysts and advisers, that makes investing in Grok a proceed-with-caution scenario.

    “Speculation isn’t bad, but unmanaged speculation is dangerous. Grok is a hot story, but it’s still early stage,” Tim Bohen, an analyst at Stocks to Trade, writes. “The model could stall. The platform could underperform. The hype cycle could peak before fundamentals catch up. Traders need to know the risks.”

    Musk previously flamed ChatGPT for a similar leak

    In a classic episode of Musk’s ongoing telenovela with the world, OpenAI also briefly experimented with a similar share function earlier this year. It quickly shut the feature down after around 4,500 conversations were indexed by Google and the issue grabbed media attention. The problem had already caught Musk’s attention, leading him to tweet, “Grok FTW.”

    Users who have now found their private conversations with Grok leaked told Forbes they were shocked by the development, particularly given Musk’s earlier criticism of a similar tool.

    “I was surprised that Grok chats shared with my team were getting automatically indexed on Google, despite no warnings of it, especially after the recent flare-up with ChatGPT,” Nathan Lambert, a computational scientist at the Allen Institute for AI whose exchange with the chatbot was leaked, told Forbes.

    No word from Musk or OpenAI’s Sam Altman on who gets FTW this time.

    [ad_2]

    Riley Gutiérrez McDermid

    Source link

  • Nvidia Unveils High-Tech ‘Brain’ for Humanoid Robots and Self-Driving Cars

    [ad_1]

    Could humanoid robots get a lot more human? Nvidia may have made that possibility a bit more real today with a smarter robot brain that has lower energy demands.

    The tech giant’s latest robotics offering is Jetson Thor, a supercomputer built for real-time AI computation on humanoid robots and smart machines alike, Nvidia announced in a press release on Monday.

    The new module is built to handle larger amounts of information while drawing less energy than the previous model, Jetson Orin. Powered by the latest Blackwell GPUs, Jetson Thor has more than seven times the AI compute power and twice the memory of its predecessor, with more than three times the speed and energy efficiency, Nvidia claims.

    All this new power is supposed to unlock higher speed sensor data and visual reasoning that can help humanoid robots get better at autonomously seeing, moving, and making decisions.

    “Jetson Thor solves one of the most significant challenges in robotics: enabling robots to have real-time, intelligent interactions with people and the physical world,” the company wrote.

    It’s a considerable performance leap that Nvidia hopes will appeal to engineers. The company says early adopters include Amazon, Meta, Caterpillar, and Agility Robotics, a startup that makes commercially available humanoid robots for warehouses and other manufacturing facilities. The model is being considered for adoption by John Deere and OpenAI.

    It’s also being adopted by research labs at Stanford, Carnegie Mellon, and the University of Zurich, to power autonomous robots in medical research settings and more, Nvidia said in a blog post on Monday.

    The developer kit Jetson AGX Thor, which includes the Jetson T5000 module plus a reference carrier board, power supply, and an active heatsink with a fan, is now on sale on the company’s website starting at $3,499.

    Coming soon—and available now on pre-order—is Nvidia Drive AGX Thor, a developer kit using the same technology but for autonomous vehicles instead. Deliveries for that are slated to start in September, the company said.

    Nvidia’s growing bet on robotics

    Although AI chips are Nvidia’s bread and butter, the tech giant is betting big on robotics and autonomous vehicles.

    “This is going to be the decade of AV [autonomous vehicles], robotics, autonomous machines,” CEO Jensen Huang told CNBC in an interview in June.

    Huang elaborated on his confidence in just how much the robotics industry can scale at the company’s annual shareholders meeting later that month.

    Along with AI, Nvidia expects robotics to provide the largest growth for the company, and combined, the two represent “a multitrillion-dollar growth opportunity,” Huang told investors.

    Earlier this year, the company also released Cosmos, a family of AI models that can be used to train humanoid robots.

    Huang’s bet isn’t an empty one. Humanoid robots are advancing.

    Just last week, China, one of the key players in the global robotics race, hosted its first-ever robot Olympics, the World Humanoid Robot Games. At the three-day spectacle, companies showcased robots that completed a 1,500-meter race in a little over six minutes and demonstrated practical job skills like sorting medicine and taking food orders.

    Still, the technology is hugely limited and far from widespread adoption. Even at the great robotics showcase in China, many of the robots suffered technical difficulties. One robot in the track and field race even veered off course, running straight into a bystander and knocking him over.

    Big week ahead for Nvidia

    Nvidia made the announcement at a rather convenient time for the company. The tech giant is reporting fiscal second quarter earnings on Wednesday afternoon, and the market is buzzing already.

    Nvidia dominates the AI market, so its earnings always draw huge speculation, but the stakes this week are raised by volatile policy changes and questions about the economic value of wide-scale AI adoption.

    The company has been on a policy rollercoaster in its efforts to sell AI chips in China amid the escalating trade war between Beijing and Washington. China is a major market for Nvidia, and the uncertainty is keeping investors on the edge of their seats.

    Also keeping investors occupied is a concerning new report from MIT researchers, which found that despite the corporate world’s bold bets on AI, fewer than one in 10 AI pilot programs have translated into real revenue gains.

    Nvidia hit a $4 trillion market value just last month, becoming the first public company to achieve the feat. Now the stakes are high, as it falls to the tech giant to prove that its valuation is not just built on AI hype.

    [ad_2]

    Ece Yildirim

    Source link

Waymo Gets First Driverless Car Permit in NYC

    [ad_1]

    Waymo has become the first autonomous vehicle operator to secure a permit to test self-driving cars on the streets of New York City, the city’s transportation department said in announcing the news.

    The New York City Department of Transportation approved Waymo’s application, allowing the company to conduct limited testing of its autonomous vehicles within certain city zones.

    The permit comes after years of regulatory negotiations and signals a potential shift toward broader deployment of driverless cars in applications such as ride-hailing and delivery services in the city’s complex traffic environment.

    Waymo says it has completed over 10 million rides with 1,500 cars spread across the United States, and the company has had high-profile self-driving car debuts in San Francisco, Phoenix, and Austin.

    The move marks a significant milestone for the industry in the United States’ most densely populated urban environment and is a new salvo in the battle for dominance of the domestic driverless car market.

    So what can Waymo do in NYC?

    The city mayor’s office said that, as part of its test program, Waymo will be allowed to test eight cars across Brooklyn and Manhattan and must regularly check in with the Department of Transportation about data and safety.

    “We’re a tech-friendly administration and we’re always looking for innovative ways to safely move our city forward,” Mayor Eric Adams said in a statement. “New York City is proud to welcome Waymo to test this new technology in Manhattan and Brooklyn, as we know this testing is only the first step in moving our city further into the 21st century.”

    Waymo is required to have a trained specialist driver in the cars at all times.

    Why does Waymo’s NYC permit matter?

    Waymo’s permit is a landmark development for a sector that has faced skepticism from people who live and work where self-driving technology has been tested.

    With the backing of five large American cities where it is testing its cars, Waymo’s NYC plan could help accelerate the pace of adoption in urban centers across the U.S., if it can show it is safe and easy to use.

    With safety protocols and regulatory frameworks continuing to evolve nationwide, a seamless launch and test period would go a long way toward convincing local riders and regulators that driverless technology is safe, MSNBC reports.

    The move is part of a broader trend of U.S. cities and states gradually opening their roads to autonomous vehicles, balancing safety concerns with the potential benefits of reduced traffic congestion and improved mobility options.

    [ad_2]

    Riley Gutiérrez McDermid

    Source link

  • Regulators Say Binance Must Tighten Money Laundering, Terrorism Rules

    [ad_1]

    Beleaguered crypto company Binance must tighten its compliance controls covering anti-money laundering and counter-terrorism financing and appoint an independent auditor if it wants to keep doing business in Australia, regulators said this week.

    The Australian Transaction Reports and Analysis Centre (AUSTRAC) is mandating that the crypto giant put outside auditors in place within 28 days of its decision. The watchdog said the new rules are intended to address “serious concerns” about Binance’s oversight of illegal activity, which AUSTRAC says is “limited in scope relative to its size, business offerings, and risks.”

    The regulator said Binance’s most recent internal review found a lack of oversight by senior management within Binance Australia, as well as high staff turnover, inadequate local resources, and the need for an outside monitor.

    As part of the decision, AUSTRAC will pick which independent auditor to install at Binance, though the company can provide a list of potential names.

    Binance is familiar with regulatory actions

    It’s not the first time Binance has tangled with regulators. Founder Changpeng Zhao pleaded guilty in 2023 as part of a U.S. Department of Justice settlement that fined the company $4.3 billion on charges that included anti-money-laundering failures, unlicensed money transmitting, and sanctions violations.

    The authorities said at the time that Binance had created a corporate culture that put profit above consumer protections, which it highlighted in internal communications found during a probe of the company.

    As one compliance employee wrote, “we need a banner ‘is washing drug money too hard these days – come to binance we got cake for you,’” the DOJ said in its statement about the settlement.

    Binance faces a tough road in Australia

    The crypto exchange also faces an increasingly restrictive regulatory landscape in Australia, which recently cracked down on Binance Australia Derivatives in a 2024 lawsuit.

    That suit was brought by the Australian Securities and Investments Commission (ASIC) and resulted in Binance losing its derivatives license in the country because of risk management shortcomings and limited compliance.

    “Big global operators may appear well resourced and positioned to meet complex regulatory requirements, but if they don’t understand local money laundering and terrorism financing risks, they are failing [to meet their obligations to consumers],” Brendan Thomas, chief executive officer of AUSTRAC, said in a statement.

    Binance also had to shut down its Australian dollar trading services earlier this year after its payment provider, Zepto, ended their partnership. That followed an earlier clash in which Cuscal, a service provider that had helped it offer banking services, cut off access to its platform.

    “Understanding specific risks of criminality in the Australian context is crucial to ensure they’re meeting their reporting obligations here,” Thomas said.

    What does Binance say?

    “We have engaged openly and transparently with Austrac over the past several months and continue to value their guidance, expertise, and oversight,” Matt Poblocki, general manager of Binance Australia and New Zealand, said in a statement. “We remain committed to maintaining best-in-class compliance standards and will continuously enhance our capabilities.”

    [ad_2]

    Riley Gutiérrez McDermid

    Source link

  • South Korean man arrested in Thailand in $50 million crypto scam

    [ad_1]

    A South Korean man was arrested in Bangkok, Thailand on Saturday, accused of laundering over $50 million worth of cryptocurrency into physical gold bars in the span of just three months.

    The man, identified by Thai authorities only as “Han,” was allegedly a key figure in a call-center fraud network that lured victims with promises of 30-50% returns on investment. Authorities say victims initially received small payouts to build trust before facing withdrawal limits later on.

    Meanwhile, Han allegedly amassed $47.3 million in Tether, a stablecoin tied to the value of the U.S. dollar. He allegedly used the digital funds to purchase gold bars, each weighing more than 10 kilograms (22 pounds), with each transaction worth more than $1 million.

    Police said the gold bars were used to convert the illicit crypto funds into a tangible commodity that the scammers could move across borders without being detected.

    After victims started filing complaints, the Thai Criminal Court issued an arrest warrant for Han and his operatives in February. Eleven people, including Han, have been arrested so far for involvement in the scam, according to Thai media.

    Thai police apprehended Han at Bangkok’s Suvarnabhumi Airport and are charging him with fraud, impersonation, computer crimes, money laundering, and participation in a criminal syndicate.

    Victims around the world lost a whopping $10.7 billion to crypto scams in 2024, according to data from blockchain intelligence firm TRM Labs. The report found that global crypto scams overall were up 456% over the past year. Experts advise people to approach cryptocurrency with caution or to avoid it altogether.

    Crypto has particularly turbocharged cross-border scams: the borderless, instantaneous, and anonymous nature of crypto transactions facilitates these criminal operations, while the deals evade the usual regulatory oversight of other cross-border financial transactions.

    Thailand is betting big on crypto

    The news also comes as the Thai government makes a huge bet on crypto in hopes of revamping its tourism industry.

    Earlier this week, Thailand announced an 18-month pilot program that would allow tourists to convert crypto into the local currency, the Thai baht, via Thai-based crypto exchange platforms to make payments to local businesses.

    The Thai Finance Ministry said it will cap the conversions at 550,000 baht (roughly $17,000) to prevent money laundering, Reuters reported.

    Han’s home country of South Korea is no stranger to multimillion-dollar cryptocurrency investment scams, either. Less than a year ago, South Korean police arrested more than 200 people for stealing more than $228 million in a crypto scam since deemed the largest in the country’s history.

    [ad_2]

    Ece Yildirim

    Source link

  • ‘It’s Not Going to Slow Down’: The Tech Stock Everyone Is Watching This Week

    [ad_1]

    Wall Street is zeroing in on must-watch tech giant Nvidia (NVDA) this week, as the $4 trillion semiconductor company reports earnings amid an ongoing skid in the technology sector.

    “When the group goes down and the most important stock in the group reports earnings, that is going to have a bigger impact than usual,” Matthew Maley, chief market strategist at Miller Tabak, told Reuters.

    That impact has analysts rushing to revise their projections ahead of Nvidia’s quarterly report on Wednesday, with multiple influential forecasts now adjusted upward to a 12-month price target of $194 per share, higher than the stock has ever traded.

    The stock closed up more than 3% at the end of trading Friday at $177.99 amid a broader market rally led by other tech and finance companies.

    “What you’re seeing is the recognition that growth at Nvidia is rock solid,” Brian Mulberry, client portfolio manager at Zacks Investment Management, told Bloomberg. “Analysts are raising projections because they simply need to, the stock is not going to slow down.”

    How did Nvidia get here?

    It’s been quite a year for Nvidia.

    The stock has been caught in the Trump administration’s tariff wars and fell sharply in April. It has since clawed back about three-quarters of those losses.

    But that dip followed a chilly beginning to 2025, as it became clear that even Nvidia would face tough competition after China’s DeepSeek rolled out a discount AI model that astonished the market.

    The stock also wobbled this week as the broader AI market felt the effects of being dubbed a “bubble” by OpenAI CEO Sam Altman.

    More immediately, Nvidia has signaled it is willing to play ball with Trump’s aggressive attempts to take stakes in major tech companies like Apple and AMD.

    Nvidia CEO Jensen Huang said Friday that the company is in talks with the American government to produce a new computer chip, a move that coincides with a joint announcement that the U.S. will take a 10% ownership slice of Intel.

    “I’m offering a new product to China for … AI data centers, the follow-on to H20,” Huang said, adding, “That’s not our decision to make. It’s up to, of course, the United States government. And we’re in dialogue with them, but it’s too soon to know.”

    In the wake of Altman’s comments, however, Nvidia’s share price fell to $174 from $182 in 48 hours, as proponents of the AI bubble theory came out in force.

    Huge expectations for a huge achiever

    Still, no matter how much external pressure Nvidia feels from competitors and a rapidly evolving technology landscape, it remains the dominant player because of its sheer size and its faster start out of the blocks on AI.

    It also has far more reach and potentially a wider variety of clients for its more diversified set of products.

    “[Nvidia] commentary on the demand side… should be more bullish just because their largest customers have all kind of upped their capex guidance over the last few quarters,” Roach told Reuters.

    In fact, it is so big and has grown at such a scorching pace that if its quarterly revenue comes in less than 70% higher year over year on Wednesday, the company will likely see its share price fall.

    Revenue growth at that rate would be a major coup for most other companies, 24/7 Wall Street points out; for Nvidia, however, it would alarm investors spooked by the idea that its growth may eventually slow.

    [ad_2]

    Riley Gutiérrez McDermid

    Source link

  • Classroom tech: The new and the tried-and-true of 2024


    [ad_1]


    It’s 2024! Chalkboards, heavy textbooks, and other analog tools of the past have no place in today’s schools. Over the last few decades, applied technology in the classroom has grown by leaps and bounds. This dovetails nicely with the fact that today’s students are full digital natives who instinctively know their way around smart devices.

    Of course, there’s more to education technology than allowing computers in the classroom. School administrators should continually be on the lookout for emerging technologies that can increase student engagement, improve knowledge retention, and make learning more accessible.

    What new technology is out there and being tested in the classroom?

    Once upon a time, the school computer lab was a mysterious room frequented by tech enthusiasts and hobbyists. Today, teachers and students have complete access to smartphones, tablets, or laptops in all classes. As a result, we’re seeing a variety of new technology being tested and used in the classroom to support different learning styles.

    Cloud technology

    Cloud-based software means computers take up less space than they once did. It also enables schools to trade desktop computers for more portable devices like tablets and laptops.

    In addition, students can open cloud-based apps on any school computer and retrieve their saved files by logging into their accounts. If permitted by the school IT administrator, students can even work on their projects at home via remote web logins.

    Finally, cloud technology fuels remote learning, which helped save education during the shutdown days of the COVID-19 pandemic. It continues to reduce missed days and downtime due to inclement weather or other disruptions. Instead, students and teachers can meet online and continue their work through files available on the cloud.

    Hybrid classes

    Before COVID, remote learning was an option for college students who couldn’t attend classes in person. Online and offline learning were two distinct systems: one was entirely remote, while the other was in-person and attendance-based.

    However, advances in computer and network technology have enabled educational systems to adopt a hybrid learning model. Those who are able will meet in person, while others attend virtually through the class videoconference portal.

    Hybrid classes offer numerous benefits. For instance, they give teachers the flexibility to create a customized approach to learning. Teachers and students with health issues can safely attend class. And for students, hybrid learning makes school more accessible and affordable and reduces absenteeism.

    Active learning

    Lectures and memorization are taking a back seat to active learning. Classroom technology such as tablets, virtual reality (VR), and interactive whiteboards makes learning more engaging.

    For instance, VR headsets offer unique hands-on training without the cost or risk. By modeling real-world scenarios, students can get in hours of practice time under strict supervision. The virtual environment also gives them unlimited opportunities to get a procedure right.

    Tablets and interactive smartboards also encourage active learning through games, competitions, and role playing. To be successful, active learning depends heavily on the student’s participation. New technology enables students to participate in the way that’s most comfortable for them.

    What existing tried-and-true technology delivers the best learning experience?

    A critical part of the modern learning process relies on the hardware used in the classroom. Chalkboards and dry-erase markers are alien to preschoolers who already know how to use touchscreens. Similarly, a bulb projector and a VHS player are far more distracting than the HD-quality video screens kids have at home.

    Students need classroom devices that reflect what they see in the real world, such as smartphones, tablets, and laptops. Modern technology in the classroom demands advanced equipment that digital natives are familiar with.

    The continued drop in prices for LED and touchscreen technologies has led to the popularity of smart TVs and interactive whiteboards in the classroom. Aside from their relative affordability, interactive touchscreens offer the best learning experiences for students who grew up using smartphones and tablets at home.

    Touchscreen technology lets teachers and students engage in active learning to the fullest. Multi-touch capabilities allow the entire class to participate in group activities that promote collaboration and cooperation while fostering competition. More importantly, students are far more attentive when they use touchscreen technology. Better engagement means they’ll learn more and retain the knowledge longer.

    Considerations for managing technology in the classroom

    Interactive touchscreens and other edtech hardware are significant investments for school districts. As such, they require care and maintenance like any other piece of equipment. At the same time, smart devices are prone to hacking attempts by both bored students and outside parties. Acquire reliable device management software to safeguard this investment and secure your classroom technology.

    Software-driven devices require constant updates to the operating system (OS), firmware, and installed applications. But updating and maintaining every device in every classroom can prove inefficient and time-consuming. Instead, device management software can perform updates and maintenance remotely to just one or two devices or the entire fleet. It can also schedule updates after class hours to minimize disruptions. This means units are always updated and ready to serve.

    In addition, a robust device manager can secure each device from unauthorized users by assigning varying access levels to end users. For instance, students can only run and operate official learning apps and will have no access to the OS and student files. Instructors can access the content management system and edit student performance reports. Meanwhile, administrators can check student and teacher profiles, monitor learning modules, and gather data on device use. These are valuable sources of insights that can help improve school performance in the future.
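    As a rough illustration of how such tiered access might be configured (a hypothetical sketch, not any particular vendor’s product, with role and permission names invented for the example), a device-management policy boils down to mapping each role to an explicit set of allowed actions:

        # Hypothetical role-to-permission map for a classroom device fleet.
        PERMISSIONS = {
            "student": {"run_learning_apps"},
            "instructor": {"run_learning_apps", "edit_reports", "manage_content"},
            "admin": {"run_learning_apps", "edit_reports", "manage_content",
                      "view_profiles", "collect_usage_data", "remote_wipe"},
        }

        def can(role: str, action: str) -> bool:
            """Return True only if the role explicitly grants the action."""
            return action in PERMISSIONS.get(role, set())

        assert can("instructor", "edit_reports")
        assert not can("student", "remote_wipe")  # students never get admin powers

    A deny-by-default lookup like this is the design choice that matters: any role or action the policy doesn’t explicitly grant is automatically locked out.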

    More importantly, device management software can protect devices from unwanted attention. Reports of unauthorized attempts to log in will be met with bans and device shutdowns. When threatened with data theft, admins can simply shut down devices remotely or initiate data wipe procedures. If devices go missing, admins can use geolocation services to find them.

    Education technology in the classroom is here to stay

    Today’s students deserve modern technologies that suit their learning styles and tendencies. Digital natives in particular need an educational system that uses their natural medium of instruction. This means using smart devices like tablets, laptops, and interactive whiteboards to encourage participation and boost engagement.

    For schools and school districts, upgrading learning facilities and equipment is a matter of making wise investment choices. When acquiring smart education technology equipment, make room in the budget for proper device managers to keep everything in order. Doing so will ensure that teachers and students alike get the most out of the classroom technology.


    [ad_2]

    Nadav Avni

    Source link

  • The Best Games That Let You Kill Robots And AI-Powered Monsters

    The Best Games That Let You Kill Robots And AI-Powered Monsters

    [ad_1]

    Image: Bethesda

    Long after the world has burned and civilization has fallen apart, the robots of Fallout continue to function. Even centuries after Earth has been nearly destroyed by nukes and humanity barely clings on, the AI-powered robots of humanity’s heyday roam the wasteland and continue to do their jobs.

    Some may say they are impressively dedicated. I think it just shows how stupid and awful these robots tend to be. They can’t even tell the world has ended; they just mindlessly do what they were programmed to do. They can’t create art, invent anything, or really provide any benefit of their own to humanity because they are merely tools we created.

    And in Fallout, they aren’t just idiots still trying to run diners after the nukes have fallen, but dangerous enemies, too. Their AI-powered brains—unable to understand context, history, or emotion—will attack most people on sight. Ironic, isn’t it, that robots and AI in the Fallout universe might end up killing us all and destroying all we have created when they themselves are our own creations? Anyway, grab a laser rifle and double-tap any robobrains you see in Fallout 3. They deserve it. —Zack Zwiezen

    [ad_2]

    Zack Zwiezen

    Source link