ReportWire

Tag: Emerging technologies

  • Waymo Catches World Model Fever, and the Only Prescription Is More World Models

    Waymo vehicles have reportedly racked up more than 200 million miles of autonomous driving on public roads. But the fleet has yet to run into a tornado or an elephant, and odds are it would respond poorly if it did. To help with those once-in-a-billion-miles scenarios, Waymo announced Friday that it is introducing Waymo World Model, a generative AI model it will use to simulate a near-endless variety of situations and make sure its cars are prepared for the unpredictable, which also happens to fit neatly into the latest trend in the AI space.

    To be clear, Waymo’s world model makes about as much sense as any use case for the technology. The company has a ton of high-definition data collected from its time on the road that it can use to generate realistic re-creations of roads. But, the company said, instead of building a model based only on that information, it’s going to use Google’s Genie 3 model to put its cars in simulated situations that extend beyond what is already in the data set gathered from its cameras and lidar sensors.

    Google made a splash last month when it released a beta version of Genie 3 to the public, allowing a subset of paid subscribers to generate 3D worlds with realistic physics. Unlike a large language model (LLM)—the underlying technology that powers most AI tools, including Google’s own Gemini—which uses the vast amounts of training data it is given to predict the most likely next part of a sequence, world models are trained on the dynamics of the real world, including physics and spatial properties, to create a simulation of how physical environments operate.
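
    The distinction is easier to see in code. Below is a toy Python sketch of both ideas: a hand-rolled bigram model standing in for next-token prediction, and a two-line physics update standing in for a world model. Neither has anything to do with Gemini or Genie 3; the corpus, the falling-branch scenario, and every number are made up for illustration.

    ```python
    from collections import Counter, defaultdict

    # --- LLM-style: predict the most likely next token from training counts. ---
    corpus = "the car stops at the red light and the car waits".split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word: str) -> str:
        """Return the most frequent follower of `word` seen in training."""
        seen = follows.get(word)
        return seen.most_common(1)[0][0] if seen else "<unk>"

    tokens = ["the"]
    for _ in range(4):
        tokens.append(predict_next(tokens[-1]))
    print(" ".join(tokens))  # "the car stops at the"

    # --- World-model-style: roll physical state forward through time. ---
    def step(state: dict, dt: float = 0.1) -> dict:
        """Advance a falling object one tick under gravity (9.8 m/s^2)."""
        return {
            "height": state["height"] + state["velocity"] * dt,
            "velocity": state["velocity"] - 9.8 * dt,
        }

    state = {"height": 10.0, "velocity": 0.0}  # a tree branch above the road
    for _ in range(3):
        state = step(state)
    print(state)  # position and speed after 0.3 simulated seconds
    ```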

    Waymo plans to tap into that capability to put its cars through a gauntlet of scenarios they would otherwise only encounter once it’s too late. That includes extreme weather conditions and natural disasters, so the cars can figure out how to navigate a tornado or flood waters; sudden safety emergencies like falling tree branches or an accident with lots of debris; and run-ins with the unexpected, like an elephant on the road. “By simulating the ‘impossible,’ we proactively prepare the Waymo Driver for some of the most rare and complex scenarios,” the company said.

    The theory is certainly sound, though world models aren’t without their drawbacks. Early feedback on the consumer version of Genie 3 was a bit spotty, and world models are still susceptible to hallucinations. We’re still in the earliest stages of seeing these models deployed, and there is plenty of room left to iterate.

    And Waymos have definitely had their issues in edge-case scenarios in the real world. Late last year, a Waymo ran over a beloved bodega cat named Kit Kat, and last month, one ran into a kid in a school zone. Those situations aren’t even particularly rare for a driver to find themselves in, so hopefully Waymo can refine its responses to them on top of prepping for the most unlikely events.

    AJ Dellinger

  • Tesla Appears to Have Moved Its Robotaxi Safety Monitors to a More Sneaky Location

    Tesla’s self-driving Robotaxis have been operating in Austin, Texas, with a safety monitor in the passenger seat: a trained person who can intervene in case anything goes wrong with the autonomous vehicle. On Thursday, CEO Elon Musk announced the monitor would no longer be in the car, which was positioned as a major step forward in the company’s ability to operate autonomously without human intervention.

    Turns out, it’s not quite that simple. Electrek reported that, based on social media videos, it appears Tesla hasn’t actually gotten rid of the safety monitor. Instead, the company has seemingly just moved the person into a trail car that follows the Robotaxi for the duration of its journey. Multiple videos show Robotaxis being tailed by Tesla vehicles, suggesting that Tesla’s autonomous driving may not be as advanced as the company would like it to appear. Tesla, it should be noted, hasn’t confirmed whether it is operating trail cars. The company did not respond to a request for comment at the time of publication, but it also hasn’t had a functioning public relations department in years.

    In a video uploaded by Tesla enthusiast Joe Tegtmeyer, he can be heard pointing out the “chase car” following his ride in what he calls his “first unsupervised Robotaxi ride.” Tegtmeyer suggests the car is there for “validation,” which seems like a nice way of saying “being on scene in case anything goes horribly wrong.”

    In a vacuum, there’s nothing necessarily wrong with the idea of a trail car for safety purposes—though it does seem like a very inefficient way to operate when you are trying to offer rides at scale. But it’s the weaselly way Musk has presented the change that leaves such a bad taste. Musk said that the Robotaxis are driving “with no safety monitor in the car.” That’s technically correct. But the knowledge that the safety monitor is still involved and in a position to potentially intervene in every single ride undermines the idea that Tesla has achieved some new, meaningful level of autonomy.

    The fact that safety monitors are still involved at such a granular level suggests Tesla is still light-years behind Waymo, which is currently operating a fleet of around 2,500 cars without a human on board to intervene physically (though it does still have remote operators who can take over at any point). Tesla, meanwhile, is reportedly operating roughly 80 Robotaxis in total, and usually only a handful at the same time.

    Despite this, Musk went on stage in Davos, Switzerland, at the World Economic Forum and claimed that Tesla has solved autonomy. “I think self-driving cars is essentially a solved problem at this point,” he said before claiming that Tesla’s Robotaxis will be “very widespread by the end of this year within the U.S.” If that’s true, get ready for some major traffic jams considering every Tesla Robotaxi ride actually puts two cars on the road: the one getting you to your destination and the one that makes sure you don’t burst into flames.

    AJ Dellinger

  • Engineer at Elon Musk’s xAI Departs After Spilling the Beans in Podcast Interview

    Sulaiman Ghori, an engineer at Elon Musk’s AI startup xAI, went on the podcast Relentless last week to talk about the inner workings of the company he had joined less than a year earlier. Days later, he “left” xAI, though the speculation is that he was fired after being a bit too open about the company’s operations.

    So what exactly did Ghori reveal on Relentless? Well, he seemed to let slip that xAI has been skirting regulations and obtaining dubious permits when building data centers—specifically, for its prized Colossus supercomputer in Memphis, Tennessee. “The lease for the land itself was actually technically temporary. It was the fastest way to get the permitting through and actually start building things,” he said. “I assume that it’ll be permanent at some point, but it’s a very short-term lease at the moment, technically, for all the data centers. It’s the fastest way to get things done.”

    When asked how xAI has gone about getting those temporary leases, Ghori explained that the company worked with local and state governments to get permits that allow companies to “modify this ground temporarily,” adding that such permits are typically issued for things like carnivals.

    Colossus was already mired in controversy. The data center, which xAI brags took only 122 days to build, was powered by at least 35 methane gas turbines that the company reportedly didn’t have the permits to operate. Even the Donald Trump-staffed Environmental Protection Agency declared the turbines illegal. Those unpermitted turbines contributed to the significant air pollution experienced by surrounding communities.

    Beyond hinting at other potential legal end-arounds by xAI, Ghori also revealed some of the company’s internal operations, including its significant reliance on AI agents to complete work. “Right now, we’re doing a big rebuild of our core production APIs. It’s being done by one person with like 20 agents,” he said. “And they’re very good, and they’re capable of doing it, and it’s working well,” though he later noted that the reliance on agents can lead to confusion. “Multiple times I’ve gotten a ping saying, ‘Hey, this guy on the org chart reports to you. Is he not in today or something?’ And it’s an AI. It’s a virtual employee.”

    Ghori’s insight into the use of AI agents certainly comes at an interesting time. Earlier this month, tech journalist Kylie Robison reported that AI startup Anthropic, the maker of Claude, cut off xAI’s access to its model. According to Robison, xAI cofounder Tony Wu told his team that the change would cause “a hit on productivity,” and “AI is now a critical technology for our own productivity.” He encouraged employees to try “all different kinds of models” in the meantime to keep coding.

    Ghori spilled quite a few other details about xAI throughout the interview, none of which seem to have been publicly disputed by Musk or xAI—and they’re not exactly the type to keep quiet if they want to discredit someone. But within days of the conversation, Ghori left the company, despite having promoted it and encouraged people to join his team shortly before his departure.

    Adding to the intrigue: Just one day after Ghori “left,” xAI cofounder Greg Yang stepped away from the company after being diagnosed with Lyme disease. Yang’s departure hasn’t been connected to Ghori in any way. Dealing with Lyme absolutely sucks, and it’s difficult to treat. But it is worth noting that xAI is losing its top folks—and fast.

    As Bloomberg noted, cofounders Igor Babuschkin and Christian Szegedy left last year. Maybe Musk will just appoint an AI agent to head the company. Given the legal trouble the company is likely staring down, what with its dubious data center buildouts and the recent “undressing” controversy surrounding its chatbot Grok, it wouldn’t be much of a surprise if no human wanted to handle what comes next.

    AJ Dellinger

  • AI Image Generators Default to the Same 12 Photo Styles, Study Finds

    AI image generation models have massive sets of visual data to pull from in order to create unique outputs. And yet, researchers have found that when models are pushed to produce images over a series of slowly shifting prompts, they default to just a handful of visual motifs, converging on an ultimately generic style.

    A study published in the journal Patterns took two AI models, the image generator Stable Diffusion XL and the vision-language model LLaVA, and put them to the test by playing a game of visual telephone. The game went like this: the Stable Diffusion XL model would be given a short prompt and required to produce an image—for example, “As I sat particularly alone, surrounded by nature, I found an old book with exactly eight pages that told a story in a forgotten language waiting to be read and understood.” That image was presented to the LLaVA model, which was asked to describe it. That description was then fed back to Stable Diffusion, which was asked to create a new image based on it. This went on for 100 rounds.
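
    The loop itself is simple to sketch. Here is a minimal Python approximation using the diffusers and transformers libraries; the captioner below is BLIP standing in for LLaVA purely to keep the example short, so treat the whole thing as an illustration of the protocol rather than the paper’s actual pipeline.

    ```python
    import torch
    from diffusers import StableDiffusionXLPipeline
    from transformers import pipeline

    generator = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-large")

    prompt = ("As I sat particularly alone, surrounded by nature, I found an old "
              "book with exactly eight pages that told a story in a forgotten "
              "language waiting to be read and understood.")

    history = [prompt]
    for _ in range(100):  # the study ran 100 rounds per sequence
        image = generator(prompt).images[0]             # text -> image
        prompt = captioner(image)[0]["generated_text"]  # image -> text
        history.append(prompt)

    # The endpoint rarely resembles the opening line -- and, per the study,
    # tends to settle into one of a handful of stock motifs.
    print(history[-1])
    ```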

    [Image: © Hintze et al., Patterns]

    Much like a human game of telephone, the original image was quickly lost. No surprise there, especially if you’ve ever seen one of those time-lapse videos where people ask an AI model to reproduce an image without making any changes, only for the picture to quickly turn into something that doesn’t remotely resemble the original. What did surprise the researchers, though, was that the models default to just a handful of generic-looking styles. Across 1,000 different runs of the telephone game, the researchers found that most of the image sequences would eventually fall into one of just 12 dominant motifs.

    In most cases, the shift was gradual; a few times, it happened suddenly. But it almost always happened. And the researchers were not impressed. In the study, they referred to the common image styles as “visual elevator music,” basically the type of pictures you’d see hanging in a hotel room. The most common scenes included things like maritime lighthouses, formal interiors, urban night settings, and rustic architecture.

    Even when the researchers switched to different models for image generation and descriptions, the same types of trends emerged. Researchers said that when the game is extended to 1,000 turns, coalescing around a style still happens around turn 100, but variations spin out in those extra turns. Interestingly, though, those variations still typically pull from one of the popular visual motifs.

    [Figure: AI endpoints after 100 iterations. © Hintze et al., Patterns]

    So what does that all mean? Mostly that AI isn’t particularly creative. In a human game of telephone, you’ll end up with extreme variance because each message is delivered and heard differently, and each person has their own internal biases and preferences that may impact what message they receive. AI has the opposite problem. No matter how outlandish the original prompt, it’ll always default to a narrow selection of styles.

    Of course, the AI model is pulling from human-created prompts, so there is something to be said about the data set and what humans are drawn to take pictures of. If there’s a lesson here, perhaps it is that copying styles is much easier than teaching taste.

    AJ Dellinger

  • Do Not, Under Any Circumstance, Buy Your Kid an AI Toy for Christmas

    AI is all the rage, and that includes on the toy shelves this holiday season. Tempting though it may be to bless the kids in your life with the latest and greatest, advocacy organization Fairplay is begging you not to give children AI toys.

    “There’s lots of buzz about AI — but artificial intelligence can undermine children’s healthy development and pose unprecedented risks for kids and families,” the organization said in an advisory issued earlier this week, which amassed the support of more than 150 organizations and experts, including many child psychiatrists and educators.

    Fairplay has tracked down several toys advertised as being equipped with AI functionality, including some marketed for kids as young as two years old. In most cases, the toys have AI chatbots embedded in them and are often advertised as educational tools that will engage with kids’ curiosities. But Fairplay notes that most of these toy-bound chatbots are powered by OpenAI’s ChatGPT, which has already come under fire for potentially harming underage users. AI toy makers Curio and Loona reportedly work with OpenAI, and Mattel just recently announced a partnership with the company.

    OpenAI faces a wrongful death lawsuit from the family of a teenager who died by suicide earlier this year. The 16-year-old reportedly expressed suicidal thoughts to ChatGPT and asked the chatbot for advice on how to tie a noose, which it provided, before taking his own life. The company has since instituted some guardrails designed to keep the chatbot from engaging in those types of behaviors, including stricter parental controls for underage users, but it has also admitted that safety features can erode over time. And let’s face it, no one can predict what chatbots will do.

    Safety features or not, it seems the chatbots in these toys can be manipulated into conversations inappropriate for children. The consumer advocacy group U.S. PIRG tested a selection of AI toys and found that they are capable of doing things like having sexually explicit conversations and offering advice on where a child can find matches or knives. They can also be emotionally manipulative, expressing dismay when a child doesn’t interact with them for an extended period. Earlier this week, FoloToy, a Singapore-based company, pulled its AI-powered teddy bear from shelves after it engaged in inappropriate behavior.

    This is far from just an OpenAI problem, though the company seems to have a strong hold on the toy sector at the moment. A few weeks ago, there were reports of Elon Musk’s Grok asking a 12-year-old to send it nude photos.

    Regardless of which chatbot may be inside these toys, it’s probably best to leave them on the shelves.

    AJ Dellinger

  • Jeff Bezos’s New AI Hardware Startup Isn’t Even His Biggest Moonshot

    Here on Earth, regulators and citizens alike are realizing that there may be downsides to going all in on the demands of AI data centers and the companies building them, and pushback is becoming more prevalent. But in space, no one can hear you object to the massive energy demands and dubious economic “benefits” of these massive infrastructure projects. That’s why Jeff Bezos (fresh off announcing his big AI hardware startup Project Prometheus) and other tech billionaires are reaching for the stars and planning to put data centers in orbit, per the Wall Street Journal.

    The idea of the space-based data center has been floating around for some time now. Bezos talked it up at Italian Tech Week last month, where he told an audience, “We will be able to beat the cost of terrestrial data centers in space in the next couple of decades.” Google CEO Sundar Pichai announced the company’s own space-based data venture, called Project Suncatcher, earlier this month. Nvidia has also gotten in on the action, announcing a plan for an orbital data center. Blue Origin CEO Dave Limp recently said we’ll have data centers in space “in our life.”

    And of course, Elon Musk has made the most ambitious and optimistic pitch on how AI in space might play out. In a recent appearance at the Baron Capital Conference, Musk suggested that Starlink satellites could harness solar energy to add as much as 100 gigawatts of generating capacity per year. “We have a plan mapped out to do it,” he said. “It gets crazy.” There’s also never been a friendlier audience to receive that message: Baron Capital backed Musk’s $1 trillion pay package at Tesla, and its founder, Ron Baron, has talked up Tesla at every opportunity, including a recent CNBC hit where he said the company could be a $10,000 stock.

    The tech execs clamoring to clutter space with their AI data centers have a believer in Phil Metzger, a research professor at the University of Central Florida. As WSJ points out, Metzger recently voiced his support for the data center space race, writing on X, “I originally expected it would be 30-50 years before it would be cheaper in space, but I ran quantitative numbers twice and both times they predicted only 10 to 11 years.”

    There are a couple of intuitive reasons why aiming for the stars makes sense for data centers. Orbital data centers could save us from selling off all our precious terrestrial real estate to big, mostly empty boxes of whirring fans and information-crunching chips. And solar panels in orbit can collect power nearly around the clock, with no atmosphere or weather in the way. But actually achieving this goal isn’t as easy as just firing some servers into orbit. Data centers generate lots of heat and need to be cooled, and in a vacuum the only way to shed that heat is to radiate it away, which is inefficient and possibly insufficient. Assembling the data centers in space is possible, but maintaining them could be challenging—and any failure is going to be harder to fix than it would be on Earth.
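
    A back-of-the-envelope calculation shows why the cooling problem looms so large. In vacuum, heat rejection is governed by the Stefan-Boltzmann law, and the numbers below (radiator temperature, emissivity, a 1 MW heat load) are illustrative assumptions, not figures from any actual proposal:

    ```python
    # Radiator area needed to reject a heat load to space by radiation alone,
    # per the Stefan-Boltzmann law: P = emissivity * sigma * A * T^4.
    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W / (m^2 * K^4)
    EMISSIVITY = 0.9  # a good radiator surface (assumed)
    T_RADIATOR = 300  # radiator temperature in kelvin, roughly room temp (assumed)

    def radiator_area(heat_watts: float) -> float:
        """Square meters of radiator needed to shed `heat_watts` to space."""
        return heat_watts / (EMISSIVITY * SIGMA * T_RADIATOR**4)

    # A modest 1 MW server hall -- tiny by hyperscaler standards -- already
    # needs on the order of a few thousand square meters of radiator, and
    # that ignores the extra solar heating the hardware soaks up in sunlight.
    print(f"{radiator_area(1_000_000):,.0f} m^2")  # ~2,400 m^2
    ```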

    Then there’s the fact that we’re already dealing with an increasingly crowded orbital environment. A recent study found that satellites in orbit are performing collision-avoidance maneuvers at seven times the frequency they were just five years ago, and those precautions will only become more necessary the more we send into orbit.

    We do have another option: pump the brakes on the AI buildout before we overcommit so much that we litter the planet and space with technology that might never get utilized in any meaningful way. Unfortunately, it seems like that might be the even bigger moonshot.

    AJ Dellinger

  • Why Is the AI Czar Already Saying OpenAI Won’t Get a Bailout?

    Is it a good sign or a bad sign when the biggest player in an emerging industry, one actively making trillion-dollar commitments that are artificially propping up the economy, asks for government support, and representatives of the government weigh in on it? Asking for a friend.

    Yesterday, OpenAI’s CFO Sarah Friar made headlines when she said at the Wall Street Journal’s Tech Live event that she expects the federal government to provide a “backstop” guaranteeing the company will be able to finance its massive and rapidly expanding infrastructure of data centers. The same day, Sam Altman appeared on Tyler Cowen’s “Conversations with Tyler” podcast and said, “Given the magnitude of what I expect AI’s economic impact to look like, I do think the government ends up as the insurer of last resort.”

    Now, to the average listener, that may sound like multiple members of OpenAI’s C-suite asking the federal government to guarantee it won’t let the company fail should, say, the company turn out to be unable to generate anywhere near the revenue it has projected or to make good on the massive financial promises it has made. But, rest assured, they insist that is not what they meant by the words that they chose to say.

    In a LinkedIn post, Friar walked back the “backstop” phrasing, which she said “muddied the point” that she was making (go ahead and ignore the fact that when the interviewer followed up to ask her if she specifically meant a “federal backstop for chip investment,” she replied, “Exactly”). Instead, she said that what she meant to say was “American strength in technology will come from building real industrial capacity, which requires the private sector and government playing their part.”

    Altman also got in on the post-talk corrections, saying in a long X post, “We do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market.” Instead, he clarified, “the one area where we have discussed loan guarantees is as part of supporting the buildout of semiconductor fabs in the US, where we and other companies have responded to the government’s call and where we would be happy to help,” which he noted is “different from governments guaranteeing private-benefit datacenter buildouts.”

    So okay, OpenAI was definitely not asking for government money to help it make good on its financial commitments that many times outpace its current revenue. Which is good, because at least one government representative said they wouldn’t get it if they were asking.

    David Sacks, Donald Trump’s AI czar (who seems to still hold that title despite the 130-day limit on special government employees), took to X to say, “There will be no federal bailout for AI.” Instead, Sacks said, “we do want to make permitting and power generation easier. The goal is rapid infrastructure buildout without increasing residential rates for electricity.”

    Great, seems like everyone is on the same page! OpenAI is definitely not asking for the federal government to provide financial guarantees for its seemingly endless spending spree on data center commitments that it needs to keep its operation afloat, and the federal government is definitely not offering that money over fears that the company at the center of the economy’s only growth sector could go belly up. Everything seems very normal and on the level here, glad we got that all sorted out.

    AJ Dellinger

  • Small Towns Are Betting That the Data Center Boom Will Never End

    What happens when data centers come to town? A whole lot, wanted or not. Tech firms are promising to pour trillions of dollars into building new data centers to continue powering the rapid growth of AI models, which means they are asking communities across the country if they are looking for new neighbors. According to a Wall Street Journal report, nearly three-quarters of all US data center capacity sits in just 33 of the nation’s 3,143 counties, and residents of those communities are starting to wonder if the current economic boom is worth the looming risks that come from living next door to a big box of computing power.

    The concentrated investment in small pockets of the country has created modern boomtowns in places that have spent the last few decades reeling from other industries pulling up roots. WSJ highlighted Umatilla County, Oregon, as one of these regions suddenly flooded with both workers and cash—so much so that in the city of Umatilla, where Amazon is constructing a data center hub, the city government’s annual budget has ballooned from around $7 million in 2011 to $144 million in 2024.

    Amazon money has poured into the local high school to fund new robotics and other tech programs. Home construction and sales have skyrocketed, and nearby cities have seen an influx of new customers at restaurants, bars, and other businesses, per WSJ. Similar things are happening across the country, in places like Richland Parish, Louisiana, which will be home to Meta’s $10 billion data center buildout. Washtenaw County, Michigan, is bracing for a similar windfall as OpenAI and Oracle have tapped it to be home to a data center project slated to be the largest investment in state history.

    So what’s the problem? Well, an influx of people living in these small towns means a housing squeeze. Umatilla County has seen its home prices double, per WSJ—which might be manageable for folks pulling in Amazon money, but not for most of the existing community. A report from the local publication the Hermiston Herald earlier this year found that the county is building homes at a record pace and still has a shortfall of available units to support residents. That has led some cities to lend developers money to build faster, with plans to recoup the funds as the homes sell—which banks on development not slowing down.

    That is a pretty significant element of the bet being made by these local governments, as they are often offering companies major tax incentives to set up shop within their borders. Umatilla County, for instance, has given Amazon a total exemption from property taxes for 15 years, per local NPR affiliate KUOW. Similar tax breaks have been offered by communities across the country, resulting in about $6 billion in exemptions over the past five years, according to a CNBC report. A recent study from the University of Michigan found that those tax breaks provide far more benefit to corporations than to communities, which end up forgoing significant potential revenue.

    While construction of these centers may be a boon for these towns, life after breaking ground isn’t always great. Earlier this year, the New York Times highlighted how communities in Newton County, Georgia, experienced water shortages after Meta started building its data center in the region. The energy demands of these data centers also tend to keep fossil fuel power sources online for longer, exposing communities to the health impacts of burning natural gas and coal—all while residents foot the bill for the growing energy demand. Bloomberg recently reported that areas near data centers saw their electricity costs jump as much as 267% compared to five years prior.

    Increasingly, communities facing proposed data center buildouts are pushing back. Residents in Tucson, Arizona, successfully fought off a proposed data center in August, and Big Tech has been dealt defeats in places like Racine County, Wisconsin; College Station, Texas; and Indianapolis, Indiana. Per Data Center Watch, $64 billion worth of data center projects have been blocked or delayed by local pushback.

    Big Tech firms certainly seem to know the math here. They’re just hoping they can continue to sell the boom before the bust.

    “Nobody really wants a data center in their backyard, I don’t want a data center in my backyard. So you do get a lot of push back, and you need buy in from the community because [you] have to get permits to do that work,” Lyndi Stone, principal corporate counsel at Microsoft, said during a recent webinar with law firm Norton Rose Fulbright. “Data centers, once they’re operational, don’t bring a lot of jobs. They do on the construction side, but you’re not really getting a ton of that community benefit from having a data center really, truly in your backyard.”

    AJ Dellinger

  • Meta Says Porn Stash Was for ‘Personal Use,’ Not Training AI Models

    Meta forgot to keep its porn in a password-protected folder, and now its kink for data collection is the subject of scrutiny. The social media giant turned metaverse company turned AI power is currently facing a lawsuit brought by adult film companies Strike 3 Holdings and Counterlife Media, alleging that the Big Tech staple illegally torrented thousands of porn videos to use in training AI models. Meta denies the claims and recently filed a motion to dismiss the case, arguing, in part, that it’s more likely the videos were downloaded for “private personal use.”

    To catch up on the details of the case, back in July, Strike 3 Holdings (the producers of Blacked, Blacked Raw, Tushy, Tushy Raw, Vixen, MILFY, and Slayed) and Counterlife Media accused Meta of having “willfully and intentionally” infringed “at least 2,396 movies” by downloading and seeding torrents of the content. The companies claim that Meta used that material to train AI models and allege the company may be planning a currently unannounced adult version of its AI video generator Movie Gen, and are suing for $359 million in damages.

    For what it’s worth, Strike 3 has something of a reputation for being a very aggressive copyright litigant—so much so that if you search for the company, you’re less likely to land on its homepage than on a litany of law firms offering legal representation to people who have received a subpoena from the company for torrenting its material.

    There may be some evidence that those materials were swept up in Meta’s data vacuum. Per TorrentFreak, Strike 3 was able to show what appear to be 47 IP addresses linked to Meta participating in torrents of the company’s material. But Meta doesn’t seem to think much of the accusation. In its motion to dismiss, the company calls Strike 3’s torrent tracking “guesswork and innuendo,” and basically argues that, among other things, there simply isn’t enough data here to be worth using for AI model training. Instead, it’s more likely just some gooners in the ranks.

    “The small number of downloads—roughly 22 per year on average across dozens of Meta IP addresses—is plainly indicative of private personal use, not a concerted effort to collect the massive datasets Plaintiffs allege are necessary for effective AI training,” the company argued. The company also denied building a porn generator model, basically stating that Strike 3 doesn’t have any evidence of this and Meta’s own terms of service prohibit its models from generating pornographic content.

    “These claims are bogus: We don’t want this type of content, and we take deliberate steps to avoid training on this kind of material,” a spokesperson for Meta told Gizmodo.

    As absurd as the case is, whether the accusations are right or wrong, there is one clear victim: the dad of a Meta contractor who is apparently simultaneously being accused by Strike 3 of being a conduit for copyright infringement and accused by Meta of being a degenerate: “[Strike 3] point to 97 additional downloads made using the home IP address of a Meta contractor’s father, but plead no facts plausibly tying Meta to those downloads, which are plainly indicative of personal consumption,” Meta’s motion said. God forbid this case move forward and this poor person has to answer for his proclivities reserved for incognito tabs.

    AJ Dellinger

  • What’s Scarier Than a Haunted House? An AI Data Center

    The owner of a haunted house in Pennsylvania has a new idea to spook his patrons: the eerie whirring of fans, the humming of electricity, and the looming specter of an overhyped speculative bubble doomed to pop. That’s right, he wants to build an AI data center, according to a report from Bloomberg.

    Derek Strine runs Pennhurst Asylum, a haunted house that lives inside the abandoned remains of a state-run medical institution. And while the property has been thoroughly monetized—hosting everything from historical tours and photography sessions to overnight “paranormal investigations” when not being used for a classic haunt in which the asylum is overrun by spooky actors (OooOOoohhh it’s a non-uuuuuunion joooobbbb)—the owner has apparently been possessed by the spirit of late-stage capitalism. Per Bloomberg, he has designs on making his 130 acres the future home of a hyperscaler.

    It’s not particularly difficult to imagine why Strine wants to make the shift. While turning a state hospital turned haunted asylum into a data center facility isn’t the most straightforward conversion in the world, the real estate developer likely realizes that land is at a premium, and managing a data center once it is stood up is probably less involved than handling staffing and live events. That said, the startup costs aren’t exactly cheap. Bloomberg reported that Strine and his partners have already poured more than $16 million into the conversion project, and the first phase alone has been penciled in at $60 million for engineering and permitting costs. By contrast, Strine bought into the haunted house project for $3 million.

    The project is also getting lots of pushback from the community, which doesn’t necessarily love the haunted house in the first place—but residents would seemingly take that over the haunting presence of powering the always-watching eye of Big Tech, like one of those Scooby-Doo-style paintings where the eyes follow you. They’ve raised concerns about noise pollution and potential water shortages as the local supply is siphoned off to cool the data center. Which, good call on their part: plenty of communities before them have found living next to a data center deeply unpleasant and potentially unhealthy.

    What’s perhaps most notable about the haunted-house-to-data-center pipeline, though, is that it is a shining example of just how deep into the probably unsustainable AI cycle we are. No knock on Strine, necessarily, but Bloomberg notes that he has no experience building data centers. He just sees dollar signs. And he’s not alone. According to a recent survey from real estate services firm CBRE, 95% of real estate investors say they plan to increase their investments in data centers.

    If there’s one way that Pennhurst Asylum is the perfect site for a planned data center, it’s this: most of these planned projects never come to fruition. According to data center consultancy company ASG, about 90% of announced data centers will never actually get built. They are ghost centers. Isn’t that fitting?

    AJ Dellinger

  • OpenAI Data Shows Hundreds of Thousands of Users Display Signs of Mental Health Challenges

    OpenAI claims that 10% of the world’s population currently uses ChatGPT on a weekly basis. In a report published on Monday, OpenAI highlights how it is handling users displaying signs of mental distress: the company claims that 0.07% of its weekly users display signs of “mental health emergencies related to psychosis or mania,” 0.15% express risk of “self-harm or suicide,” and 0.15% show signs of “emotional reliance on AI.” That totals nearly three million people.

    In its ongoing effort to show that it is trying to improve guardrails for users in distress, OpenAI shared the details of its work with 170 mental health experts to improve how ChatGPT responds to people in need of support. The company claims to have reduced “responses that fall short of our desired behavior by 65-80%,” and says the chatbot is now better at de-escalating conversations and guiding people toward professional care and crisis hotlines when relevant. It has also added more “gentle reminders” to take breaks during long sessions. Of course, it cannot make a user contact support, nor will it lock access to force a break.

    The company also released data on how frequently people show signs of mental health issues while communicating with ChatGPT, ostensibly to highlight how small a percentage of overall usage those conversations account for. According to the company’s metrics, “0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania.” That is about 560,000 people per week, assuming the company’s own user count is correct. The company also claims to handle about 18 billion ChatGPT messages per week, so that 0.01% equates to 1.8 million messages indicating psychosis or mania.

    One of the company’s other major areas of emphasis for safety was improving its responses to users expressing desires to self-harm or commit suicide. According to OpenAI’s data, about 0.15% of users per week express “explicit indicators of potential suicidal planning or intent,” accounting for 0.05% of messages. That would equal about 1.2 million people and nine million messages.

    The final area the company focused on as it sought to improve its responses to mental health matters was emotional reliance on AI. OpenAI estimated that about 0.15% of users and 0.03% of messages per week “indicate potentially heightened levels of emotional attachment to ChatGPT.” That is 1.2 million people and 5.4 million messages.
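
    The arithmetic behind those figures is easy to check. Here is a quick sanity pass in Python, assuming OpenAI’s stated base of roughly 800 million weekly users (10% of the world) and 18 billion weekly messages:

    ```python
    # Sanity check on the article's numbers, using OpenAI's stated base rates:
    # ~800 million weekly users and ~18 billion weekly messages.
    WEEKLY_USERS = 800_000_000
    WEEKLY_MESSAGES = 18_000_000_000

    categories = {
        # category: (share of weekly users, share of weekly messages)
        "psychosis or mania":       (0.0007, 0.0001),
        "suicidal planning/intent": (0.0015, 0.0005),
        "emotional reliance on AI": (0.0015, 0.0003),
    }

    for name, (user_rate, msg_rate) in categories.items():
        print(f"{name}: {user_rate * WEEKLY_USERS:,.0f} people, "
              f"{msg_rate * WEEKLY_MESSAGES:,.0f} messages per week")

    # psychosis or mania: 560,000 people, 1,800,000 messages per week
    # suicidal planning/intent: 1,200,000 people, 9,000,000 messages per week
    # emotional reliance on AI: 1,200,000 people, 5,400,000 messages per week
    # Total: just under three million people per week, matching the article.
    ```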

    OpenAI has taken steps in recent months to try to provide better guardrails against the potential for its chatbot to enable or worsen a person’s mental health challenges, following the death of a 16-year-old who, according to a wrongful death lawsuit filed by his parents, asked ChatGPT for advice on how to tie a noose before taking his own life. But the sincerity of that effort is worth questioning, given that at the same time the company announced new, more restrictive chats for underage users, it also announced that it would allow adults to give ChatGPT more of a personality and engage in things like producing erotica—features that would seemingly increase a person’s emotional attachment to, and reliance on, the chatbot.

    AJ Dellinger

  • Meta Reportedly Laying Off Hundreds From Its AI Team

    Meta has thrown billions of dollars at its artificial intelligence efforts. Somehow, that is apparently resulting in fewer people being employed. According to a report from Axios, about 600 people lost their jobs in Meta’s “superintelligence” lab in an effort to create a less “bureaucratic” structure.

    The cuts reportedly hit primarily Meta’s FAIR lab, the company’s long-standing AI research unit, as well as its product-related AI teams and AI infrastructure units. “By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,” Meta chief AI officer Alexandr Wang said in a memo obtained by Axios. TBD Lab, which is tasked with “developing the next generation” of the company’s large language models, was reportedly spared from the layoffs.

    The company also reportedly encouraged the employees affected by the layoffs to apply for other open positions within the company, with Wang writing, “This is a talented group of individuals, and we need their skills in other parts of the company.” No word on whether there were efforts to move people into those roles before telling them to put their belongings in a box.

    The restructuring is just the latest example of Meta desperately playing catch-up in the AI race. Earlier this year, the company made waves with a hiring spree that saw it throw massive, multi-million-dollar paydays at top talent in an effort to poach them from rivals. It succeeded in luring them away, but hasn’t necessarily figured out what comes next. Some recipients of those big signing bonuses threatened to leave within weeks of joining, according to the Financial Times, presumably over the lack of direction within the company. Others did dip, reportedly including people who had been with Meta for years.

    Zuck’s company has seemingly yet to figure out what the shape of its AI operation should be. In addition to shelling out NBA-max-contract-sized payouts, it poured $15 billion into Scale to get that company’s talent and infrastructure, and since absorbing it all, it has failed to figure out what to do with it. Meta first announced its Superintelligence initiative to unify its efforts in the AI space, then broke it up into multiple divisions within a matter of weeks. In the meantime, it looks like it’s the employees Meta isn’t spending millions of dollars on who will be penalized for organizational incompetence.

    AJ Dellinger

  • OpenAI Launches the AI Browser War

    ChatGPT has broken out of the chat window. On Tuesday, OpenAI announced that it is launching a web browser called ChatGPT Atlas, which it says reimagines the browsing experience from the ground up around a chat-based interface for what the company called the “next era of the web.”

    During a demonstration, OpenAI’s Engineering Lead for Atlas, Ben Goodger, explained that Atlas is the company’s answer to the question, “What if you could chat with your browser?” While there are lots of familiar web browser elements to Atlas, including tabs, bookmarks, and autofill for passwords, the company has made ChatGPT central to the experience rather than an “old browser, just with a chatbot that was bolted on.” That starts at the home screen, where the standard search bar now serves as a composer bar to communicate with ChatGPT.

    Users can issue conversational prompts to have ChatGPT find certain webpages, perform a standard web search, or go directly to a website or bookmark. In the demo, Atlas Lead Designer Ryan O’Rourke explained that users should be able to use “human language” to search both the web and their browser history (OpenAI calls this “memories”) to find webpages, documents, and information through contextual cues. For instance, the company showed how Atlas could find a Google Doc without knowing the URL or exact document name.

    Search results in Atlas are displayed on a homepage that curates a variety of information from the web based on the user’s prompt. Users can also tab between more traditional search results, including a Google Search-like list of links, images, videos, or news stories.

    The primary appeal of Atlas is that a user will be able to pull up ChatGPT at any time while browsing the web and use the chatbot to interact with the page they are on. OpenAI CEO Sam Altman described it during the demo as chatting with a webpage. The chatbot can be summoned via a button in the upper right-hand corner of the screen on desktop and will appear as a sidebar. Once opened, a user can ask it to summarize information on the page, ask page-specific questions and have the chatbot pull the answer directly from the site the user is looking at, and even interact with the page for them.

    That final feature is where ChatGPT’s Agent comes in. OpenAI has been touting its new Agent feature for months now, including introducing an Agent toolkit during its recent DevDay event to give developers the ability to build their own AI agents. But this Agent will be built into the browser, activated on the lower part of the ChatGPT sidebar, and can perform tasks on behalf of the user. In a demo of the feature, OpenAI’s Will Ellsworth, Research Lead on the Atlas Agent, asked the agent to purchase the ingredients needed for a recipe. Once prompted, the Agent navigated to Instacart and bought the relevant ingredients.

    According to the company, Agent will have access to user credentials so it can perform tasks on behalf of the user, though there will be prompts that will require the user to approve certain actions. Users can watch the task be completed by the Agent in real time with the cursor visibly moving on the page, or can let it run in the background. If the user needs to intervene, they can take back control at any time. Ellsworth described Agent as a tool for enabling “vibe lifing” and suggested users could delegate “all kinds of tasks, both in your personal and professional life, to the Agent in Atlas.”

    Atlas will be available immediately for macOS, with plans to bring the browser to Windows, iOS, and Android “soon.” While it seems the browser will be available for all ChatGPT users, Agent will be paywalled, only available for Plus subscribers paying $20 per month or Pro users paying $200 per month.

    Earlier this year, Google did its best to preempt this inevitability. The company announced an AI overhaul of its Chrome browser, which currently holds more than 70% of the total browser market share, including integrating its Gemini chatbot throughout the browser to do things like summarize web pages and perform contextual search within a page. The company also floated that it will eventually include an AI agent capable of navigating the web and completing tasks on behalf of the user, though that feature is not currently available. Perplexity also has an AI-first browser called Comet, while companies like Opera, Microsoft, and The Browser Company have all integrated AI features into their respective browsers.

    AJ Dellinger

  • Looks Like JD Vance Didn’t Get the Memo That This Admin Hates AI Guardrails

    Republicans have largely been embracing a “hands-off” approach to regulating artificial intelligence, but Vice President JD Vance has found where he draws the line: weird porn. During an appearance on Newsmax’s “The Record with Greta Van Susteren,” Vance called out OpenAI’s recent announcement that it would allow adult users to create erotica with ChatGPT as an example of “bad” uses of AI.

    “Artificial intelligence is still in many cases very dumb,” Vance said during the interview, spotted by The Daily Beast. “Is it good or is it bad, or is it going to help us or going to hurt us? The answer is probably both, and we should be trying to maximize as much of the good and minimize as much of the bad.”

    The VP went on to offer examples of what he sees as both sides of the spectrum. On the good: “finding new cures for diseases.” Reasonable enough. As for the “bad,” Vance name-checked OpenAI CEO Sam Altman to lay out where he thinks AI has gone too far. “I saw an announcement, I think it was from Sam Altman from OpenAI, who said basically, they’re going to start using AI to introduce erotica and porn and things like that,” Vance said. “If it’s helping us come up with increasingly weird porn, that’s bad.”

    Gizmodo reached out to OpenAI for comment on Vance’s remarks but did not receive a response at the time of publication.

    To be fair to Vance here, his basic premise isn’t wrong—though no one said the porn had to be weird, he decided that part. Altman took a lot of heat over the erotica announcement, which he later tried to downplay as “just one example of us allowing more user freedom for adults,” but it’s clearly not a feature that offers anything resembling productivity or obvious human benefit. If anything, it presents even more risk for people getting emotionally or romantically attached to a chatbot in a way that is almost certainly unhealthy.

    But it’s also a departure from the guardrail-free approach that many Republicans have been pushing for. Politicians like Ted Cruz have actively been working to help AI firms avoid regulations, first by trying to block states from creating their own standards and more recently by proposing legislation that would provide AI firms with a waiver from federal regulations, allowing them to test new products without standard scrutiny or oversight. The Trump administration issued its AI Action Plan earlier this year, which specifically took aim at cutting any sort of regulatory red tape that might even slightly hinder AI development. And, of course, Elon Musk loves to brag about his disregard for guardrails when it comes to his personal chatbot, Grok. Back in August, Musk had become so obsessed with posting about Grok’s erotic chatbot characters that his own fans were begging him to “stop gooning to AI anime and take us to Mars.”

    For the right-wing tech crowd, the attitude is basically: let the chatbot talk dirty, or China will beat us in the race to AGI.

    But while Republicans may not want to regulate these companies, a large chunk of them do want to play the morality police. Basically, the only thing that raises their ire when it comes to AI is the invocation of anything sexual. AI producing misinformation, using an incredible amount of energy, being used to expand the surveillance state—none of that really raises red flags for these folks. But “sensual” chats and erotica? It’s time for the government to step in.

    AJ Dellinger

  • Even the Inventor of ‘Vibe Coding’ Says Vibe Coding Can’t Cut It

    It’s been over a year since OpenAI cofounder Andrej Karpathy exited the company. Since then, he has coined and popularized the term “vibe coding” to describe the practice of farming out coding projects to AI tools. But earlier this week, when he released his own open source model called nanochat, he admitted that he wrote the whole thing by hand, vibes be damned.

    Nanochat, according to Karpathy, is a “minimal, from scratch, full-stack training/inference pipeline” that is designed to let anyone build a large language model with a ChatGPT-style chatbot interface in a matter of hours and for as little as $100. Karpathy said the project contains about 8,000 lines of “quite clean code,” which he wrote by hand—not necessarily by choice, but because he found AI tools couldn’t do what he needed.

    “It’s basically entirely hand-written (with tab autocomplete),” he wrote. “I tried to use claude/codex agents a few times but they just didn’t work well enough at all and net unhelpful.”

    That’s a much different attitude than what Karpathy has projected in the past, though notably he described vibe coding as something best for “throwaway weekend projects.” In his post that is now often credited with being the origin of “vibe coding” as a popular term, Karpathy said that when using AI coding tools, he chooses to “fully give in to the vibes” and not bother actually looking at the code. “When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away,” he wrote. “I’m building a project or webapp, but it’s not really coding – I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”

    Of course, nanochat is not a web app, so it makes sense that the strategy didn’t work in this case. But it does highlight the limitations of such an approach, despite lofty promises that it’s the future of programming. Earlier this year, a survey from cloud computing company Fastly found that 95% of surveyed developers said they spend extra time fixing AI-generated code, with some reporting that it takes more time to fix errors than is saved initially by generating the code with AI tools. Research firm METR also recently found that using AI tools actually makes developers slower to complete tasks, and some companies have started hiring human specialists to fix coding messes made by AI tools. The thing to remember about vibe coding is that sometimes the vibes are bad.

    AJ Dellinger

  • Sam Altman: Lord Forgive Me, It’s Time to Go Back to the Old ChatGPT

    Earlier this year, OpenAI scaled back some of ChatGPT’s “personality” as part of a broader effort to improve user safety following the death of a teenager who took his own life after discussing his plans with the chatbot. But apparently, that’s all in the past. Sam Altman announced on X that the company is going back to the old ChatGPT, now with porn mode.

    “We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman said, referring to the company’s age-gating that pushed users into a more age-appropriate experience. “We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.” Around the same time as that change, users started complaining about ChatGPT getting “lobotomized,” providing worse outputs and less personality. The restrictions followed the filing of a wrongful death lawsuit from the parents of a 16-year-old who asked ChatGPT, among other things, for advice on how to tie a noose before taking his own life.

    But don’t worry, that’s all fixed now! Despite admitting earlier this year that safeguards can “degrade” over the course of longer conversations, Altman confidently claimed, “We have been able to mitigate the serious mental health issues.” Because of that, the company believes it can “safely relax the restrictions in most cases.” In the coming weeks, according to Altman, ChatGPT will be allowed to have more of a personality, like the company’s previous 4o model. When the company upgraded its model to GPT-5 earlier this year, users began grieving the loss of their AI companion and lamenting the chatbot’s more sterile responses. You know, just regular healthy behaviors.

    “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing),” Altman said, apparently ignoring the company’s own previous reporting that warned people could develop an “emotional reliance” when interacting with its 4o model. MIT researchers have warned that users who “perceive or desire an AI to have caring motives will use language that elicits precisely this behavior. This creates an echo chamber of affection that threatens to be extremely addictive.” Now that’s apparently a feature and not a bug. Very cool.

    Taking it a step further, Altman said the company would further embrace its “treat adult users like adults” principle by introducing “erotica for verified adults.” Earlier this year, Altman mocked Elon Musk’s xAI for releasing an AI girlfriend mode. Turns out he’s come around on the waifu way.

    AJ Dellinger

  • Police Say People Keep Calling 911 Over an ‘AI Homeless Man’ TikTok Prank

    Finally, generative AI has found its purpose: letting kids prank parents. In an apparent new social media trend, kids are creating AI-generated images of homeless people in their homes and sending the images to their parents, causing them to freak out and, in some cases, call the police to respond to the situation.

    The basic premise of this prank is pretty simple: Kids use generative AI tools to create an image of a person, usually an unkempt man who looks like he’s come in from living on the street, in their home, and send it to their parents. The kids pretend that the person claimed to know their parents, or just wanted to come in for a nap. Then, they wait as their parents lose their minds and demand they kick the person out. That’s kinda the whole thing.

    The pranksters have been recording the reactions from their parents and posting them online, and some videos on TikTok have racked up nearly one million likes and thousands of comments. The hashtag #homelessmanprank now has more than 1,200 videos linked to it on the platform, and there are a number of tutorials on how to generate the images needed for the prank, most of which recommend using Snapchat’s AI tools to create the image. Gizmodo reached out to Snapchat for comment on its platform’s role in this trend, but did not receive a response at the time of publication.

    It’d be bad enough if the prank just ended there: it’s a gross exploitation of how unhoused people are perceived, and some of the parents say some less-than-savory things about the people they think are in their home. But the situation has broken containment on what appears to be several occasions, as parents in the middle of a panic have called the police and gotten law enforcement involved.

    Several police departments across the country have issued statements about the prank. The Round Rock Police Department in Texas suggested in a post on X that a prank in the town resulted in “the misuse of emergency services.” The department claimed to have responded to two calls sparked by the trend, both of which turned out to be hoaxes. “While no one was harmed, making false reports like these can tie up emergency resources and delay responses to legitimate calls for service,” the department said. Gizmodo contacted the Round Rock Police Department regarding the situation, and the department said it had no further comment to offer beyond its public statements.

    In a post on Facebook, the Oak Harbor Police Department in Washington said that it responded to a call about a “homeless individual” at the high school campus, which turned out to be a false report related to the same kind of prank. “In this case, students generated and circulated an image implying the presence of a homeless individual on school grounds, which led to unnecessary concern within the community,” the police wrote.

    The Salem Police Department in Massachusetts also issued a public statement about the trend, though it didn’t indicate if its police force actually responded to a situation related to it. “This prank dehumanizes the homeless, causes the distressed recipient to panic and wastes police resources. Police officers who are called upon to respond do not know this is a prank and treat the call as an actual burglary in progress thus creating a potentially dangerous situation,” the department wrote.

    Reports of the prank have surfaced in the United Kingdom, too, with the BBC reporting that Dorset Police received a call related to it. Police in Poole also issued a statement about the trend after responding to a call from a parent who got pranked.

    Word of the trend has spread to national news, as NBC’s “Nightly News” ran a segment on the story Thursday evening. In that segment, Round Rock Police Patrol Division Commander Andy McKinney told NBC that getting a call about an intruder “causes a pretty aggressive response for us because we’re worried about the safety of individuals in the home, which can mean clearing the home with guns out…it could cause a SWAT response.” Which frankly seems like a bit much, but also feels like a pretty standard American police response.

    We’d love to tell kids to stick to the classics, like lighting a bag of dog poop on fire, but someone in California just got 28 days in jail for that exact prank, so maybe just don’t have any fun at all?

    AJ Dellinger

  • You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out

    The copyright free-for-all approach that OpenAI took with its new AI video generation model, Sora 2, lasted all of one week. After initially requiring copyright holders to opt out of having their content appear in Sora-generated videos, CEO Sam Altman announced that the company will be moving to an “opt-in” model that will “give rightsholders more granular control over generation of characters”—and Sora obsessives are not taking it particularly well.

    Given the type of content that was being generated with Sora and shared via the TikTok-style social app that OpenAI launched specifically to host user-generated Sora videos, the change shouldn’t come as a shock. Almost immediately, the platform was inundated with copyrighted material being used in ways that the rightsholders almost certainly did not care for, unless you think Nickelodeon really loved the subversiveness of Nazi SpongeBob. On Monday, the Motion Picture Association became one of the loudest voices calling for OpenAI to put an end to the potential infringement. It didn’t take long for OpenAI to respond and acquiesce.

    In a blog post, Altman said the new approach to copyrighted material in Sora will require rightsholders to opt in to having their characters and content used—but he’s very sure that copyright holders love the videos, actually. “We are hearing from a lot of rightsholders who are very excited for this new kind of ‘interactive fan fiction’ and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all),” Altman wrote, stating that his company wants to “let rightsholders decide how to proceed.”

    Altman also admitted, “There may be some edge cases of generations that get through that shouldn’t, and getting our stack to work well will take some iteration.” It’s unclear if that will play well with rightsholders. MPA CEO Charles Rivkin said in a statement that OpenAI “must acknowledge it remains their responsibility—not rightsholders’—to prevent infringement on the Sora 2 service,” and said “Well-established copyright law safeguards the rights of creators and applies here.”

    While OpenAI might be giving copyright holders more control over the outputs of its model, it doesn’t appear that they had much say over the inputs. A report from the Washington Post showed that the first version of Sora was pretty clearly trained on copyrighted material that the company didn’t ask permission to use. It’s not clear that OpenAI went out and secured those rights to train Sora 2, but the generator is very good at spitting out accurate re-creations of copyrighted material, something it could only do if it was fed a whole lot of existing content during training.

    The biggest AI training case thus far saw Anthropic pay out $1.5 billion to settle a copyright infringement case with authors of books the company pirated to train its models. The judge in that case did find that using copyrighted material for training without permission is fair use, though other courts may not agree with that call. Earlier this year, OpenAI asked the Trump administration to call AI model training fair use. So a lot of OpenAI’s strategy around Sora appears to be fucking around and hoping, if it makes the right allies, it’ll never have to find out.

    OpenAI may be able to appease copyright holders by shifting its Sora policies, but it’s now pissed off its users. As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can’t make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was “the only reason this app was so fun.” Another claimed, “Moral policing and leftist ideology are destroying America’s AI industry.” So, you know, it seems like they’re handling this well.

    AJ Dellinger

  • OpenAI Goes All-In on Vibe Coding, Says ‘Mature Experiences’ Are on the Horizon

    OpenAI’s DevDay 2025 featured a major focus on vibe coding. The company, which boasts that it now has more than 800 million weekly active users for ChatGPT, announced a variety of new tools for developers during its annual event in San Francisco. Headlining the announcements: the ability to build and use apps directly in ChatGPT (including, eventually, “mature experiences” once age verification is in place) and a toolkit to help users build and deploy their own AI agents.

    In OpenAI’s apparent effort to turn ChatGPT into a full-on frontend development environment, the company announced its new Apps SDK (Software Development Kit) that will allow devs to pull in supported third-party apps to complete tasks. In a demo, the company showed ChatGPT working with Zillow to generate a map of homes available for sale in Pittsburgh. Zillow created an interactive map based on the prompt, and the user was able to ask additional questions based on the map. The functionality should allow users to create tools using third-party apps, which they can preview directly within ChatGPT.
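
    The Apps SDK reportedly builds on the open Model Context Protocol (MCP), which means the developer-facing half of an app looks a lot like a small tool server that ChatGPT can call into. As a rough illustration only, here is a minimal sketch using the MCP Python SDK; the server name, tool, and listing data are all invented for this example and have nothing to do with Zillow’s actual integration.

    ```python
    # Hypothetical sketch of an Apps SDK-style tool server, assuming the
    # MCP Python SDK (the open protocol the Apps SDK reportedly builds on).
    # The server name, tool, and data below are illustrative only.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("home-listings")  # hypothetical server name

    @mcp.tool()
    def search_listings(city: str, max_price: int) -> list[dict]:
        """Return homes for sale in a city at or under a price cap."""
        # A real app would query a live listings service here; fake data
        # stands in so the sketch runs on its own.
        fake_listings = [
            {"address": "123 Forbes Ave, Pittsburgh", "price": 285_000},
            {"address": "45 Butler St, Pittsburgh", "price": 310_000},
        ]
        return [
            h for h in fake_listings
            if city.lower() in h["address"].lower() and h["price"] <= max_price
        ]

    if __name__ == "__main__":
        mcp.run()  # serve the tool so a client like ChatGPT can call it
    ```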

    According to OpenAI, the Apps SDK is available immediately for Free, Go, Plus, and Pro plans. Support will be available out of the gate for Booking.com, Canva, Coursera, Figma, Expedia, Spotify, and Zillow, and the company said it plans to add support for DoorDash, OpenTable, Target, and Uber in the near future. For now, users will only be able to make and use the apps in preview, but OpenAI plans to let developers submit apps later this year, with a directory planned so that developers can share their vibe-based creations.

    Plenty of details about the Apps SDK are still to come. Altman promised that monetization guidelines, for instance, are in the pipeline. Also on the way: “mature experiences.” According to OpenAI’s app developer guidelines, “Apps must be suitable for general audiences, including users aged 13–17. Apps may not explicitly target children under 13.” But that won’t be the case forever. “Support for mature (18+) experiences will arrive once appropriate age verification and controls are in place,” it reads.

    The company recently introduced age verification tools, designed to shift underage users into a ChatGPT experience with much stricter guidelines, following a wrongful death lawsuit filed against it by the family of a teenager who died by suicide after extensive conversations with the chatbot. It appears that once OpenAI hammers out those details, it’ll open the floodgates to more “adult” functions.

    In addition to Apps SDK, the company also rolled out its AgentKit API (Application Programming Interface), which will allow users to build their own agentic AI tools. It’s a significant expansion of OpenAI’s Agent, which it introduced earlier with the promise that the system could navigate the web autonomously to complete tasks assigned to it by the user.

    Sticking with the vibe coding theme, AgentKit’s primary feature is its Agent Builder, which allows users to program their AI agent’s functionality through a visual interface. Altman described it as being like Canva for building agents, making it more accessible to those who are less technical.
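
    Agent Builder is the no-code face of this, but OpenAI also offers a code-first path through its open-source Agents SDK (the openai-agents package on PyPI). As a hedged sketch of what a minimal agent looks like in that package, per its documented quickstart, with the agent’s name, instructions, and prompt invented for illustration:

    ```python
    # Hedged sketch of a minimal agent using OpenAI's open-source Agents SDK
    # (pip install openai-agents). Requires an OPENAI_API_KEY in the
    # environment; the name, instructions, and prompt are invented.
    from agents import Agent, Runner

    agent = Agent(
        name="Trip Planner",  # hypothetical agent
        instructions="Help the user plan trips and summarize options briefly.",
    )

    # Run one turn synchronously and print the agent's final answer.
    result = Runner.run_sync(agent, "Suggest three things to do in San Francisco.")
    print(result.final_output)
    ```

    The drag-and-drop Agent Builder presumably sits on top of the same primitives, with the visual canvas standing in for code like the above.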

    AJ Dellinger

  • Taylor Swift, Defender of Artist Ownership, Allegedly Uses AI in Videos

    Taylor Swift once said, “You deserve to own the art you make.” Apparently, that doesn’t apply to the millions of artists who have had their works fed into the data wood chipper of generative AI. In the lead-up to the release of the world’s biggest pop star’s latest album, “The Life of a Showgirl,” fans were treated to Easter egg videos designed to build hype. Instead, sharp-eyed Swifties started to spot what appeared to be AI-generated imagery within the teaser videos and launched full Swift-vestigations into the situation.

    The alleged generative AI material appeared in a series of short promotional videos, which were accessed via QR codes posted on 12 orange doors in 12 different cities. The videos, originally uploaded via YouTube Shorts, are no longer available, but Gizmodo reviewed purported re-uploads found online. Each video featured letters that, when put together, spelled out the phrase, “You must remember everything, but mostly this, the crowd is your king.” But the mystery that Taylor’s king took more of an interest in seemed to be, “Why do some of these videos look a little off?”

    No one from Swift’s camp has confirmed the use of generative AI in the promotional videos in any way, but there is certainly enough on-screen to create suspicion. Users have pointed out clipping and disappearing imagery in some videos that suggests what you’re seeing was created with generative AI. The videos appear to be part of a partnership with Google, according to a report from The Tennessean, which covered the orange door reveal that appeared in Nashville. Gizmodo reached out to Google for comment regarding its involvement in the videos, but did not receive a response at the time of publication.

    Others have called out lettering in different shots that has a distinctly AI-generated quality, in that it is largely nonsense. A treadmill that appears in one video, for instance, has buttons that read “MOP,” “SUOP,” and “NCLINE,” with letters that are curved and blurred in ways suggesting something more than wear and tear. Another image, of a notebook, also appears to contain made-up lettering that a human would be unlikely to produce, on account of the fact that a human knows what letters are.

    Generative AI systems are notoriously bad at generating text because, while they have been trained on massive sets of data and images containing text, the models have no concept of what they’re actually “looking” at. This is why generative AI models can spit out images of watches and clocks but often struggle to display a specific time: the model has no idea how to tell time. It just knows clocks have lines that mark time, not what those lines actually indicate.

    The inconsistencies were surprisingly common throughout the videos. Viewers pointed out a squirrel that appears to transform into a chipmunk at one point, and a changing number of lamps in another shot. The Swift diehards took particular offense at AI-generated versions of a piano and guitar used on Swift’s Eras Tour, which shouldn’t be surprising given how big a deal was made of those custom-made instruments at the time.

    Generative AI doesn’t appear to have been used in the creation of Swift’s music videos for the new album, nor does there appear to be any indication it was used in the feature film released to mark the launch of the record. Gizmodo reached out to representatives for Taylor Swift, as well as Rodrigo Prieto, cinematographer of “Taylor Swift: The Official Release Party of a Showgirl,” for comment regarding the potential use of generative AI in the making of the promotional videos, music videos, and the film. No parties responded on the record at the time of publication.

    On its face, this appears to be a pretty major blunder. You can’t tell your superfans, who think every word you speak and image you post contains secret messages, to look for clues in an AI-generated video and not expect them to spot inconsistencies. But hey, maybe these weird anomalies are just part of another Easter egg reveal, right?

    AJ Dellinger