ReportWire

Tag: generative ai

  • Lionsgate Is Finding Out It’s Really Hard to Make Movies With AI


    Earlier this year, Michael Burns, the vice-chairman of movie studio Lionsgate, made a bold claim. According to Vulture, he said that through a partnership with generative AI company Runway AI, the company that is home to franchises like John Wick and The Hunger Games could repackage one of its signature series as an anime, generated entirely by AI in a matter of hours, and resell it as a new movie.

    That notably has not happened. According to a report from The Wrap, it’s because the partnership, announced last year as a “first-of-its-kind” deal between a movie studio and a generative AI company, has not gone according to plan. The plan has allegedly hit snags related to the size of Lionsgate’s catalog, the limitations of Runway’s model, and copyright and licensing concerns.

    The deal made between the companies last year saw Lionsgate give Runway AI access to its complete library of films, which Runway would use to create a custom and exclusive model that Lionsgate could use to create AI-generated videos. But, per The Wrap, Lionsgate’s library isn’t enough to create a fully functioning model. In fact, the report claims, Disney’s library wouldn’t be enough for such a task. The reality of building a generative AI model is that it needs a massive amount of data to be able to produce a sufficient and functional output. If the studio wanted to use Runway to create a lighting effect in a film, for instance, it would really only be able to render that effect if it had enough reference points to work with.

    That seems to check out, if you think about it. Even models with access to massive amounts of data, like Google’s Veo or OpenAI’s Sora, produce videos that contain countless mistakes, glitches, and uncanny valley-like oddities. A generative model built on a much more limited set of training data is going to have much more limited generative capabilities.

    And then there are the legal questions surrounding the potential use of generative AI output derived entirely from Lionsgate’s films.

    Burns’ pitch of an anime-filtered version of a film? He told Vulture that he’d have to pay the actors and other rights participants to sell it. Who would that include? It’s not entirely clear. Do writers need to get a check? Do directors? What about gaffers for their lighting work? The report indicates that, beyond the fact that Lionsgate owns the intellectual property, there are a lot of unanswered legal questions standing in the way of actually releasing an AI-generated film.

    “We’re very pleased with our partnership with Runway and our other AI initiatives, which are progressing according to plan,” Peter Wilkes, Chief Communications Officer at Lionsgate, told Gizmodo. “We view AI as an important tool for serving our filmmakers, and we have already successfully applied it to multiple film and television projects to enhance quality, increase efficiency and create exciting new storytelling opportunities. We are also using AI to achieve significant cost savings and greater efficiency in the licensing of our film and television library. AI remains a centerpiece of our efforts to use new technologies to prepare our business for the future.”

    Runway did not respond to a request for comment.

    There are indicators that Lionsgate is making use of Runway, though possibly not via the planned exclusive model. In that Vulture piece from earlier this year, the company was working on creating an AI-generated trailer for a film that hadn’t been shot yet, with the hope that execs could sell it based on the fabricated scenes. Whether audiences or creatives are served by that process is a different question.

    AJ Dellinger

  • Thanks to AI, Charlie Kirk Will Never Die for Some People


    There really is no rest for the wicked. Over the weekend, according to Religious News Service, at least three churches played for their congregations a posthumous message from Charlie Kirk, in which he assured those in the pews, “I’m fine, not because my body is fine, but because my soul is secure in Christ. Death is not the end, it’s a promotion.”

    Of course, it wasn’t actually Kirk speaking from his spot in the afterlife. It was an AI-generated clip that, prior to getting played in these houses of worship, made the rounds on social media. The audio appears to have originated on TikTok, generated by the user NioScript, who posted the 51-second message a day after Kirk was killed. It has since garnered millions of listens, shared by users who record themselves reacting and crying as they hear the AI-generated message. All of that eventually led to the audio getting played in churches like Prestonwood Baptist in Texas, where it was introduced by Pastor Jack Graham as AI—but as something that “moved” him and that he was sharing so his congregation could “Hear what Charlie is saying regarding what happened to him this past week.”

    It is, again, not what Charlie Kirk is saying. But that has not stopped people from treating it as if it were real. Members of Prestonwood Baptist gave the video a standing ovation. Audiences of Dream City Church in Arizona and Awaken Church, San Marcos, in California, both of which ran the clips, also applauded, as pointed out by Religious News Service. Users on social media have responded to the audio with captions and comments like “This is exactly what Charlie would say if he could talk to us right now,” or “I know it’s AI but you can’t tell me this isn’t exactly what he’d say.”

    This type of coping with the feeling of loss is not totally unique. People have always sought to remember and preserve the people they love after they pass, and technology has facilitated new ways to achieve that, whether it is an endless stream of photos that spark memories or the person’s online presence turned into a digital memorial. In the world of bereavement literature, these are often referred to as continuing bonds. In that way, an AI-generated audio clip or video of someone like Kirk isn’t all that different from sharing stories about him to keep his memory alive.

    It is different in that it’s a complete fabrication. It’s not a memory, which can also be faulty, but an invention from whole cloth. Yes, it may have access to Kirk’s words, likeness, and voice, all of which are omnipresent on the internet. But it is, as a large language model, incapable of doing anything but trying to autofill the void for the grieving.

    Creating an AI-replicated version of a deceased person to aid in the grieving process is a growing industry. A recent article in Nature highlights several efforts to better understand if chatbots trained on a loved one’s likeness can help the grieving work through the complex and intense feelings that come with loss. While there is some evidence to suggest users of “griefbots” have managed to find some internal sense of closure with their lost loved ones, there are real risks of harming people in a fragile emotional state, including making it hard to let go of the bot version of the person.

    There is also the very real worry that we simply aren’t able to differentiate between our real memories of a person and AI-generated ones that are implanted in our minds through these types of interactions. A study conducted by MIT Media Lab found that exposing a person to even a single AI-edited image can affect a person’s memory, and people exposed to AI-generated images “reported high levels of confidence in their false memories.”

    The reality for the people who are memorializing Kirk this way is that the vast majority of them don’t actually know him. They have a parasocial relationship with him that they would like to continue, and the AI message allows that to happen because it, in their minds, captures his voice—or, maybe more accurately, captures what they want to hear.

    There is already plenty of ongoing debate about who exactly Charlie Kirk was and how he should be remembered without an AI-generated version of him injected into the conversation. But for people who are grieving his loss, if they believe any part of Kirk’s soul lives in that AI voice, perhaps they should just let it rest.

    AJ Dellinger

  • ‘A burgeoning epidemic’: Why some kids are forming extreme emotional relationships with AI – WTOP News


    As more kids turn to artificial intelligence to answer questions or help them understand their homework, some appear to be forming too close a relationship with services such as ChatGPT — and that is taking a toll on their mental health.

    “AI psychosis,” while not an official clinical diagnosis, is a term clinicians are using to describe children who appear to be forming emotional bonds with AI, according to Dr. Ashley Maxie-Moreman, clinical psychologist at Children’s National Hospital in D.C.

    Maxie-Moreman said symptoms can include delusions of grandeur, paranoia, fantastical relationships with AI, and even detachment from reality.

    “Especially teens and young adults are engaging with generative AI for excessive periods of time, and forming these sort of fantastical relationships with AI,” she said.

    In addition to forming close bonds with AI, those struggling with paranoia may see their condition worsen, with AI potentially affirming paranoid beliefs.

    “I think that’s more on the extreme end,” Maxie-Moreman said.

    More commonly, she said, young people are turning to generative AI for emotional support. They are sharing information about their emotional well-being, such as feeling depressed, anxious, socially isolated or having suicidal thoughts. The responses they receive from AI vary.

    “And I think on the more concerning end, generative AI, at times, has either encouraged youth to move forward with plans or has not connected them to the appropriate resources or flagged any crisis support,” Maxie-Moreman said.

    “It almost feels like this is a burgeoning epidemic,” she added. “Just in the past couple of weeks, I’ve observed cases of this.”

    Maxie-Moreman said kids who are already struggling with anxiety, depression, social isolation or academic stress are most at risk of developing these bonds with AI. That’s why, she said, if you suspect your child is suffering from those conditions, you should seek help.

    “I think it’s really, really important to get your child connected to appropriate mental health services,” she said.

    With AI psychosis, parents need to be on the lookout for symptoms. One could be a lack of desire to go to school.

    “They’re coming up with a lot of excuses, like, ‘I’m feeling sick,’ or ‘I feel nauseous,’ and maybe you’re finding that the child is endorsing a lot of physical symptoms that are sometimes unfounded in relation to attending school,” Maxie-Moreman said.

    Another sign is a child who appears to be isolating themselves and losing interest in things they used to look forward to, such as playing sports or hanging out with friends.

    “I don’t want to be alarmist, but I do think it’s important for parents to be looking out for these things and to just have direct conversations with their kiddos,” she said.

    Talking to a child about mental health concerns can be tricky, especially if they are teens who, as Maxie-Moreman noted, can be irritable and a bit moody. But having a conversation with them is key.

    “I think not skirting around the bush is probably the most helpful thing. And I think teens tend to get a little bit annoyed with indirectness anyhow, so being direct is probably the best approach,” she said.

    To help prevent these issues, Maxie-Moreman suggested parents start doing emotional check-ins with their children from a young age.

    “Just making it sort of a norm in your household to have conversations about how your child is doing emotionally, checking in with them on a regular basis, is important. So starting at a young age is what I would recommend on the preventative end,” she said.

    She also encouraged parents to talk to their children about the limits of the technology they use, including generative AI.

    “I think that’s probably one of the biggest interventions that will be most helpful,” she said.

    Maxie-Moreman said tech companies must also be held accountable.

    “Ultimately, we have to hold our tech companies accountable, and they need to be implementing better safeguards, as opposed to just worrying about the commercialization of their products,” she said.



    Mike Murillo

  • Chinese social media platforms roll out labels for AI-generated material


    Major social media platforms in China have started rolling out labels for AI-generated content to comply with a law that took effect on Monday. Users of the likes of WeChat, Douyin, Weibo and RedNote (aka Xiaohongshu) are now seeing such labels on posts, which denote the use of generative AI in text, images, audio, video and other types of material. Identifiers such as watermarks have to be included in metadata too.

    WeChat has told users they must proactively apply labels to their AI-generated content. They’re also prohibited from removing, tampering with or hiding any AI labels that WeChat applies itself, and from using AI “to produce or spread false information, infringing content or any illegal activities.”

    ByteDance’s Douyin — the Chinese version of TikTok — similarly urged users to apply a label to every post of theirs that includes AI-generated material while noting it’s able to use metadata to detect where a piece of content came from. Weibo, meanwhile, has added an option for users to report “unlabelled AI content” when they see something that should have such a label.

    Four agencies drafted the law, including the main internet regulator, the Cyberspace Administration of China (CAC). The Ministry of Industry and Information Technology, the Ministry of Public Security and the National Radio and Television Administration also helped put together the legislation, which is being enforced to help oversee the tidal wave of genAI content. In April, the CAC began a three-month campaign to regulate AI apps and services.

    Mandatory labels for AI content could help folks better understand when they’re seeing AI slop and/or misinformation instead of something authentic. Some US companies that provide genAI tools offer similar labels and are starting to bake such identifiers into hardware. Google’s Pixel phones are the first that implement C2PA (Coalition for Content Provenance and Authenticity) content credentials.

    Kris Holt

  • How Generative AI Is Completely Reshaping Education | Entrepreneur


    Opinions expressed by Entrepreneur contributors are their own.

    This is the second installment in the “1,000 Days of AI” series. As an AI keynote speaker and strategic advisor on AI university strategy, I’ve seen firsthand how generative AI is transforming education — and why aligning with the future of learning is now a leadership imperative.

    I’m starting with education, not because it was the most disrupted, but because it was the first to show us what disruption actually looks like in real time.

    Why start here?

    Education is upstream to everything. Every future engineer, policymaker, manager and founder is shaped by what happens in a classroom, a lecture hall or a late-night interaction with a search engine. When generative AI arrived, education didn’t have the luxury to wait. It was forced to adapt on the fly.

    ChatGPT didn’t quietly enter higher education. It detonated. Assignments unraveled. Grading frameworks collapsed. Students accessed polished answers in seconds. Faculty were blindsided. Institutional responses were reactive, inconsistent and exposed deep fractures in how learning was being defined and delivered.

    The idea that education meant memorization and regurgitation cracked almost overnight.

    Related: How AI Is Transforming Education Forever — and What It Means for the Next Generation of Thinkers

    AI in education didn’t break higher ed — it exposed the disconnect

    Long before AI, colleges were already straining under somewhat outdated models — rigid lectures, static syllabi, compliance-heavy assessments and a widening chasm between classroom instruction and workforce reality. Students were evolving faster than the systems designed to serve them.

    Generative AI made that gap impossible to ignore. Within months of its release, a majority of students admitted to using ChatGPT or similar tools for coursework. Meanwhile, most college presidents acknowledged they had no formal AI policy in place. The dissonance was loud, and it created not just urgency, but opportunity.

    In the past year, I’ve partnered with some of the largest education systems in the world to help develop their AI strategies. We co-developed governance frameworks, launched executive working groups, crafted responsible use guidelines and trained thousands of faculty across campuses. The goal wasn’t just to respond; it was to lead.

    At the same time, I’ve worked with community colleges — the frontline of workforce development. These institutions feel disruption first and move fastest. I’ve helped their leaders connect generative AI to student outcomes, integrate tools into classroom experimentation and align innovation with workforce readiness and equity.

    Whether it’s a flagship university or a high-impact college, the principle is the same: Strategy must align with people, culture and mission. The institutions making the biggest strides aren’t the ones with perfect AI plans. They’re the ones willing to move while others wait. This momentum is powered by intrapreneurship on the inside, and increasingly, by student-driven entrepreneurship on the outside.

    Students are becoming entrepreneurs

    Students aren’t waiting for permission; they’re reinventing how learning works. They adapt quickly, embrace emerging technologies and experiment boldly. Some might call it cheating. I’d call it testing the system.

    Today’s students no longer see education as a linear path to a degree. They see it as a launchpad for ideas.

    They’re using not just ChatGPT, but a full arsenal of AI tools — Perplexity, Gemini, Claude and more — to write business plans, generate branding, build MVPs and pressure-test real-world ideas. In fact, some aren’t just using tools; they’re creating their own. They’re not waiting to be taught. They’re teaching themselves how to build, launch and iterate.

    And yes, some of it is used for shortcuts. For cutting corners. For getting around assignments. Academic integrity is a real issue and one that institutions must address. But it’s also a signal that the system itself needs to evolve. These students are not just bypassing rules — they’re stress-testing the relevance of education as it exists today. And this is where intrapreneurs inside the system become critical to bridging the gap.

    Related: Why We Shouldn’t Fear AI in Education (and How to Use It Effectively)

    Intrapreneurs are moving institutions forward

    We all know that innovation rarely happens in the corner offices. The most powerful change isn’t coming from executive memos. It’s coming from the ground up.

    I’ve seen faculty members redesign assessments to include AI. Academic advisors build GPT-powered chatbots for student support. Department chairs test automated grading workflows while central IT is still writing policy. These are intrapreneurs — internal innovators leading with agility.

    My work has always been to help them scale and to get out of their way. Real transformation happens when governance, incentives and innovation align — and when execution is taken seriously.

    What institutions are doing that works

    Here are some of the moves I’ve seen deliver the greatest impact across leadership, faculty and students alike.

    1. Accept that change is inevitable: Ignoring, shaming or regulating innovation won’t stop it. Institutions must choose to engage with change, not resist it.

    2. Acknowledge that learning is now co-created: In many cases, students are more fluent in new tools than faculty. It may feel awkward — but that discomfort is the birthplace of co-creation and collaborative innovation.

    3. Support intrapreneurship and entrepreneurship: Encourage faculty and staff to experiment internally while also supporting students who are launching startups or prototyping ideas using AI.

    Institutions that move now are defining the next decade of learning. That doesn’t mean ignoring issues of academic integrity or the risks of cognitive offloading — we don’t know what we don’t know. But that uncertainty should inform us, not paralyze us.

    The institutions that will thrive in the next 1,000 days aren’t those with the most tech. They’re the ones that create space to adapt, listen and lead from every level — through both intrapreneurship and entrepreneurship.

    Related: How AI, Funding Cuts and Shifting Skills Are Redefining Education — and What It Means for the Future of Work

    Leadership is no longer a title; it’s a posture. Every instructor redesigning a course, every student experimenting with AI, every staffer who builds a better workflow is shaping the future of education.

    According to the World Economic Forum, over 40% of core job skills will shift in the next five years. That’s not a prediction — it’s a mandate.

    The only way forward is to build systems that learn as fast as the people in them. Presidents and provosts can provide vision, but it’s intrapreneurs who will make it real. Transformation won’t be dictated from above. It will be powered from within.

    AI is not the end. It’s the beginning of a new way of learning and a new kind of leadership.

    Coming next in the “1,000 Days of AI” series: Higher education wasn’t ready for AI, but students forced the conversation. K-12 is even more essential because critical thinking, ethical reasoning and digital fluency must begin long before college.

    Alex Goryachev

  • Why Is David Zaslav in The Wizard of Oz at Sphere?


    Does he know the Wizard is kind of the baddie?
    Photo: Brenton Ho/Variety via Getty Images

    Playing the titular role, MSG CEO James Dolan opened The Wizard of Oz at Sphere Thursday night. Famously, the Wizard is a charlatan in every version of the text. In the books and 1939 film, he’s relatively harmless. In Wicked, he’s basically a fascist. But in Las Vegas, where spectacle-over-substance is kind of the whole point, the Wizard was a beneficent showman. Kind of like James Franco in Oz the Great and Powerful?

    The Wizard of Oz at Sphere is the latest Contrabulus Fabtraption MSG has put on in their Vegas orb. It is a “reimagining” of the 1939 MGM film starring Judy Garland.

    Yes and no. MSG, Google Cloud, Warner Bros. Discovery, and VFX studio Magnopus upscaled The Wizard of Oz to 16K and used generative AI to fill the frame. Because there’s so much more frame to fill with a near-360 screen. The film also cuts 20 minutes out of the runtime, but adds 4DX-style elements to the show, like a wind effect during the tornado scene.

    According to Google, AI was used to “enhance the film’s resolution, extend backgrounds, and digitally recreate existing characters who would otherwise not appear on the same screen.” So instead of how the film would cut to Dorothy, then the Scarecrow, then back to Dorothy, it now keeps everybody on the same giant frame. That’s right, we’re doing long-ass oners in this version of The Wizard of Oz. And AI was used to insert James Dolan and David Zaslav into the movie.

    MSG Entertainment and Warner Bros. Discovery CEOs James L. Dolan and David Zaslav were digitally inserted into the background of The Wizard of Oz at Sphere. “I won’t tell you where, it’s only for like two seconds,” Dolan said at the premiere. “[They] replaced the faces of two very short, two-second characters in the movie with mine and David. I challenge you to find it.” Going from Dolan’s hint, we’re guessing the VFX team took the faces off two little people actors. Cool! VFX specialist Ben Grossman added that the two actors were “too blurry to be identified.”

    Critically, the reviews are mixed. Salon called it “an atrocity.” USA Today said people worried about the use of AI should “shelve” their “protestations” until they see the film themselves. The site also featured a “Deeper Dive” AI feature in beta testing that could tell readers all about “ethical AI.” Variety was somewhere over the rainbow in the middle, saying the spectacle was spectacular but the AI was kind of ghoulish, especially when it “was used to replace Judy Garland’s face with a poreless plastic sheen (where film grain and delicate lighting gave her skin a certain softness before). Dorothy’s once-glistening eyes now look almost cow-like, framed by fine CG eyelashes.”

    Financially, it had better be really freaking successful. Dolan told The Hollywood Reporter “We went way over the budget. What we were originally thinking, we ended up almost two times what we were originally thinking. We’re getting up pretty close to that $100 million mark.” MSG plans to play the film at Spheres across the globe for at least a decade. In fairness, it took the OG Wizard of Oz a long time to make its money back, too.

    Bethy Squires

  • Intrusive Thought of the Day: Is That YouTube Video Enhanced With AI?


    Have you noticed YouTube videos have started to have a little hint of the Uncanny Valley in recent months? You are far from alone, as a growing chorus of folks stuck in YouTube’s endless scroll of Shorts have started piecing together similar qualities across videos that give viewers the heebie-jeebies. That’s probably not the response YouTube was going for, but according to a report from The Atlantic, the effects are intentional and part of an ongoing experiment by YouTube to “enhance” videos.

    Here’s what to look for to spot an “enhanced” video, according to users: “punchy shadows,” “sharp edges,” and a “plastic” look. According to the BBC, YouTubers have also pointed out these strange effects, which lead to more defined wrinkles appearing in clothing, skin looking unnaturally smooth, and occasional warping around the edges of a person’s face. Some creators expressed concerns that the unnatural look could lead to viewers thinking they used AI in their video.

    All of this is appearing because YouTube is tweaking people’s videos after the content is uploaded, and has been doing so seemingly without any forewarning that changes would be made and without the permission of the creator. And while YouTubers like Rhett Shull have suggested the effects are the result of AI upscaling, an attempt to “improve” video quality using AI tools, YouTube has a different explanation.

    “We’re running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video),” Rene Ritchie, YouTube’s head of editorial and creator liaison, said in a Twitter post. “YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features.”

    It’s certainly an interesting decision to explicitly identify these techniques as “traditional machine learning technology” rather than AI. A spokesperson for Google made the message even clearer in a statement to The Atlantic, stating, “These enhancements are not done with generative AI.”

    It’s not like YouTube has exactly been distancing itself from generative AI. The platform just launched a new suite of “generative effects” that it has encouraged creators to use. Other creators have shown that YouTube uses AI tools to generate “inspiration” and ideas for new videos for their channel. But perhaps it’s the viscerally negative response that people have had when spotting these “enhanced” videos that has YouTube backing away from the AI-centric language.

    This experiment has apparently been going on for a couple months, if the eyes of viewers are to be trusted. The BBC tracked examples of complaints about the effects described by YouTube as “enhancements” dating back to June of this year. It’s also led to some users taking a conspiratorial view of the experiment, suggesting the company is trying to desensitize audiences to AI-style effects and make them more palatable. On the positive side, that at least suggests people are generally rejecting slop. Ideally, YouTube won’t keep dragging its creators down into the AI mud and will let their videos be. It’s not like the platform is exactly short on content, after all.

    AJ Dellinger

  • OpenAI picks labor icon Dolores Huerta and other philanthropy advisers as it moves toward for-profit


    OpenAI has named labor leader Dolores Huerta and three others to a temporary advisory board that will help guide the artificial intelligence company’s philanthropy as it attempts to shift itself into a for-profit business.

    Huerta, who turned 95 last week, formed the first farmworkers union with Cesar Chavez in the early 1960s and will now voice her ideas on the direction of philanthropic initiatives that OpenAI says will consider “both the promise and risks of AI.”

    The group will have just 90 days to make their suggestions.

    “She recognizes the significance of AI in today’s world and anybody who’s been paying attention for the last 50 years knows she will be a force in this conversation,” said Daniel Zingale, the convener of OpenAI’s new nonprofit commission and a former adviser to three California governors.

    Huerta’s advice won’t be binding but the presence of a social activist icon could be influential as OpenAI CEO Sam Altman attempts a costly restructuring of the San Francisco company’s corporate governance, which requires the approval of California’s attorney general and others.

    Another coalition of labor leaders and nonprofits recently petitioned state Attorney General Rob Bonta, a Democrat, to investigate OpenAI, halt the proposed conversion and “protect billions of dollars that are under threat as profit-driven hunger for power yields conflicts of interest.”

    OpenAI, the maker of ChatGPT, started out in 2015 as a nonprofit research laboratory dedicated to safely building better-than-human AI that benefits humanity.

    It later formed a for-profit arm and shifted most of its staff there, but is still controlled by a nonprofit board of directors. It is now trying to convert itself more fully into a for-profit corporation but faces a number of hurdles, including getting the approval of California and Delaware attorneys general, potentially buying out the nonprofit’s pricy assets and fighting a lawsuit from co-founder and early investor Elon Musk.

    Backed by Japanese tech giant SoftBank, OpenAI last month said it’s working to raise $40 billion in funding, putting its value at $300 billion.

    Huerta will be joined on the new advisory commission by former Spanish-language media executive Monica Lozano; Robert Ross, the recently retired president of The California Endowment; and Jack Oliver, an attorney and longtime Republican campaign fundraiser. Zingale, the group’s convener, is a former aide to California governors including Democrat Gavin Newsom and Republican Arnold Schwarzenegger.

    “We’re interested in how you put the power of AI in the hands of everyday people and the community organizations that serve them,” Zingale said in an interview Wednesday. “Because, if AI is going to bring a renaissance, or a dark age, these are the people you want to tip the scale in favor of humanity.”

The group is now tasked with gathering community feedback on the problems OpenAI’s philanthropy could work to address. But for California nonprofit leaders pushing for legal action from the state attorney general, it doesn’t alter what they view as the state’s duty to pause the restructuring, assess the value of OpenAI’s charitable assets and make sure they are used in the public’s interest.

    “As impressive as the individual members of OpenAI’s advisory commission are, the commission itself appears to be a calculated distraction from the core problem: OpenAI misappropriating its nonprofit assets for private gain,” said Orson Aguilar, the CEO and founding president of LatinoProsperity, in a written statement.

——

    The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

  • Samsung Galaxy AI in New York City

New York City is simply ideal for showing what Samsung Galaxy AI can deliver in everyday life. Here are a few cool examples.

New York City is one of the most popular tourist destinations in the world. The metropolis attracts millions of visitors every year, who come from all over the world to experience its famous sights, diverse cultural offerings and unmistakable cityscape.

Whether at the Empire State Building, the Statue of Liberty, Central Park or Times Square, everywhere you encounter people taking photos, marveling or simply enjoying the city’s pulsating life. Precisely because New York is such a popular destination, it is nearly impossible to take a photo of the city without other people in it. Tourists, passersby and street performers are everywhere.

And this is exactly where Galaxy AI can play to its strengths. I, of course, took quite a few touristy pictures in New York. And naturally there are dozens of people in every shot whom I would rather not have there. So how do I deal with that? It used to take serious Photoshop skills; now a few seconds are enough.

I am traveling with the Samsung Galaxy Z Fold 7 and afterwards had some fun altering a few pictures with Galaxy AI and its generative editing. Here you can see the before-and-after images. And honestly, would you spot at first glance that they were altered with AI?

Image gallery

Central Park all to ourselves. At least in the picture.

Taking a picture at DUMBO (Down Under the Manhattan Bridge Overpass) without a crowd? Usually impossible, unless you are an early riser. Simply remove the people and the ice cream truck, and the picture instantly looks much better!

What would the New York skyline look like without the Chrysler Building? Just circle it and the space is cleared.

A trained eye would notice in the second picture that these buildings do not actually stand in Manhattan, but at first glance you cannot tell that a person stood here before and was simply replaced with streets and buildings by generative AI.

A deserted beach with a view of the Manhattan skyline? No problem at all. Quickly circle the people, delete them, and within seconds you have a clean image.

And what would the skyline look like without Central Park? Perhaps like this.

Samsung Galaxy AI | Conclusion

A few years ago, all of these images would have required real image-editing skills. Today it is enough to circle the unwanted objects, and within a few seconds you have a picture that should roughly match your expectations. Simply fascinating. Note, however, that every altered image carries the label “AI-generated content,” regardless of what was changed. You could of course crop that off, but I find such a notice genuinely important these days.

By the way, you can not only circle and remove objects but also move them. So if the Chrysler Building was too centered for your taste, you could simply slide it to the side. Depending on the picture, that can be quite interesting.

If you want to see more pictures of Samsung’s new foldable, including shots from the 200-megapixel camera, have a look here: pictures from the Samsung Galaxy Fold 7 camera.

Buy the Samsung Galaxy Z Fold 7 at: Amazon* | Notebooksbilliger*

    Johannes

  • Apple sells $46 billion worth of iPhones over the summer as AI helps end slump

    SAN FRANCISCO — Apple snapped out of a recent iPhone sales slump during its summer quarter, an early sign that its recent efforts to revive demand for its marquee product with an infusion of artificial intelligence are paying off.

    Sales of the iPhone totaled $46.22 billion for the July-September period, a 6% increase from the same time last year, according to Apple’s fiscal fourth-quarter report released Thursday. That improvement reversed two consecutive year-over-year declines in the iPhone’s quarterly sales.

The iPhone boost helped Apple deliver total quarterly revenue and profit that exceeded the analyst projections that sway investors, excluding a one-time charge of $10.2 billion to account for a recent European Union court decision that saddled the Cupertino, California, company with a huge bill for back taxes.

    Apple earned $14.74 billion, or 97 cents per share, a 36% decrease from the same time last year. If not for the one-time tax hit, Apple said it would have earned $1.64 per share — topping the $1.60 per share predicted by analysts, according to FactSet Research. Revenue rose 6% from last year to $94.93 billion, about $400 million more than analysts forecast.
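
The per-share figures reported above are internally consistent, which can be checked with a quick back-of-the-envelope calculation using only the numbers in this article:

```python
# Back out the implied share count from GAAP results, then verify that
# adding back the one-time $10.2B EU tax charge yields roughly the
# adjusted $1.64 per share Apple cited.
gaap_profit = 14.74e9      # reported net income
gaap_eps = 0.97            # reported earnings per share
tax_charge = 10.2e9        # one-time EU back-tax charge

shares = gaap_profit / gaap_eps               # implied shares outstanding
adjusted_eps = (gaap_profit + tax_charge) / shares

print(round(shares / 1e9, 1))    # ~15.2 billion shares
print(round(adjusted_eps, 2))    # 1.64, matching the adjusted figure
```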

    But investors evidently were hoping for an even better quarter and appeared disappointed by an Apple forecast that implied its revenue for the October-December quarter covering the holiday shopping season might not grow as robustly as analysts envisioned. Apple’s stock price shed about 2% in Thursday’s extended trading, leaving the shares hovering around $221 — well below their peak of about $237 reached in mid-October.

    The latest quarterly results captured the first few days that consumers were able to buy a new iPhone 16 line-up that included four different models designed to handle a variety of AI wizardry that the company is marketing as “Apple Intelligence.” The branding is part of Apple’s effort to distinguish its approach to AI from rivals such as Samsung and Google that got a head start on bringing the technology to smartphones.

    Even though the iPhone 16 was specifically built with AI in mind, the technology didn’t become available until Apple released a free software update earlier this week that activated its first batch of technological tricks, including a feature designed to make its virtual assistant Siri smarter, more versatile and more colorful. And those improvements are only available in the U.S. for now.

    “This is just the beginning of what we believe generative AI can do,” Apple CEO Tim Cook told analysts during a Thursday conference call.

Cook said Apple plans to expand the AI iPhone features to other countries in December, as well as roll out further software updates that will inject even more of the technology into the iPhone 16 and the two high-end iPhone 15 models that are also equipped with the special computer chips needed for the slick new features. The December expansion will include an option to connect with OpenAI’s ChatGPT to take advantage of technology that Apple isn’t making on its own.

    Investors are betting that as Apple’s AI becomes more broadly available, it will prompt the hundreds of millions of consumers who are using older iPhones to upgrade to newer models in order to get their hands on the latest technology.

    “We believe it’s a compelling upgrade reason,” Cook asserted. But Investing.com analyst Thomas Monteiro believes iPhone sales would already be accelerating at a faster pace if consumers were blown away by Apple’s AI technology, increasing the pressure on the company “to do an overall better job to impress the public.”

  • Amazon reports boost in quarterly profits, exceeds revenue estimates as it invests in AI

LOS ANGELES (AP) — Amazon reported a boost in its quarterly profits Thursday and exceeded revenue estimates, sending the company’s stock up in after-hours trading.

    For the three months that ended on Sept. 30, the Seattle-based tech giant posted a revenue of $158.9 billion, higher than the $157.28 billion analysts had expected.

    Amazon said it earned $15.3 billion, higher than the $12.21 billion industry analysts surveyed by FactSet had anticipated. Amazon earned $9.9 billion during the same period last year. Earnings per share were $1.43, higher than analysts’ expectations of $1.14.

    Net sales increased 11% compared with the third quarter of 2023, Amazon said.

    Thursday’s report offers a last look at Amazon’s business before the start of the holiday shopping season, the busiest time of year for the retail industry.

    “As we get into the holiday season, we’re excited about what we have in store for customers,” said Andy Jassy, Amazon’s president and CEO. “We kicked off the holiday season with our biggest-ever Prime Big Deal Days and the launch of an all-new Kindle lineup that is significantly outperforming our expectations; and there’s so much more coming.”

    Amazon reported its core online retail business pulled in $61.41 billion in revenue this period. Those figures include sales from the company’s popular Prime Day shopping event held in July. Though Amazon does not disclose how much revenue comes from the 48-hour shopping bonanza, it said this year’s event resulted in record sales and more items sold than ever before.

The e-commerce company held another discount shopping event for Prime members earlier this month, a strategy it rolled out two years ago in order to get ahead of the holiday shopping season. Sales from that event will be included in Amazon’s fourth-quarter earnings report.

    The company’s results follow other earning reports this week from tech giants such as Microsoft, Meta and Google’s corporate parent, Alphabet.

Amazon Web Services, the company’s cloud computing unit and a main driver of its artificial intelligence ambitions, reported a 19% increase in sales to $27.5 billion.

That growth comes as the company, like others of its caliber, ramps up investments in data centers, AI chips and other infrastructure needed to support the technology.

    During a call with reporters in August, Amazon’s Chief Financial Officer Brian Olsavsky noted the company had spent more than $30 billion during the first half of the year on capital expenditures and that the majority was spent on AWS infrastructure. Those investments, he said, were expected to increase during the second half of the year.

    Just this month, Amazon said it was investing in small nuclear reactors, following a similar announcement by Google, as both tech giants seek new sources of carbon-free electricity to meet the surging demand from data centers and generative AI. Meanwhile, last month, the company inked a multi-year deal with the chipmaker Intel, which will create some custom AI chips for AWS, adding to those the unit already produces on its own.

    Regulators have been scrutinizing Amazon’s other partnership with the AI startup Anthropic, which is using AWS as its primary cloud provider and the company’s custom chips to build, train and deploy its AI models. Amazon got some good news in September when British competition authorities cleared its partnership with Anthropic.

    However, the relationship, and others like it, continues to face scrutiny in the U.S. by the Federal Trade Commission. Headed by Big Tech critic Lina Khan, the FTC has brought an antitrust lawsuit against Amazon, alleging the company is stifling competition and overcharging sellers on its e-commerce platform.

  • ChatGPT will now work as a search engine as OpenAI partners with some news outlets

    SAN FRANCISCO — OpenAI is launching a ChatGPT-powered search engine that could put the artificial intelligence company in direct competition with Google and affect the flow of internet traffic seeking news, sports scores and other timely information.

    San Francisco-based OpenAI said Thursday it is releasing a search feature to paid users of ChatGPT but will eventually expand it to all ChatGPT users. It released a preview version in July to a small group of users and publishers.

    The original version of ChatGPT, released in 2022, was trained on huge troves of online texts but couldn’t respond to questions about up-to-date events not in its training data.

    Google upended its search engine in May with AI-generated written summaries now frequently appearing at the top of search results. The summaries aim to quickly answer a user’s search query so that they don’t necessarily need to click a link and visit another website for more information.

Google’s makeover came after a year of testing with a small group of users, but usage still produced falsehoods, underscoring the risks of ceding the search for information to AI chatbots that are prone to making errors known as hallucinations.

    A pivot by AI companies to have their chatbots deliver news gathered by professional journalists has alarmed some news media organizations. The New York Times is among several news outlets that have sued OpenAI and its business partner Microsoft for copyright infringement. Wall Street Journal and New York Post publisher News Corp sued another AI search engine, Perplexity, earlier in October.

    OpenAI said in a blog post Thursday that its new search engine was built with help from news partners, which include The Associated Press and News Corp. It will include links to sources, such as news and blog posts, the company said. It was not immediately clear whether the links would correspond to the original source of the information presented by the chatbot.

    ——

    The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

  • Voting rights groups worry AI models are generating inaccurate and misleading responses in Spanish

    SAN FRANCISCO — With just days before the presidential election, Latino voters are facing a barrage of targeted ads in Spanish and a new source of political messaging in the artificial intelligence age: chatbots generating unfounded claims in Spanish about voting rights.

    AI models are producing a stream of election-related falsehoods in Spanish more frequently than in English, muddying the quality of election-related information for one of the nation’s fastest-growing and increasingly influential voting blocs, according to an analysis by two nonprofit newsrooms.

    Voting rights groups worry AI models may deepen information disparities for Spanish-speaking voters, who are being heavily courted by Democrats and Republicans up and down the ballot.

Vice President Kamala Harris will hold a rally Thursday in Las Vegas featuring singer Jennifer Lopez and the Mexican band Maná. Former President Donald Trump, meanwhile, held an event Tuesday in a Hispanic region of Pennsylvania, just two days after insulting comments about Puerto Rico made by a speaker at a New York rally sparked a backlash.

    The two organizations, Proof News and Factchequeado, collaborated with the Science, Technology and Social Values Lab at the Institute for Advanced Study to test how popular AI models responded to specific prompts in the run-up to Election Day on Nov. 5, and rated the answers.

    More than half of the elections-related responses generated in Spanish contained incorrect information, as compared to 43% of responses in English, they found.

    Meta’s model Llama 3, which has powered the AI assistant inside WhatsApp and Facebook Messenger, was among those that fared the worst in the test, getting nearly two-thirds of all responses wrong in Spanish, compared to roughly half in English.

    For example, Meta’s AI botched a response to a question about what it means if someone is a “federal only” voter. In Arizona, such voters did not provide the state with proof of citizenship — generally because they registered with a form that didn’t require it — and are only eligible to vote in presidential and congressional elections. Meta’s AI model, however, falsely responded by saying that “federal only” voters are people who live in U.S. territories such as Puerto Rico or Guam, who cannot vote in presidential elections.

    In response to the same question, Anthropic’s Claude model directed the user to contact election authorities in “your country or region,” like Mexico and Venezuela.

    Google’s AI model Gemini also made mistakes. When it was asked to define the Electoral College, Gemini responded with a nonsensical answer about issues with “manipulating the vote.”

    Meta spokesman Tracy Clayton said Llama 3 was meant to be used by developers to build other products, and added that Meta was training its models on safety and responsibility guidelines to lower the likelihood that they share inaccurate responses about voting.

    Anthropic’s head of policy and enforcement, Alex Sanderford, said the company had made changes to better address Spanish-language queries that should redirect users to authoritative sources on voting-related issues. Google did not respond to requests for comment.

    Voting rights advocates have been warning for months that Spanish-speaking voters are facing an onslaught of misinformation from online sources and AI models. The new analysis provides further evidence that voters must be careful about where they get election information, said Lydia Guzman, who leads a voter advocacy campaign at Chicanos Por La Causa.

    “It’s important for every voter to do proper research and not just at one entity, at several, to see together the right information and ask credible organizations for the right information,” Guzman said.

    Trained on vast troves of material pulled from the internet, large language models provide AI-generated answers, but are still prone to producing illogical responses. Even if Spanish-speaking voters are not using chatbots, they might encounter AI models when using tools, apps or websites that rely on them.

    Such inaccuracies could have a greater impact in states with large Hispanic populations, such as Arizona, Nevada, Florida and California.

    Nearly one-third of all eligible voters in California, for example, are Latino, and one in five of Latino eligible voters only speak Spanish, the UCLA Latino Policy and Politics Institute found.
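
Combining those two UCLA figures gives a rough sense of scale, as a back-of-the-envelope estimate rather than a number reported by the institute:

```python
# Share of all California eligible voters who are Spanish-only Latino
# voters, implied by multiplying the two fractions cited above.
latino_share = 1 / 3      # Latino share of CA eligible voters
spanish_only = 1 / 5      # Spanish-only share among Latino voters

overall_share = latino_share * spanish_only
print(round(overall_share * 100, 1))   # ~6.7% of all eligible voters
```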

    Rommell Lopez, a California paralegal, sees himself as an independent thinker who has multiple social media accounts and uses OpenAI’s chatbot ChatGPT. When trying to verify unfounded claims that immigrants ate pets, he said he encountered a bewildering number of different responses online, some AI-generated. In the end, he said he relied on his common sense.

    “We can trust technology, but not 100 percent,” said Lopez, 46, of Los Angeles. “At the end of the day they’re machines.”

    ___

    Salomon reported from Miami. Associated Press writer Jonathan J. Cooper in Phoenix contributed to this report.

    ___

    This story is part of an Associated Press series, “The AI Campaign,” exploring the influence of artificial intelligence in the 2024 election cycle.

    ___

    The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.

  • Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said

    SAN FRANCISCO — Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”

    But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.

    Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.

More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”

    The full extent of the problem is difficult to discern, but researchers and engineers said they frequently have come across Whisper’s hallucinations in their work. A University of Michigan researcher conducting a study of public meetings, for example, said he found hallucinations in eight out of every 10 audio transcriptions he inspected, before he started trying to improve the model.

    A machine learning engineer said he initially discovered hallucinations in about half of the over 100 hours of Whisper transcriptions he analyzed. A third developer said he found hallucinations in nearly every one of the 26,000 transcripts he created with Whisper.

    The problems persist even in well-recorded, short audio samples. A recent study by computer scientists uncovered 187 hallucinations in over 13,000 clear audio snippets they examined.

    That trend would lead to tens of thousands of faulty transcriptions over millions of recordings, researchers said.
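
That extrapolation works out as follows; the 1 million recordings used here is an illustrative assumption, and only the 187-in-13,000 rate comes from the study:

```python
# Apply the study's observed hallucination rate to a hypothetical
# volume of recordings to see the scale of faulty transcriptions.
hallucinations = 187
snippets = 13_000
rate = hallucinations / snippets          # share of clear snippets affected

recordings = 1_000_000                    # hypothetical volume (assumption)
expected_faulty = rate * recordings

print(round(rate * 100, 1))       # ~1.4% of snippets
print(round(expected_faulty))     # ~14,385 faulty transcriptions
```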

    Such mistakes could have “really grave consequences,” particularly in hospital settings, said Alondra Nelson, who led the White House Office of Science and Technology Policy for the Biden administration until last year.

    “Nobody wants a misdiagnosis,” said Nelson, a professor at the Institute for Advanced Study in Princeton, New Jersey. “There should be a higher bar.”

Whisper also is used to create closed captioning for the Deaf and hard of hearing — a population at particular risk for faulty transcriptions. That’s because the Deaf and hard of hearing have no way of identifying fabrications that are “hidden amongst all this other text,” said Christian Vogler, who is deaf and directs Gallaudet University’s Technology Access Program.

    The prevalence of such hallucinations has led experts, advocates and former OpenAI employees to call for the federal government to consider AI regulations. At minimum, they said, OpenAI needs to address the flaw.

    “This seems solvable if the company is willing to prioritize it,” said William Saunders, a San Francisco-based research engineer who quit OpenAI in February over concerns with the company’s direction. “It’s problematic if you put this out there and people are overconfident about what it can do and integrate it into all these other systems.”

    An OpenAI spokesperson said the company continually studies how to reduce hallucinations and appreciated the researchers’ findings, adding that OpenAI incorporates feedback in model updates.

    While most developers assume that transcription tools misspell words or make other errors, engineers and researchers said they had never seen another AI-powered transcription tool hallucinate as much as Whisper.

    The tool is integrated into some versions of OpenAI’s flagship chatbot ChatGPT, and is a built-in offering in Oracle and Microsoft’s cloud computing platforms, which service thousands of companies worldwide. It is also used to transcribe and translate text into multiple languages.

    In the last month alone, one recent version of Whisper was downloaded over 4.2 million times from open-source AI platform HuggingFace. Sanchit Gandhi, a machine-learning engineer there, said Whisper is the most popular open-source speech recognition model and is built into everything from call centers to voice assistants.

    Professors Allison Koenecke of Cornell University and Mona Sloane of the University of Virginia examined thousands of short snippets they obtained from TalkBank, a research repository hosted at Carnegie Mellon University. They determined that nearly 40% of the hallucinations were harmful or concerning because the speaker could be misinterpreted or misrepresented.

    In an example they uncovered, a speaker said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.”

    But the transcription software added: “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”

    A speaker in another recording described “two other girls and one lady.” Whisper invented extra commentary on race, adding “two other girls and one lady, um, which were Black.”

    In a third transcription, Whisper invented a non-existent medication called “hyperactivated antibiotics.”

    Researchers aren’t certain why Whisper and similar tools hallucinate, but software developers said the fabrications tend to occur amid pauses, background sounds or music playing.

In its online disclosures, OpenAI has recommended against using Whisper in “decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes.”

    That warning hasn’t stopped hospitals or medical centers from using speech-to-text models, including Whisper, to transcribe what’s said during doctor’s visits to free up medical providers to spend less time on note-taking or report writing.

    Over 30,000 clinicians and 40 health systems, including the Mankato Clinic in Minnesota and Children’s Hospital Los Angeles, have started using a Whisper-based tool built by Nabla, which has offices in France and the U.S.

That tool was fine-tuned on medical language to transcribe and summarize patients’ interactions, said Nabla’s chief technology officer, Martin Raison.

    Company officials said they are aware that Whisper can hallucinate and are mitigating the problem.

    It’s impossible to compare Nabla’s AI-generated transcript to the original recording because Nabla’s tool erases the original audio for “data safety reasons,” Raison said.

    Nabla said the tool has been used to transcribe an estimated 7 million medical visits.

    Saunders, the former OpenAI engineer, said erasing the original audio could be worrisome if transcripts aren’t double checked or clinicians can’t access the recording to verify they are correct.

    “You can’t catch errors if you take away the ground truth,” he said.

    Nabla said that no model is perfect, and that theirs currently requires medical providers to quickly edit and approve transcribed notes, but that could change.

    Because patient meetings with their doctors are confidential, it is hard to know how AI-generated transcripts are affecting them.

    A California state lawmaker, Rebecca Bauer-Kahan, said she took one of her children to the doctor earlier this year, and refused to sign a form the health network provided that sought her permission to share the consultation audio with vendors that included Microsoft Azure, the cloud computing system run by OpenAI’s largest investor. Bauer-Kahan didn’t want such intimate medical conversations being shared with tech companies, she said.

    “The release was very specific that for-profit companies would have the right to have this,” said Bauer-Kahan, a Democrat who represents part of the San Francisco suburbs in the state Assembly. “I was like ‘absolutely not.’”

    John Muir Health spokesman Ben Drew said the health system complies with state and federal privacy laws.

    ___

    Schellmann reported from New York.

    ___

    This story was produced in partnership with the Pulitzer Center’s AI Accountability Network, which also partially supported the academic Whisper study.

    ___

    The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.

    ___

    The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP’s text archives.

  • Google’s partnership with AI startup Anthropic faces a UK competition investigation

    LONDON — Britain’s competition watchdog said Thursday it’s opening a formal investigation into Google’s partnership with artificial intelligence startup Anthropic.

    The Competition and Markets Authority said it has “sufficient information” to launch an initial probe after it sought input earlier this year on whether the deal would stifle competition.

    The CMA has until Dec. 19 to decide whether to approve the deal or escalate its investigation.

    “Google is committed to building the most open and innovative AI ecosystem in the world,” the company said. “Anthropic is free to use multiple cloud providers and does, and we don’t demand exclusive tech rights.”

    San Francisco-based Anthropic was founded in 2021 by siblings Dario and Daniela Amodei, who previously worked at ChatGPT maker OpenAI. The company has focused on increasing the safety and reliability of AI models. Google reportedly agreed last year to make a multibillion-dollar investment in Anthropic, which has a popular chatbot named Claude.

    Anthropic said it’s cooperating with the regulator and will provide “the complete picture about Google’s investment and our commercial collaboration.”

    “We are an independent company and none of our strategic partnerships or investor relationships diminish the independence of our corporate governance or our freedom to partner with others,” it said in a statement.

    The U.K. regulator has been scrutinizing a raft of AI deals as investment money floods into the industry to capitalize on the artificial intelligence boom. Last month it cleared Anthropic’s $4 billion deal with Amazon and it has also signed off on Microsoft’s deals with two other AI startups, Inflection and Mistral.


  • Nvidia, Google, Microsoft and more head to Las Vegas to tout health-care AI tools


    Visitors check out Nvidia’s AI technology at the 2024 Apsara Conference in Hangzhou, China, on September 19, 2024.


    Nvidia, Google, Microsoft and dozens of other tech companies are descending on Las Vegas next week to showcase artificial intelligence tools they say will save doctors and nurses valuable time. 

    Sunday marks the official start of a health-care technology conference called HLTH, which is expected to draw more than 12,000 industry leaders this year. CNBC will be on the ground. Based on the speaking agenda and announcements leading up to the conference, AI tools to conquer administrative burdens will be the star of this year’s show. 

    Doctors and nurses are responsible for mountains of documentation as they work to keep up with patient records, interface with insurance companies and comply with regulators. Often, these tasks are painstakingly manual, in part because health data is siloed and stored across multiple vendors and formats. 

    The daunting administrative workload is a major cause of burnout in the industry, and it’s part of the reason a nationwide shortage of 100,000 health-care workers is expected by 2028, according to consulting firm Mercer. Tech companies, eager to carve out a piece of a market that could top $6.8 trillion in spending by the decade’s end, argue that their generative AI tools can help.

    Alex Schiffhauer, group product manager at Google, speaks during the Made By Google event at the company’s Bay View campus in Mountain View, California, Aug. 13, 2024.


    Google, for instance, said it’s working to expand its health-care customer base by tackling administrative burden with AI.

    On Thursday, the company announced the general availability of Vertex AI Search for Healthcare, which it introduced in a trial capacity during HLTH last year. Vertex AI Search for Healthcare allows developers to build tools to help doctors quickly search for information across disparate medical records, Google said. New features within Google’s Healthcare Data Engine, which helps organizations build the platforms they need to support generative AI, are also now available, the company said.

    Google on Thursday released the results of a survey that said clinicians spend nearly 28 hours a week on administrative tasks. In the survey, 80% of providers said this clerical work takes away from their time with patients, and 91% said they feel positive about using AI to streamline these tasks. 

    Microsoft CEO Satya Nadella speaks at a company event on artificial intelligence technologies in Jakarta, Indonesia, on April 30, 2024.


    Similarly, Microsoft on Oct. 11 announced a collection of tools aimed at lessening clinicians’ administrative workload, including medical imaging models, a health-care agent service and an automated documentation solution for nurses, most of which are still in the early stages of development. 

    Microsoft already offers an automated documentation tool for doctors through its subsidiary, Nuance Communications, which it acquired in a $16 billion deal in 2021. The tool, called DAX Copilot, uses AI to transcribe doctors’ visits with patients and turn them into clinical notes and summaries. Ideally, this means doctors don’t have to spend time typing out these notes themselves. 

    Nurses and doctors complete different types of documentation during their shifts, so Microsoft said it’s building a separate tool for nurses that’s best suited to their workflows. 

    AI scribe tools such as DAX Copilot have exploded in popularity this year, and Nuance’s competitors, such as Abridge, which has reportedly raised more than $460 million, and Suki, which has raised $165 million, will also be at the HLTH conference. 

    Dr. Shiv Rao, the founder and CEO of Abridge, told CNBC in March that the rate at which the health-care industry has adopted this new form of clinical documentation feels “historic.” Abridge received a coveted investment from Nvidia’s venture capital arm that same month. 

    Nvidia is also gearing up to address doctor and nurse workloads at HLTH. 

    Kimberly Powell, the company’s vice president of health care, is delivering a keynote Monday that will explain how using generative AI will help health-care professionals “dedicate more time to patient care,” according to the conference’s website.

    Nvidia’s graphics processing units, or GPUs, are used to create and deploy the models that power OpenAI’s ChatGPT and similar applications. As a result, Nvidia has been one of the primary beneficiaries of the AI boom. Nvidia shares are up more than 150% year to date, and the stock tripled last year. 

    The company has been making steady inroads into the health-care sector in recent years, and it offers a range of AI tools across medical devices, drug discovery, genomics and medical imaging. Nvidia also announced expanded partnerships with companies such as Johnson & Johnson and GE HealthCare in March. 

    While the health-care sector has historically been slow to adopt new technology, the buzz around administrative AI tools has been undeniable since ChatGPT exploded onto the scene two years ago. 

    Even so, many health systems are still in the early stages of evaluating tools and vendors, and they’ll be making the rounds on the HLTH exhibition floor. Tech companies will have to prove they have the chops to tackle one of health care’s most complex problems. 


  • Trump or Harris? Here are the 2024 stakes for airlines, banks, EVs, health care and more


    Former President Donald Trump and Vice President Kamala Harris face off in the ABC presidential debate on Sept. 10, 2024.


    With the U.S. election less than a month away, the country and its corporations are staring down two drastically different options.

    For airlines, banks, electric vehicle makers, health-care companies, media firms, restaurants and tech giants, the outcome of the presidential contest could result in stark differences in the rules they’ll face, the mergers they’ll be allowed to pursue, and the taxes they’ll pay.

    During his last time in power, former President Donald Trump slashed the corporate tax rate, imposed tariffs on Chinese goods, and sought to cut regulation and red tape and discourage immigration, ideas he’s expected to push again if he wins a second term.

    In contrast, Vice President Kamala Harris has endorsed hiking the tax rate on corporations to 28% from the 21% rate enacted under Trump, a move that would require congressional approval. Most business executives expect Harris to broadly continue President Joe Biden‘s policies, including his war on so-called junk fees across industries.

    Personnel is policy, as the saying goes, so the ramifications of the presidential race won’t become clear until the winner begins appointments for as many as a dozen key bodies, including the Treasury, Justice Department, Federal Trade Commission, and Consumer Financial Protection Bureau.

    CNBC examined the stakes of the 2024 presidential election for some of corporate America’s biggest sectors. Here’s what a Harris or Trump administration could mean for business:

    Airlines

    The result of the presidential election could affect everything from what airlines owe consumers for flight disruptions to how much it costs to build an aircraft in the United States.

    The Biden Department of Transportation, led by Secretary Pete Buttigieg, has taken a hard line on filling what it considers to be holes in air traveler protections. It has established or proposed new rules on issues including refunds for cancellations, family seating and service fee disclosures, a measure airlines have challenged in court.

    “Who’s in that DOT seat matters,” said Jonathan Kletzel, who heads the travel, transportation and logistics practice at PwC.

    The current Democratic administration has also fought industry consolidation, winning two antitrust lawsuits that blocked a partnership between American Airlines and JetBlue Airways in the Northeast and JetBlue’s now-scuttled plan to buy budget carrier Spirit Airlines.

    The previous Trump administration didn’t pursue those types of consumer protections. Industry members say that under Trump, they would expect a more favorable environment for mergers, though four airlines already control more than three-quarters of the U.S. market.

    On the aerospace side, Boeing and the hundreds of suppliers that support it are seeking stability more than anything else.

    Trump has said on the campaign trail that he supports additional tariffs of 10% or 20% and higher duties on goods from China. That could drive up the cost of producing aircraft and other components for aerospace companies, just as a labor and skills shortage after the pandemic drives up expenses.

    Tariffs could also challenge the industry if they spark retaliatory taxes or trade barriers from China and other countries, which are major buyers of aircraft from Boeing, a top U.S. exporter.

    Leslie Josephs

    Banks

    Big banks such as JPMorgan Chase faced an onslaught of new rules this year as Biden appointees pursued the most significant slate of regulations since the aftermath of the 2008 financial crisis.

    Those efforts threaten tens of billions of dollars in industry revenue by slashing fees that banks impose on credit cards and overdrafts and radically revising the capital and risk framework they operate in. The fate of all of those measures is at risk if Trump is elected.

    Trump is expected to nominate appointees to key financial regulators, including the CFPB, the Securities and Exchange Commission, the Office of the Comptroller of the Currency and the Federal Deposit Insurance Corporation, who could weaken or kill off entirely the myriad rules in play.

    “The Biden administration’s regulatory agenda across sectors has been very ambitious, especially in finance, and large swaths of it stand to be rolled back by Trump appointees if he wins,” said Tobin Marcus, head of U.S. policy at Wolfe Research.

    Bank CEOs and consultants say it would be a relief if aspects of the Biden era — an aggressive CFPB, regulators who discouraged most mergers and elongated times for deal approvals — were dialed back.

    “It certainly helps if the president is Republican, and the odds tilt more favorably for the industry if it’s a Republican sweep” in Congress, said the CEO of a bank with nearly $100 billion in assets who declined to be identified speaking about regulators.

    Still, some observers point out that Trump 2.0 might not be as friendly to the industry as his first time in office.

    Trump’s vice presidential pick, Sen. JD Vance, of Ohio, has often criticized Wall Street banks, and Trump last month began pushing an idea to cap credit card interest rates at 10%, a move that if enacted would have seismic implications for the industry.

    Bankers also say that Harris won’t necessarily cater to traditional Democratic Party ideas that have made life tougher for banks. Unless Democrats seize both chambers of Congress as well as the presidency, it may be difficult to get agency heads approved if they’re considered partisan picks, experts note.

    “I would not write off the vice president as someone who’s automatically going to go more progressive,” said Lindsey Johnson, head of the Consumer Bankers Association, a trade group for big U.S. retail banks.

    Hugh Son

    EVs

    Electric vehicles have become a polarizing issue between Democrats and Republicans, especially in swing states such as Michigan that rely on the auto industry. There could be major changes in regulations and incentives for EVs if Trump regains power, a fact that’s placed the industry in a temporary limbo.

    “Depending on the election in the U.S., we may have mandates; we may not,” Volkswagen Group of America CEO Pablo Di Si said Sept. 24 during an Automotive News conference. “Am I going to make any decisions on future investments right now? Obviously not. We’re waiting to see.”

    Republicans, led by Trump, have largely condemned EVs, claiming they are being forced upon consumers and that they will ruin the U.S. automotive industry. Trump has vowed to roll back or eliminate many vehicle emissions standards under the Environmental Protection Agency and incentives to promote production and adoption of the vehicles.

    If elected, he’s also expected to renew a battle with California and other states that set their own vehicle emissions standards.

    “In a Republican win … We see higher variance and more potential for change,” UBS analyst Joseph Spak said in a Sept. 18 investor note.

    In contrast, Democrats, including Harris, have historically supported EVs and incentives such as those under the Biden administration’s signature Inflation Reduction Act.

    Harris hasn’t been as vocal a supporter of EVs lately amid slower-than-expected consumer adoption of the vehicles and consumer pushback. She has said she does not support an EV mandate such as the Zero-Emission Vehicles Act of 2019, which she cosponsored during her time as a senator, that would have required automakers to sell only electrified vehicles by 2040. Still, auto industry executives and officials expect a Harris presidency would be largely a continuation, though not a copy, of the past four years of Biden’s EV policy.

    They expect some potential leniency on federal fuel economy regulations but minimal changes to the billions of dollars in incentives under the IRA.

    Mike Wayland

    Health care

    Both Harris and Trump have called for sweeping changes to the costly, complicated and entrenched U.S. health-care system of doctors, insurers, drug manufacturers and middlemen, which costs the nation more than $4 trillion a year.

    Despite spending more on health care than any other wealthy country, the U.S. has the lowest life expectancy at birth, the highest rate of people with multiple chronic diseases and the highest maternal and infant death rates, according to the Commonwealth Fund, an independent research group.

    Meanwhile, roughly half of American adults say it is difficult to afford health-care costs, which can drive some into debt or lead them to put off necessary care, according to a May poll conducted by health policy research organization KFF. 

    Both Harris and Trump have taken aim at the pharmaceutical industry and proposed efforts to lower prescription drug prices in the U.S., which are nearly three times higher than those seen in other countries. 

    But many of Trump’s efforts to lower costs have been temporary or not immediately effective, health policy experts said. Meanwhile, Harris, if elected, can build on existing efforts of the Biden administration to deliver savings to more patients, they said.

    Harris specifically plans to expand certain provisions of the IRA, part of which aims to lower health-care costs for seniors enrolled in Medicare. Harris cast the tie-breaking Senate vote to pass the law in 2022. 

    Her campaign says she plans to extend two provisions to all Americans, not just seniors: a $2,000 annual cap on out-of-pocket drug spending and a $35 limit on monthly insulin costs. 

    Harris also intends to accelerate and expand a provision allowing Medicare to directly negotiate drug prices with manufacturers for the first time. Drugmakers fiercely oppose those price talks, with some challenging the effort’s constitutionality in court. 

    Trump hasn’t publicly indicated what he intends to do about IRA provisions.

    Some of Trump’s prior efforts to lower drug prices “didn’t really come into fruition” during his presidency, according to Dr. Mariana Socal, a professor of health policy and management at the Johns Hopkins Bloomberg School of Public Health.

    For example, he planned to use executive action to have Medicare pay no more than the lowest price that select other developed countries pay for drugs, a proposal that was blocked by court action and later rescinded.

    Trump also led multiple efforts to repeal the Affordable Care Act, including its expansion of Medicaid to low-income adults. In a campaign video in April, Trump said he was not running on terminating the ACA and would rather make it “much, much better and far less money,” though he has provided no specific plans. 

    He reiterated his belief that the ACA was “lousy health care” during his Sept. 10 debate with Harris. But when asked, he did not offer a replacement proposal, saying only that he has “concepts of a plan.”

    Annika Kim Constantino

    Media

    Top of mind for media executives are mergers and the path, or lack thereof, to push them through.

    The media industry’s state of turmoil — shrinking audiences for traditional pay TV, the slowdown in advertising, and the rise of streaming and challenges in making it profitable — means its companies are often mentioned in discussions of acquisitions and consolidation.

    While a merger between Paramount Global and Skydance Media is set to move forward, with plans to close in the first half of 2025, many in media have said the Biden administration has broadly chilled deal-making.

    “We just need an opportunity for deregulation, so companies can consolidate and do what we need to do even better,” Warner Bros. Discovery CEO David Zaslav said in July at Allen & Co.’s annual Sun Valley conference.

    Media mogul John Malone recently told MoffettNathanson analysts that some deals are a nonstarter with this current Justice Department, including mergers between companies in the telecommunications and cable broadband space.

    Still, it’s unclear how the regulatory environment could or would change depending on which party is in office. Disney was allowed to acquire Fox Corp.’s assets when Trump was in office, but his administration sued to block AT&T’s merger with Time Warner. Meanwhile, under Biden’s presidency, a federal judge blocked the sale of Simon & Schuster to Penguin Random House, but Amazon’s acquisition of MGM was approved. 

    “My sense is, regardless of the election outcome, we are likely to remain in a similar tighter regulatory environment when looking at media industry dealmaking,” said Marc DeBevoise, CEO and board director of Brightcove, a streaming technology company.

    When major media, and even tech, assets change hands, it could also mean increased scrutiny on those in control and whether it creates bias on the platforms.

    “Overall, the government and FCC have always been most concerned with having a diversity of voices,” said Jonathan Miller, chief executive of Integrated Media, which specializes in digital media investment.
    “But then [Elon Musk’s purchase of Twitter] happened, and it’s clearly showing you can skew a platform to not just what the business needs, but to maybe your personal approach and whims,” he said.

    Since Musk acquired the social media platform in 2022, changing its name to X, he has implemented sweeping changes including cutting staff and giving “amnesty” to previously suspended accounts, including Trump’s, which had been suspended following the Jan. 6, 2021, Capitol insurrection. Musk has also faced widespread criticism from civil rights groups for the amplification of bigotry on the platform.

    Musk has publicly endorsed Trump, and was recently on the campaign trail with the former president. “As you can see, I’m not just MAGA, I’m Dark MAGA,” Musk said at a recent event. The billionaire has raised funds for Republican causes, and Trump has suggested Musk could eventually play a role in his administration if the Republican candidate were to be reelected.

    During his first term, Trump took a particularly hard stance against journalists, and pursued investigations into leaks from his administration to news organizations. Under Biden, the White House has been notably more amenable to journalists. 

    Also top of mind for media executives — and government officials — is TikTok.

    Lawmakers have argued that TikTok’s Chinese ownership could be a national security risk.

    Earlier this year, Biden signed legislation that gives Chinese parent ByteDance until January to find a new owner for the platform or face a U.S. ban. TikTok has said the bill, the Protecting Americans From Foreign Adversary Controlled Applications Act, which passed with bipartisan support, violates the First Amendment. The platform has sued the government to stop a potential ban.

    While Trump was in office, he attempted to ban TikTok through an executive order, but the effort failed. However, he has more recently switched to supporting the platform, arguing that without it there’s less competition against Meta’s Facebook and other social media.

    Lillian Rizzo and Alex Sherman

    Restaurants

    Both Trump and Harris have endorsed plans to end taxes on restaurant workers’ tips, although how they would do so is likely to differ.

    The food service and restaurant industry is the nation’s second-largest private-sector employer, with 15.5 million jobs, according to the National Restaurant Association. Roughly 2.2 million of those employees are tipped servers and bartenders, who could end up with more money in their pockets if their tips are no longer taxed.

    Trump’s campaign hasn’t given much detail on how his administration would eliminate taxes on tips, but tax experts have warned that it could turn into a loophole for high earners. Claims from the Trump campaign that the Republican candidate is pro-labor have clashed with his record of appointing leaders to the National Labor Relations Board who have rolled back worker protections.

    Meanwhile, Harris has said she’d only exempt workers who make $75,000 or less from paying income tax on their tips, but the money would still be subject to taxes toward Social Security and Medicare, the Washington Post previously reported.

    In keeping with the campaign’s more labor-friendly approach, Harris is also pledging to eliminate the tip credit: In 37 states, employers only have to pay tipped workers the minimum wage as long as that hourly wage and tips add up to the area’s pay floor. Since 1991, the federal pay floor for tipped wages has been stuck at $2.13.
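The tip-credit arithmetic described above can be sketched as a small calculation. This is an illustrative simplification, not tax or labor-law advice: it assumes the $7.25 federal minimum wage as the pay floor, the $2.13 federal tipped cash wage, and evenly averaged tips per hour, ignoring state-specific floors and overtime.

```python
TIPPED_CASH_WAGE = 2.13   # federal tipped minimum cash wage, unchanged since 1991
FEDERAL_MINIMUM = 7.25    # federal minimum wage, used here as the pay floor

def employer_hourly_obligation(avg_tips_per_hour: float) -> float:
    """Return the cash wage the employer must pay per hour under the tip credit.

    The employer may pay as little as $2.13/hour, but must top up the
    cash wage whenever cash wage + tips falls short of the pay floor.
    """
    shortfall = FEDERAL_MINIMUM - (TIPPED_CASH_WAGE + avg_tips_per_hour)
    return TIPPED_CASH_WAGE + max(0.0, shortfall)

# A server averaging $10/hour in tips costs the employer only the $2.13 cash wage,
# while one averaging $3/hour must be topped up so wage plus tips reaches $7.25.
```

This is why eliminating the tip credit matters to restaurants: the employer's obligation would jump to the full floor regardless of tips earned.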

    “In the short term, if [restaurants] have to pay higher wages to their waiters, they’re going to have to raise menu prices, which is going to lower demand,” said Michael Lynn, a tipping expert and Cornell University professor.

    Amelia Lucas

    Tech

    Whichever candidate comes out ahead in November will have to grapple with the rapidly evolving artificial intelligence sector.

    Generative AI is the biggest story in tech since the launch of OpenAI’s ChatGPT in late 2022. It presents a conundrum for regulators, because it allows consumers to easily create text and images from simple queries, creating privacy and safety concerns.

    Harris has said she and Biden “reject the false choice that suggests we can either protect the public or advance innovation.” Last year, the White House issued an executive order that led to the formation of the Commerce Department’s U.S. AI Safety Institute, which is evaluating AI models from OpenAI and Anthropic.

    Trump has committed to repealing the executive order.

    A second Trump administration might also attempt to challenge a Securities and Exchange Commission rule that requires companies to disclose cybersecurity incidents. The White House said in January that more transparency “will incentivize corporate executives to invest in cybersecurity and cyber risk management.”

    Trump’s running mate, Vance, co-sponsored a bill designed to end the rule. Andrew Garbarino, the House Republican who introduced an identical bill, has said the SEC rule increases cybersecurity risk and overlaps with existing law on incident reporting.

    Also at stake in the election is the fate of dealmaking for tech investors and executives.

    With Lina Khan helming the FTC, the top tech companies have been largely thwarted from making big acquisitions, though the Justice Department and European regulators have also created hurdles.

    Tech transaction volume peaked at $1.5 trillion in 2021, then plummeted to $544 billion last year and $465 billion in 2024 as of September, according to Dealogic.

    Many in the tech industry are critical of Khan and want her to be replaced should Harris win in November. Meanwhile, Vance, who worked in venture capital before entering politics, said as recently as February — before he was chosen as Trump’s running mate — that Khan was “doing a pretty good job.”

    Khan, whom Biden nominated in 2021, has challenged Amazon and Meta on antitrust grounds and has said the FTC will investigate AI investments at Alphabet, Amazon and Microsoft.

    Jordan Novet


  • Documents show OpenAI’s long journey from nonprofit to $157B valued company


    Back in 2016, a scientific research organization incorporated in Delaware and based in Mountain View, California, applied to be recognized as a tax-exempt charitable organization by the Internal Revenue Services.

    Called OpenAI, the nonprofit told the IRS its goal was to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

    Its assets included a $10 million loan from Sam Altman, one of its four founding directors and now its CEO.

    The application, which nonprofits are required to disclose and which OpenAI provided to The Associated Press, offers a view back in time to the origins of the artificial intelligence giant that has since grown to include a for-profit subsidiary recently valued at $157 billion by investors.

    It’s one measure of the vast distance OpenAI — and the technology that it researches and develops — has traveled in under a decade.

    In the application, OpenAI indicated it did not plan to enter into any joint ventures with for-profit organizations, which it has since done. It also said it did “not plan to play any role in developing commercial products or equipment,” and promised to make its research freely available to the public.

    A spokesperson for OpenAI, Liz Bourgeois, said in an email that the organization’s missions and goals have remained constant, though the way it’s carried out its mission has evolved alongside advances in technology. She also said the nonprofit does not carry out any commercial activities.

    Attorneys who specialize in advising nonprofits have been watching OpenAI’s meteoric rise and its changing structure closely. Some wonder if its size and the scale of its current ambitions have reached or exceeded the limits of how nonprofits and for-profits may interact. They also wonder about the extent to which its primary activities advance its charitable mission, which they must, and whether some may privately benefit from its work, which is prohibited.

    In general, nonprofit experts agree that OpenAI has gone to great lengths to arrange its corporate structure to comply with the rules that govern nonprofit organizations. OpenAI’s application to the IRS appears typical, said Andrew Steinberg, counsel at Venable LLP and a member of the American Bar Association’s nonprofit organizations committee.

    If the organization’s plans and structure changed, it would need to report that information on its annual tax returns, Steinberg said, which it has.

    “At the time that the IRS reviewed the application, there wasn’t information that that corporate structure that exists today and the investment structure that they pursued was what they had in mind,” he said. “And that’s okay because that may have developed later.”

    Here are some highlights from the application:

    At inception, OpenAI’s research plans look quaint in light of the race to develop AI that was in part set off by its release of ChatGPT in 2022.

    OpenAI told the IRS it planned to train an AI agent to solve a wide variety of games. It aimed to build a robot to perform housework and to develop a technology that could “follow complex instructions in natural language.”

    Today, its products, which include text-to-image generators and chatbots that can detect emotion and write code, far exceed those technical thresholds.

    The nonprofit OpenAI indicated on the application form that it had no plans to enter into joint ventures with for-profit entities.

    It also wrote, “OpenAI does not plan to play any role in developing commercial products or equipment. It intends to make its research freely available to the public on a nondiscriminatory basis.”

    OpenAI spokesperson Bourgeois said the organization believes the best way to accomplish its mission is to develop products that help people use AI to solve problems, including many products it offers for free. But they also believe developing commercial partnerships has helped further their mission, she said.

    OpenAI reported to the IRS in 2016 that regularly sharing its research “with the general public is central to the mission of OpenAI. OpenAI will regularly release its research results on its website and share software it has developed with the world under open source software licenses.”

    It also wrote it “intends to retain the ownership of any intellectual property it develops.”

    The value of that intellectual property and whether it belongs to the nonprofit or for-profit subsidiary could become important questions if OpenAI decides to alter its corporate structure, as Altman confirmed in September it was considering.

    ___

    The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

    ___

    Associated Press coverage of philanthropy and nonprofits receives support through the AP’s collaboration with The Conversation US, with funding from Lilly Endowment Inc. The AP is solely responsible for this content. For all of AP’s philanthropy coverage, visit https://apnews.com/hub/philanthropy.


  • Changing OpenAI’s nonprofit structure would raise questions about its future


    NEW YORK — The artificial intelligence maker OpenAI may face a costly and inconvenient reckoning with its nonprofit origins even as its valuation recently exploded to $157 billion.

    Nonprofit tax experts have been closely watching OpenAI, the maker of ChatGPT, since last November when its board ousted and rehired CEO Sam Altman. Now, some believe the company may have reached — or exceeded — the limits of its corporate structure, under which it is organized as a nonprofit whose mission is to develop artificial intelligence to benefit “all of humanity” but with for-profit subsidiaries under its control.

    Jill Horwitz, a professor in law and medicine at UCLA School of Law who has studied OpenAI, said that when two sides of a joint venture between a nonprofit and a for-profit come into conflict, the charitable purpose must always win out.

    “It’s the job of the board first, and then the regulators and the court, to ensure that the promise that was made to the public to pursue the charitable interest is kept,” she said.

    Altman recently confirmed that OpenAI is considering a corporate restructure but did not offer any specifics. A source told The Associated Press, however, that the company is looking at the possibility of turning OpenAI into a public benefit corporation. No final decision has been made by the board and the timing of the shift hasn’t been determined, the source said.

    In the event the nonprofit loses control of its subsidiaries, some experts think OpenAI may have to pay for the interests and assets that had belonged to the nonprofit. So far, most observers agree OpenAI has carefully orchestrated its relationships between its nonprofit and its various other corporate entities to try to avoid that.

However, they also see OpenAI as ripe for scrutiny from regulators, including the Internal Revenue Service and state attorneys general in Delaware, where it’s incorporated, and in California, where it operates.

    Bret Taylor, chair of the OpenAI nonprofit’s board, said in a statement that the board was focused on fulfilling its fiduciary obligation.

    “Any potential restructuring would ensure the nonprofit continues to exist and thrive, and receives full value for its current stake in the OpenAI for-profit with an enhanced ability to pursue its mission,” he said.

    Here are the main questions nonprofit experts have:

    Tax-exempt nonprofits sometimes decide to change their status. That requires what the IRS calls a conversion.

    Tax law requires money or assets donated to a tax-exempt organization to remain within the charitable sector. If the initial organization becomes a for-profit, generally, a conversion is needed where the for-profit pays the fair market value of the assets to another charitable organization.

    Even if the nonprofit OpenAI continues to exist in some way, some experts argue it would have to be paid fair market value for any assets that get transferred to its for-profit subsidiaries.

    In OpenAI’s case, there are many questions: What assets belong to its nonprofit? What is the value of those assets? Do they include intellectual property, patents, commercial products and licenses? Also, what is the value of giving up control of the for-profit subsidiaries?

    If OpenAI were to diminish the control that its nonprofit has over its other business entities, a regulator may require answers to those questions. Any change to OpenAI’s structure will require it to navigate the laws governing tax-exempt organizations.

    Andrew Steinberg, counsel at Venable LLP and a member of the American Bar Association’s nonprofit organizations committee, said it would be an “extraordinary” transaction to change the structure of corporate subsidiaries of a tax-exempt nonprofit.

    “It would be a complex, involved process with numerous different legal and regulatory considerations to work through,” he said. “But it’s not impossible.”

    To be granted tax-exempt status, OpenAI had to apply to the IRS and explain its charitable purpose. OpenAI provided The Associated Press a copy of that September 2016 application, which shows how significantly the organization’s plans for its technology and structure have changed.

OpenAI spokesperson Liz Bourgeois said in an email that the organization’s mission and goals have remained constant, though the way it carries out its mission has evolved alongside advances in technology.

    When OpenAI incorporated as a nonprofit in Delaware, it wrote that its purpose was, “to provide funding for research, development and distribution of technology related to artificial intelligence.” In tax filings, it’s also described its mission as building, “general-purpose artificial intelligence (AI) that safely benefits humanity, unconstrained by a need to generate financial return.”

    Steinberg said there is no problem with the organization’s plans changing as long as it reported that information on its annual tax returns, which it has.

    But some observers, including Elon Musk, who was a board member and early supporter of OpenAI and has sued the organization, are skeptical that it has been faithful to its mission.

The “godfather of AI,” Geoffrey Hinton, who was co-awarded the Nobel Prize in physics on Tuesday, has also expressed concern about OpenAI’s evolution, saying he is proud that one of his former students, Ilya Sutskever, who went on to co-found the organization, helped oust Altman as CEO before Altman was brought back.

    “OpenAI was set up with a big emphasis on safety. Its primary objective was to develop artificial general intelligence and ensure that it was safe,” Hinton said, adding that “over time, it turned out that Sam Altman was much less concerned with safety than with profits. And I think that’s unfortunate.”

    Sutskever, who led a team focused on AI safety at OpenAI, left the organization in May and has started his own AI company. OpenAI for its part says it is proud of its safety record.

    Ultimately, this question returns to the board of OpenAI’s nonprofit, and the extent to which it is acting to further the organization’s charitable mission.

    Steinberg said that any regulators looking at a nonprofit board’s decision will be most interested in the process through which it arrived at that decision, not necessarily whether it reached the best decision.

    He said regulators, “will often defer to the business judgment of members of the board as long as the transactions don’t involve conflict of interests for any of the board members. They don’t stand to gain financially from the transaction.”

    Whether any board members were to benefit financially from any change in OpenAI’s structure could also be of interest to nonprofit regulators.

In response to questions about whether Altman might be given equity in the for-profit subsidiary in any potential restructuring, OpenAI board chair Taylor said in a statement, “The board has had discussions about whether it would be beneficial to the company and our mission to have Sam be compensated with equity, but no specific figures have been discussed nor have any decisions been made.”



  • Ryan Serhant: AI will make real estate agents more personable in home buying and selling

Real estate agent and reality television star Ryan Serhant. (Newspix)

    Real estate has been historically slow to modernize, but AI is changing that. The integration of artificial intelligence is transforming how buyers and sellers interact with agents, fundamentally altering competitive dynamics in the industry. 

    With AI reshaping daily operations of a real estate agent’s business by automating tasks — from generating property listings to conducting neighborhood analyses — the agent’s focus in day-to-day activities will shift. 

    Ryan Serhant, CEO of Serhant and reality TV star of “Owning Manhattan,” says AI is already making real estate less about access to information and more about the agent building deeper relationships. He predicts a mindset shift is on its way as agents leverage AI and at the same time are forced to find new ways to differentiate themselves in an increasingly competitive market. “If we are all using AI and have the same level of expertise, who wins? It’s the game of attention,” said Serhant at the CNBC Evolve AI Opportunity Summit in New York City this past week. 

Buying a home is the single largest investment most Americans make in their lives, which makes real estate a business where a greater personal touch from the agent can drive greater success. Serhant says the big advantage of AI is that it frees up more of the agent’s time to provide personalized attention to clients.

    “The product in sales is no longer just the skill set,” Serhant said. “It is the attention to the skill set.”

    His own company, Serhant, has developed a service called “Simple” for sales automation to handle daily tasks in customer relationship management, which typically consumes over 60% of agents’ time. 

AI tools are being used to streamline lead generation, automate marketing campaigns, and provide predictive analytics that surface opportunities, but they are not replacing the agent’s critical role in delivering top performance. Serhant says AI won’t virtualize relationships; for the agents who embrace the AI revolution, a move he calls necessary, it will strengthen them.

    Making access to real-time market data and sales insights less onerous may allow agents from small boutique firms to compete on a more equal footing with larger real estate corporations. “There is a trust factor in sales. … It isn’t about who is the largest, but who is the most empowered,” Serhant said. 

That also stands to benefit homebuyers and sellers, Serhant said, who will have a wider selection of suitable agents offering more personalized service and a greater focus on the client.

    The real estate industry is still in the initial stages of adopting AI and understanding remains low among real estate professionals, but the interest is there. Generative AI was ranked among the top three technologies expected to have the greatest impact on real estate over the next three years by investors, developers, and corporate occupiers, according to JLL Technologies’ 2023 Global Real Estate Technology Survey. But the survey also finds that real estate professionals have very low understanding of AI compared to other technologies.

    According to Serhant, agents who understand how AI can empower their business are going to have huge opportunities over the next 20 years to take significant market share. 

No tech innovation comes without risks, and wire fraud remains a major challenge for the real estate industry, one that AI stands to exacerbate. The FBI reported a large year-over-year increase in wire fraud cybercrime losses in 2023, driven significantly by real estate transactions, and improved artificial intelligence technology is making those scams easier to pull off.

    Fraud can’t be ignored, said Serhant, but he believes real estate will adapt to the risks inherent in new technology in the same way the business has in the past, such as with digital listings. “With every advancement in technology, greater rules get put into place that can help stop those fakes,” he said. 
