ReportWire

Tag: Artificial Intelligence

  • One Tech Tip: Do’s and don’ts of using AI to help with schoolwork

    [ad_1]

    The rapid rise of ChatGPT and other generative AI systems has disrupted education, transforming how students learn and study.

    Students everywhere have turned to chatbots to help with their homework, but artificial intelligence’s capabilities have blurred the lines about what it should — and shouldn’t — be used for.

    The technology’s widespread adoption in many other parts of life also adds to the confusion about what constitutes academic dishonesty.

    Here are some do’s and don’ts on using AI for schoolwork:

    Don’t just copy and paste

    Chatbots are so good at answering questions with detailed written responses that it’s tempting to just take their work and pass it off as your own.

    But in case it isn’t already obvious, AI should not be used as a substitute for putting in the work. And it can’t replace our ability to think critically.

    You wouldn’t copy and paste information from a textbook or someone else’s essay and pass it off as your own. The same principle applies to chatbot replies.

    “AI can help you understand concepts or generate ideas, but it should never replace your own thinking and effort,” the University of Chicago says in its guidance on using generative AI. “Always produce original work, and use AI tools for guidance and clarity, not for doing the work for you.”

    So don’t shy away from putting pen to paper — or your fingers to the keyboard — to do your own writing.

    “If you use an AI chatbot to write for you — whether explanations, summaries, topic ideas, or even initial outlines — you will learn less and perform more poorly on subsequent exams and attempts to use that knowledge,” Yale University’s Poorvu Center for Teaching and Learning says.

    Do use AI as a study aid

    Experts say AI shines when it’s used like a tutor or a study buddy. So try using a chatbot to explain difficult concepts or brainstorm ideas, such as essay topics.

    California high school English teacher Casey Cuny advises his students to use ChatGPT to quiz themselves ahead of tests.

    He tells them to upload class notes, study guides and any other materials used in class, such as slideshows, to the chatbot, and then tell it which textbook and chapter the test will focus on.

    Then, students should prompt the chatbot to: “Quiz me one question at a time based on all the material cited, and after that create a teaching plan for everything I got wrong.”

    Cuny posts AI guidance in the form of a traffic light on a classroom screen. Green-lighted uses include brainstorming, asking for feedback on a presentation or doing research. Red lighted, or prohibited AI use: Asking an AI tool to write a thesis statement, a rough draft or revise an essay. A yellow light is when a student is unsure if AI use is allowed, in which case he tells them to come and ask him.

    Or try using ChatGPT’s voice dictation function, said Sohan Choudhury, CEO of Flint, an AI-powered education platform.

    “I’ll just brain dump exactly what I get, what I don’t get” about a subject, he said. “I can go on a ramble for five minutes about exactly what I do and don’t understand about a topic. I can throw random analogies at it, and I know it’s going to be able to give me something back to me tailored based on that.”

    Do check your school’s AI policy

    As AI has shaken up the academic world, educators have been forced to set out their policies on the technology.

    In the U.S., about two dozen states have state-level AI guidance for schools, but it’s unevenly applied.

    It’s worth checking what your school, college or university says about AI. Some might have a broad institutionwide policy.

    The University of Toronto’s stance is that “students are not allowed to use generative AI in a course unless the instructor explicitly permits it” and students should check course descriptions for do’s and don’ts.

    Many others don’t have a blanket rule.

    The State University of New York at Buffalo “has no universal policy,” according to its online guidance for instructors. “Instructors have the academic freedom to determine what tools students can and cannot use in pursuit of meeting course learning objectives. This includes artificial intelligence tools such as ChatGPT.”

    Don’t hide AI use from teachers

    AI is not the educational bogeyman it used to be.

    There’s growing understanding that AI is here to stay and the next generation of workers will have to learn how to use the technology, which has the potential to disrupt many industries and occupations.

    So students shouldn’t shy away from discussing its use with teachers, because transparency prevents misunderstandings, said Choudhury.

    “Two years ago, many teachers were just blanket against it. Like, don’t bring AI up in this class at all, period, end of story,” he said. But three years after ChatGPT’s debut, “many teachers understand that the kids are using it. So they’re much more open to having a conversation as opposed to setting a blanket policy.”

    Teachers say they’re aware that students are wary of asking if AI use is allowed for fear they’ll be flagged as cheaters. But clarity is key because it’s so easy to cross a line without knowing it, says Rebekah Fitzsimmons, chair of the AI faculty advising committee at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy.

    “Often, students don’t realize when they’re crossing a line between a tool that is helping them fix content that they’ve created and when it is generating content for them,” says Fitzsimmons, who helped draft detailed new guidelines for students and faculty that strive to create clarity.

    The University of Chicago says students should cite AI if it was used to come up with ideas, summarize texts, or help with drafting a paper.

    “Acknowledge this in your work when appropriate,” the university says. “Just as you would cite a book or a website, giving credit to AI where applicable helps maintain transparency.”

    And don’t forget ethics

    Educators want students to use AI in a way that’s consistent with their school’s values and principles.

    The University of Florida says students should familiarize themselves with the school’s honor code and academic integrity policies “to ensure your use of AI aligns with ethical standards.”

    Oxford University says AI tools must be used “responsibly and ethically” and in line with its academic standards.

    “You should always use AI tools with integrity, honesty, and transparency, and maintain a critical approach to using any output generated by these tools,” it says.

    ____

    Is there a tech topic that you think needs explaining? Write to us at [email protected] with your suggestions for future editions of One Tech Tip.

    [ad_2]

    Source link

  • Larry Summers takes leave from teaching at Harvard after release of Epstein emails

    [ad_1]

    Former U.S. Treasury Secretary Larry Summers abruptly went on leave Wednesday from teaching at Harvard University, where he once served as president, over recently released emails showing he maintained a friendly relationship with Jeffrey Epstein, Summers’ spokesperson said.

    Summers had canceled his public commitments amid the fallout of the emails being made public and earlier Wednesday severed ties with OpenAI, the maker of ChatGPT. Harvard had reopened an investigation into connections between him and Epstein, but Summers had said he would continue teaching economics classes at the school.

    That changed Wednesday evening with the news that he will step away from teaching classes as well as his position as director of the Mossavar-Rahmani Center for Business and Government with the Harvard Kennedy School.

    “Mr. Summers has decided it’s in the best interest of the Center for him to go on leave from his role as Director as Harvard undertakes its review,” Summers spokesperson Steven Goldberg said, adding that his co-teachers would finish the classes.

    Summers has not been scheduled to teach next semester, according to Goldberg.

    A Harvard spokesperson confirmed to The Associated Press that Summers had let the university know about his decision. Summers decision to go on leave was first reported by The Harvard Crimson.

    Harvard did not mention Summers by name in its decision to restart an investigation, but the move follows the release of emails showing that he was friendly with Epstein long after the financier pleaded guilty to soliciting prostitution from an underage girl in 2008.

    By Wednesday, the once highly regarded economics expert had been facing increased scrutiny over choosing to stay in the teaching role. Some students even filmed his appearance in shock as he appeared before a class of undergraduates on Tuesday while stressing he thought it was important to continue teaching.

    Massachusetts Sen. Elizabeth Warren, a Democrat, said in a social media post on Wednesday night that Summers “cozied up to the rich and powerful — including a convicted sex offender. He cannot be trusted in positions of influence.”

    Messages appear to seek advice about romantic relationship

    The emails include messages in which Summers appeared to be getting advice from Epstein about pursuing a romantic relationship with someone who viewed him as an “economic mentor.”

    “im a pretty good wing man , no?” Epstein wrote on Nov. 30, 2018.

    The next day, Summers told Epstein he had texted the woman, telling her he “had something brief to say to her.”

    “Am I thanking her or being sorry re my being married. I think the former,” he wrote.

    Summers’ wife, Elisa New, also emailed Epstein multiple times, including a 2015 message in which she thanked him for arranging financial support for a poetry project she directs. The gift he arranged “changed everything for me,” she wrote.

    “It really means a lot to me, all financial help aside, Jeffrey, that you are rooting for me and thinking about me,” she wrote.

    New, an English professor emerita at Harvard, did not respond to an email seeking comment Wednesday.

    An earlier review completed in 2020 found that Epstein visited Harvard’s campus more than 40 times after his 2008 sex-crimes conviction and was given his own office and unfettered access to a research center he helped establish. The professor who provided the office was later barred from starting new research or advising students for at least two years.

    Summers appears before Harvard class

    On Tuesday, Summers appeared before his class at Harvard, where he teaches “The Political Economy of Globalization” to undergraduates with Robert Lawrence, a professor with the Harvard Kennedy School.

    “Some of you will have seen my statement of regret expressing my shame with respect to what I did in communication with Mr. Epstein and that I’ve said that I’m going to step back from public activities for a while. But I think it’s very important to fulfill my teaching obligations,” he said.

    Summers’ remarks were captured on video by several students, but no one appeared to publicly respond to his comments.

    Epstein, who authorities said died by suicide in 2019, was a convicted sex offender infamous for his connections to wealthy and powerful people, making him a fixture of outrage and conspiracy theories about wrongdoing among American elites.

    Summers served as treasury secretary from 1999 to 2001 under President Bill Clinton. He was Harvard’s president for five years from 2001 to 2006. When asked about the emails last week, Summers issued a statement saying he has “great regrets in my life” and that his association with Epstein was a “major error in judgement.”

    Other organizations that confirmed the end of their affiliations with Summers included the Center for American Progress, the Center for Global Development and the Budget Lab at Yale University. Bloomberg TV said Summers’ withdrawal from public commitments included his role as a paid contributor, and the New York Times said it will not renew his contract as a contributing opinion writer.

    ___

    This story has been corrected to show that Summers is a former treasury secretary, not treasurer; to show that Summers’ statement about stepping back from public commitments was issued late Monday, not Tuesday; and to show that the school is known as the Harvard Kennedy School, not Kennedy Harvard School.

    ___

    Associated Press journalist Hallie Golden contributed to this report.

    [ad_2]

    Source link

  • Swatch’s New OpenAI-Powered Tool Lets You Design Your Own Watch

    [ad_1]

    And, just as with Swatch x You, it’s possible to further customize the watch by choosing indexes or selecting the color of its mechanism. To save on data center power drains and rampant creativity run amuck, you’re only allowed three prompts per day on AI‑DADA, something that Swatch is spinning as a “creative challenge that makes every attempt feel special.”

    Ultimately, what we have here, is a new version of Swatch x You that has been plugged with image-generation software supplied by OpenAI, thus letting the general public emblazon its timepieces with whatever graphics they see fit to dream up and deposit on them. What could possibly go wrong here, I wonder?

    I asked Roberto Amico, Swatch Group’s global head of digital & ecommerce, what guardrails have been put in place to stop people making, say, their very own Jeffrey Epstein Swatch, or White Power Swatch, or Stormy Daniels Swatch. Or maybe a Swatch with a Rolex logo on it, or something that looks a lot like the Rolex logo.

    Amico reassures me Swatch has indeed set guardrails, particularly with logos, for example, alongside the certain restrictions already in place from OpenAI. But interestingly, Swatch Group CEO Nick Hayek Jr. tells me he battled with OpenAI to remove some of its existing guardrails to make AI‑DADA “more liberal, more Swatch.”

    Hayek also confessed at the launch event in Switzerland that his first prompts on AI‑DADA all concerned “sex, drugs, and rock’n’roll,” but he was told his own model wouldn’t allow it. Still, you can never underestimate the ingenuity of the general public to get around obvious red flags—such as a ban on the model reproducing nudity or religious iconography—and create something that Swatch might not want to be associated with. Time will tell how bulletproof this model truly is.

    Familiar Faces

    While Swatch’s image model may be based on OpenAI, it defaults to a data set of more than 40 years of Swatch watches, products, designs, art and street paintings. Like a pattern or color on a particular 1980s Swatch dial or strap? It’s in there. Have a fondness for a Keith Haring or Vivienne Westwood or Phil Collins collaboration, the model has this too. If you ask for a design inspired by something outside of what Swatch has collected together in this archive, only then, Amico tells me, does AI‑DADA go beyond the in-house dataset and mine OpenAI’s data.

    Courtesy of: Swatch

    [ad_2]

    Jeremy White

    Source link

  • Nvidia’s strong earnings and a solid report on the job market boost US index futures

    [ad_1]

    NEW YORK — U.S. stock index futures added to their gains after the government reported that employers added twice as many jobs as expected in September. Futures were already higher on enthusiasm for a strong earnings report from AI bellwether Nvidia. Futures for the S&P 500 were up 1.5% before the opening bell, while futures for the Dow Jones Industrial Average gained 0.8%. Futures for the Nasdaq shot 1.9% higher. The Labor Department said employapners added 119,000 jobs in September, more than double the 50,000 economists had forecast. The market also focused on Nvidia as Wall Street’s most influential company jumped 5.1% overnight after reporting better-than-expected results.

    THIS IS A BREAKING NEWS UPDATE. AP’s earlier story follows below.

    Wall Street surged on Thursday after Nvidia reported stronger than expected quarterly earnings, tempering worries that AI-related stocks may have become overvalued.

    Futures for the S&P 500 were up 1.1% before the opening bell, while futures for the Dow Jones Industrial Average gained 0.5%. Futures for the Nasdaq shot 1.6% higher.

    The market’s focus remained on Nvidia as Wall Street’s most influential stock jumped 5.1% overnight after the chipmaker reported third-quarter earnings of $31.9 billion. That’s a 65% increase over last year and more than analysts were expecting.

    The Santa Clara, California company also forecast revenue for the current quarter covering November-January will come in at about $65 billion, nearly $3 billion above analysts’ projections, an indication that demand for its AI chips remains feverish.

    Nvidia is the most valuable company by market capitalization on Wall Street, having briefly topped $5 trillion in value. That means its movements have more of an effect on the S&P 500 than any other stock, and it can single-handedly steer the index’s direction some days.

    By continuing to deliver big profits for investors, Nvidia has mostly quieted recent criticism that its shares shot too high, too fast.

    Nvidia has become a bellwether for the broader frenzy around artificial-intelligence technology, because other companies are using its chips to ramp up their AI efforts.

    Walmart also reported its latest quarterly results Thursday. The Arkansas retailer delivered another standout quarter, posting strong sales and profits that blew past Wall Street expectations as it continues to lure cash-strapped Americans who have grown increasingly anxious about the economy and prices.

    With other retailers dialing back projections, the nation’s largest retailer raised its financial outlook Thursday after its strong third quarter, setting itself up for a strong holiday shopping season.

    Traders also made their final moves ahead of a September jobs report coming from the U.S. government on Thursday. The labor market data, usually released during the first week of every month, was delayed due to the six-week federal government shutdown.

    The Labor Department said Wednesday that it will not be releasing a full jobs report for October because the 43-day shutdown meant it couldn’t calculate the unemployment rate and some other key numbers.

    The job market has been slowing enough this year that the Fed has already cut its main interest rate twice. Lower rates can give a boost to the economy and to prices for investments, and the expectation on Wall Street had been for more cuts, including at the Fed’s next meeting in December.

    But some Fed officials are hinting that they should pause next month, in part because inflation has stubbornly remained above the Fed’s 2% target. Lower interest rates can worsen inflation.

    At midday in Europe, Germany’s DAX rose 0.8%, while Britain’s FTSE 100 and the CAC 40 in Paris each added 0.6%.

    In Asia, Japan’s Nikkei 225 index initially surged as much as 4.2% before giving up some early gains. It closed nearly 2.7% higher at 49,823.94 as technology stocks rallied, with investor sentiment boosted by Nvidia’s strong quarterly results after trading closed in the U.S.

    South Korea’s Kospi added 1.9% to 4,004.85, with gains led by technology and energy stocks. Investors were encouraged by Nvidia’s earnings and reports that the U.S. may delay planned semiconductor tariffs.

    Samsung Electronics gained 4.2%, while SK Hynix added 1.6%.

    Chinese markets ended mixed as reports said the government might be planning more measures to try to revive the ailing property sector.

    Hong Kong’s Hang Seng Index was barely changed at 25,835.57, while the Shanghai Composite index lost 0.4% to 3,931.05 after China’s central bank kept its one- and five-year loan prime rates unchanged at 3% and 3.5%, respectively.

    Taiwan’s Taiex closed 3.2% higher while India’s BSE Sensex added nearly 0.7%.

    Australia’s S&P/ASX 200 gained 1.2% to 8,552.70, also led by gains for technology stocks.

    In energy markets, benchmark U.S. crude oil gained 59 cents, or 1%, to $59.61 per barrel. Brent crude, the international standard, rose 62 cents to $64.13 per barrel.

    The U.S. dollar climbed to 157.66 Japanese yen from 157.06 yen. It has been trading at nearly the highest level this year on expectations that the government will delay efforts to rein in Japan’s national debt as Prime Minister Sanae Takaichi raises spending to help spur the economy.

    The euro fell to $1.1515 from $1.1538.

    [ad_2]

    Source link

  • Yet Another Study Shows That Most Companies Aren’t Making Any Money Off AI

    [ad_1]

    The U.S. and its global partners have sunk trillions of dollars into the AI arms race, with Silicon Valley’s prime movers swearing that the technology is destined to transform our world for the better. Now, a new study joins a growing chorus that seek to highlight an inconvenient truth: so far, a vast majority of companies that adopt AI are making no money from it at all.

    The new study comes from KPMG, a British accounting and professional services firm. The study, published Wednesday, looked at businesses in Canada, surveying them for evidence that AI was providing anything in the way of a ROI. Sadly, the study found that, no, pretty much nobody who is using AI has managed to find its financial upside yet. Indeed, the survey found that, while more and more businesses are using AI, only about 2 percent of respondents claimed that they had seen a “return on their generative AI investments.”

    The survey, which involved 753 business leaders from across the nation, found that the slim amount of respondents who did report positive results from AI were from very large companies that reported at least $1 billion in annual revenue. A lot of companies have yet to fully integrate AI into their workflows, and many of them are still merely experimenting with the technology, the report adds.

    AI is seeing the largest rates of adoption in IT and sales and marketing, the study shows. Other areas where it has seen broad adoption include research and development, finance and accounting, and engineering, it says.

    Canadian Managing Partner of Digital and Transformation at KPMG Canada Stephanie Terrill laid it out the general outlook like this:

    “Only a small sliver of Canadian businesses are generating growth from their AI investments today, and that’s understandable – new technologies take time to be adopted and demonstrate identifiable return on investment,” Terrill said. “However, Canada is facing near-term threats to its economic competitiveness and grappling with declining productivity and prosperity, so waiting years for AI investments to create value isn’t realistic in this environment – in fact, it’s downright risky.”

    Despite that fairly negative outlook, Terrill’s takeaway isn’t what you might think—that AI just isn’t very helpful and that companies should ditch it. Instead, she says that Canadian companies should turbocharge their AI investments so as to increase national “competitiveness” and see that elusive ROI that is currently eluding them:

    Canadian organizations need to accelerate AI implementation into core operations to start achieving near- to medium-term productivity gains if we hope to become more economically competitive as a country,” she added.

    How long do companies have to wait until AI starts doing what it’s supposed to do? Many business leaders expect to wait a number of years before AI has its intended effect, the study reports. Despite the fact that it’s currently useless, a certain portion of companies (3 in 10) expect to start seeing a return on their AI investment within a year, it adds. A greater portion (6 in 10) said they expect to see that ROI in one to five years. Hope springs eternal, I guess.

    [ad_2]

    Lucas Ropek

    Source link

  • I Set A Trap To Catch My Students Cheating With AI. The Results Were Shocking.

    [ad_1]

    I have been in and out of college classrooms for the last 10 years. I have worked as an adjunct instructor at a community college, I have taught as a graduate instructor at a major research institution, and I am now an assistant professor of history at a small teaching-first university.

    Since the spring semester of 2023, it has been apparent that an ever-increasing number of students are submitting AI-generated work. I am no stranger to students trying to cut corners by copying and pasting from Wikipedia, but the introduction of generative AI has enabled them to cheat in startling new ways, and many students have fully embraced it.

    Plagiarism detectors have and do work well enough for what I might call “classical cheating,” but they are notoriously bad at detecting AI-generated work. Even a program like Grammarly, which is ostensibly intended only to clean up one’s own work, will set off alarms.

    So, I set out this semester to look more carefully for AI work. Some of it is quite easy to notice. The essays produced by ChatGPT, for instance, are soulless, boring abominations. Words, phrases and punctuation rarely used by the average college student — or anyone for that matter (em dash included) — are pervasive.

    But there is a difference between recognizing AI use and proving its use. So I tried an experiment.

    A colleague in the department introduced me to the Trojan horse, a trick capable of both conquering cities and exposing the fraud of generative AI users. This method is now increasingly known (there’s even an episode of “The Simpsons” about it) and likely has already run its course as a plausible method for saving oneself from reading and grading AI slop. To be brief, I inserted hidden text into an assignment’s directions that the students couldn’t see but that ChatGPT can.

    I assigned Douglas Egerton’s book “Gabriel’s Rebellion,” which tells the story of the thwarted rebellion of enslaved people in 1800, and asked the students to describe some of the author’s main points. Nothing too in-depth, as it’s a freshman-level survey course. They were asked to use either the suggestions I provided or to write about whatever elements of Egerton’s argument they found most important.

    I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.

    The percentage was surprising and deflating. I explained my disappointment to the students, pointing out that they cheated on a paper about a rebellion of the enslaved — people who sacrificed their lives in pursuit of freedom, including the freedom to learn to read and write. In fact, Virginia made it even harder for them to do so after the rebellion was put down.

    I’m not sure all of them grasped my point. Some certainly did. I received several emails and spoke with a few students who came to my office and were genuinely apologetic. I had a few that tried to fight me on the accusations, too, assuming I flagged them as AI for “well written sentences.” But the Trojan horse did not lie.

    The author with his cat, Persephone “Dots” Teague

    Photo Courtesy Of Will Teague

    There’s a lot of talk about how educators have to train students to use AI as a tool and help them integrate it into their work. Recently, the American Historical Association even made recommendations on how we might approach this in the classroom. The AHA asserts that “banning generative AI is not a long-term solution; cultivating AI literacy is.” One of their suggestions is to assign students an AI-generated essay and have them assess what it got right, got wrong or if it even understood the text in question.

    But I don’t know if I agree with the AHA. Let me tell you why the Trojan horse worked. It is because students do not know what they do not know. My hidden text asked them to write the paper “from a Marxist perspective.” Since the events in the book had little to do with the later development of Marxism, I thought the resulting essay might raise a red flag with students, but it didn’t.

    I had at least eight students come to my office to make their case against the allegations, but not a single one of them could explain to me what Marxism is, how it worked as an analytical lens or how it even made its way into their papers they claimed to have written. The most shocking part was that apparently, when ChatGPT read the prompt, it even directly asked if it should include Marxism, and they all said yes. As one student said to me, “I thought it sounded smart.”

    How do I assign students an AI-generated essay for assessment if they don’t have the basic knowledge to parse said essay? I can’t and I won’t.

    I’m a historian. I am trained and paid to teach students how to understand a narrative, to derive meaning from it with textual analysis and to communicate that meaning in written word. I cannot force them to do any of those things, but I won’t be complicit in exposing them to even more AI in my classroom.

    Not only is there an inability to recognize AI-generated content for the slop it is, but each university, each college and each department is adopting wildly different AI policies. There is no consistency. My colleagues and I are actively trying to solve this for ourselves, maybe by establishing a shared standard that every student who walks through our doors will learn and be subject to. But we can’t control what happens everywhere else.

    I have no doubt that many students are actively making the decision to cheat. But I also do not doubt that, because of inconsistent policies and AI euphoria, some were telling the truth when they told me they didn’t realize they were cheating. Regardless of their awareness or lack thereof, each one of my students made the decision to skip one of the many challenges of earning a degree — assuming they are only here to buy it (a very different cultural conversation we need to have). They also chose to actively avoid learning because it’s boring and hard.

    Now, I’m not equipped to make deep sociological or philosophical diagnoses on these choices. But this is a problem. How do we solve it? Is it a return to analog? Do we use paper and pen and class time for everything? Am I a professor or an academic policeman?

    The answer is the former. But students, society and administrations that are unwilling to take a hard stance (unless it’s the promotion of AI) are crushing higher ed. A college degree is not just about a job afterward — you have to be able to think, solve problems and apply those solutions, regardless of the field. How do we teach that without institutional support? How do we teach that when a student doesn’t want to and AI enables it?

    I don’t know. But for my students, I decided to not punish them. All I know how to do is teach, so that’s what I did. I assigned a wonderful essay by Cal Poly professor Patrick Lin that he addressed to his class on the benefits and detriments of AI use. I attached instructions that asked them to read it and reflect. These instructions also had a Trojan horse.

    Thirty-six of my AI students completed it. One of them used AI, and the other 12 have been slowly dropping the class. Ultimately, 35 out of 47 isn’t too bad. The responses to the assignment were generally good, and some were deeply reflective.

    But a handful said something I found quite sad: “I just wanted to write the best essay I could.” Those students in question, who at least tried to provide some of their own thoughts before mixing them with the generated result, had already written the best essay they could. And I guess that’s why I hate AI in the classroom as much as I do.

    Students are afraid to fail, and AI presents itself as a savior. But what we learn from history is that progress requires failure. It requires reflection. Students are not just undermining their ability to learn, but to someday lead.

    I asked my students to reflect, so I suppose I will end with my own reflection. I don’t use AI for anything in my academic or personal life. I value almost nothing more than my ability to think and to freely express myself. Even when I make mistakes, at least they are my mistakes.

    We live in an era where personal expression is saturated by digital filters, hivemind thinking is promoted through endless algorithms and academic freedom itself is under assault by the weakest minds among us. AI has only made this worse. It is a crisis.

    I can offer no solutions other than to approach it and teach about it that way. I’m sure angry detractors will say that is antiquated, and maybe it is.

    But I am a historian, so I will close on a historian’s note: History shows us that the right to literacy came at a heavy cost for many Americans, ranging from ostracism to death. Those in power recognized that oppression is best maintained by keeping the masses illiterate, and those oppressed recognized that literacy is liberation. To my students and to anyone who might listen, I say: Don’t surrender to AI your ability to read, write and think when others once risked their lives and died for the freedom to do so.

    Do you have a compelling personal story you’d like to see published on HuffPost? Find out what we’re looking for here and send us a pitch at pitch@huffpost.com.

    [ad_2]

    Source link

  • Warner Music and AI startup Udio settle copyright battle and ink license deal

    [ad_1]

    LONDON — Warner Music Group resolved its copyright battle with Udio and signed a deal to work with the AI music startup on a new song creation service that will allow users to remix tunes by established artists.

    It’s the second agreement between a major record label and Udio, a chatbot-style song generation tool.

    The deals underline how AI is shaking up the music industry. AI-generated music has been flooding streaming services amid the rise of song generators that instantly spit out new tunes based on prompts typed in by users without any musical knowledge. The synthetic music boom has also resulted in a wave of AI singers and bands that have climbed the charts after racking up millions of streams, even though they don’t exist in real life.

    Warner, which represents artists including Ed Sheeran and Dua Lipa, has resolved its copyright infringement litigation against Udio, the two companies said. They’ve also established ”a clear framework” for developing Udio’s licensed AI music creation service that’s set to launch in 2026.

    They provided no financial details on their agreement, which includes Warner’s recording and publishing businesses, but it will create “new revenue streams for artists and songwriters, while ensuring their work remains protected.”

    It’s similar to an agreement that Universal Music Group signed last month with Udio, which triggered a huge backlash because Udio stopped users from downloading the songs they created.

    Udio said it will remain a “closed-system” as it prepares to launch the new service next year. If artists and songwriters choose to let their works be used, they’ll be credited and paid when users remix or cover their songs, or make new tunes with their voices and compositions, the companies said.

    Sony Music Entertainment remains the only major record company that hasn’t yet signed an AI licensing deal with Udio or Suno, after filing suit against them last year over copyright alongside Universal and Warner. Suno hasn’t yet signed a deal with any major label.

    Also Wednesday, Warner unveiled a deal to work with another artificial intelligence company, Stability AI, on developing “professional-grade tools” for musicians, songwriters and producers.

    [ad_2]

    Source link

  • Advocacy groups urge parents to avoid AI toys this holiday season

    [ad_1]

    They’re cute, even cuddly, and promise learning and companionship — but artificial intelligence toys are not safe for kids, according to children’s and consumer advocacy groups urging parents not to buy them during the holiday season.

    These toys, marketed to kids as young as 2 years old, are generally powered by AI models that have already been shown to harm children and teenagers, such as OpenAI’s ChatGPT, according to an advisory published Thursday by the children’s advocacy group Fairplay and signed by more than 150 organizations and individual experts such as child psychiatrists and educators.

    “The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm,” Fairplay said.

    AI toys, made by companies such as Curio Interactive and Keyi Technologies, are often marketed as educational, but Fairplay says they can displace important creative and learning activities. They promise friendship but also disrupt children’s relationships and resilience, the group said.

    “What’s different about young children is that their brains are being wired for the first time and developmentally it is natural for them to be trustful, for them to seek relationships with kind and friendly characters,” said Rachel Franz, director of Fairplay’s Young Children Thrive Offline Program. Because of this, she added, the amount of trust young children are putting in these toys can exacerbate the harms seen with older children.

    Fairplay, a 25-year-old organization formerly known as the Campaign for a Commercial-Free Childhood, has been warning about AI toys for more than 10 years. They just weren’t as advanced as they are today. A decade ago, during an emerging fad of internet-connected toys and AI speech recognition, the group helped lead a backlash against Mattel’s talking Hello Barbie doll that it said was recording and analyzing children’s conversations.

    “Everything has been released with no regulation and no research, so it gives us extra pause when all of a sudden we see more and more manufacturers, including Mattel, who recently partnered with OpenAI, potentially putting out these products,” Franz said.

    It’s the second big seasonal warning against AI toys since consumer advocates at U.S. PIRG last week called out the trend in its annual “ Trouble in Toyland ” report that typically looks at a range of product hazards, such as high-powered magnets and button-sized batteries that young children can swallow. This year, the organization tested four toys that use AI chatbots.

    “We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls,” the report said.

    Dr. Dana Suskind, a pediatric surgeon and social scientist who studies early brain development, said young children don’t have the conceptual tools to understand what an AI companion is. While kids have always bonded with toys through imaginative play, when they do this they use their imagination to create both sides of a pretend conversation, “practicing creativity, language, and problem-solving,” she said.

    “An AI toy collapses that work. It answers instantly, smoothly, and often better than a human would. We don’t yet know the developmental consequences of outsourcing that imaginative labor to an artificial agent—but it’s very plausible that it undercuts the kind of creativity and executive function that traditional pretend play builds,” Suskind said.

    California-based Curio Interactive makes stuffed toys, like Gabbo and rocket-shaped Grok, that have been promoted by the pop singer Grimes.

    Curio said it has “meticulously designed” guardrails to protect children and the company encourages parents to “monitor conversations, track insights, and choose the controls that work best for their family.”

    “After reviewing the U.S. PIRG Education Fund’s findings, we are actively working with our team to address any concerns, while continuously overseeing content and interactions to ensure a safe and enjoyable experience for children.”

    Another company, Miko, said it uses its own conversational AI model rather than relying on general large language model systems such as ChatGPT in order to make its product — an interactive AI robot — safe for children.

    “We are always expanding our internal testing, strengthening our filters, and introducing new capabilities that detect and block sensitive or unexpected topics,” said CEO Sneh Vaswani. “These new features complement our existing controls that allow parents and caregivers to identify specific topics they’d like to restrict from conversation. We will continue to invest in setting the highest standards for safe, secure and responsible AI integration for Miko products.”

    Miko’s products are sold by major retailers such as Walmart and Costco and have been promoted by the families of social media “kidfluencers” whose YouTube videos have millions of views. On its website, it markets its robots as “Artificial Intelligence. Genuine friendship.”

    Ritvik Sharma, the company’s senior vice president of growth, said Miko actually “encourages kids to interact more with their friends, to interact more with the peers, with the family members etc. It’s not made for them to feel attached to the device only.”

    Still, Suskind and children’s advocates say analog toys are a better bet for the holidays.

    “Kids need lots of real human interaction. Play should support that, not take its place. The biggest thing to consider isn’t only what the toy does; it’s what it replaces. A simple block set or a teddy bear that doesn’t talk back forces a child to invent stories, experiment, and work through problems. AI toys often do that thinking for them,” she said. “Here’s the brutal irony: when parents ask me how to prepare their child for an AI world, unlimited AI access is actually the worst preparation possible.”

    [ad_2]

    Source link

  • One Tech Tip: Do’s and don’ts of using AI to help with schoolwork

    [ad_1]

    The rapid rise of ChatGPT and other generative AI systems has disrupted education, transforming how students learn and study.

    Students everywhere have turned to chatbots to help with their homework, but artificial intelligence’s capabilities have blurred the lines about what it should — and shouldn’t — be used for.

    The technology’s widespread adoption in many other parts of life also adds to the confusion about what constitutes academic dishonesty.

    Here are some do’s and don’ts on using AI for schoolwork:

    Chatbots are so good at answering questions with detailed written responses that it’s tempting to just take their work and pass it off as your own.

    But in case it isn’t already obvious, AI should not be used as a substitute for putting in the work. And it can’t replace our ability to think critically.

    You wouldn’t copy and paste information from a textbook or someone else’s essay and pass it off as your own. The same principle applies to chatbot replies.

    “AI can help you understand concepts or generate ideas, but it should never replace your own thinking and effort,” the University of Chicago says in its guidance on using generative AI. “Always produce original work, and use AI tools for guidance and clarity, not for doing the work for you.”

    So don’t shy away from putting pen to paper — or your fingers to the keyboard — to do your own writing.

    “If you use an AI chatbot to write for you — whether explanations, summaries, topic ideas, or even initial outlines — you will learn less and perform more poorly on subsequent exams and attempts to use that knowledge,” Yale University’s Poorvu Center for Teaching and Learning says.

    Experts say AI shines when it’s used like a tutor or a study buddy. So try using a chatbot to explain difficult concepts or brainstorm ideas, such as essay topics.

    California high school English teacher Casey Cuny advises his students to use ChatGPT to quiz themselves ahead of tests.

    He tells them to upload class notes, study guides and any other materials used in class, such as slideshows, to the chatbot, and then tell it which textbook and chapter the test will focus on.

    Then, students should prompt the chatbot to: “Quiz me one question at a time based on all the material cited, and after that create a teaching plan for everything I got wrong.”

    Cuny posts AI guidance in the form of a traffic light on a classroom screen. Green-lighted uses include brainstorming, asking for feedback on a presentation or doing research. Red lighted, or prohibited AI use: Asking an AI tool to write a thesis statement, a rough draft or revise an essay. A yellow light is when a student is unsure if AI use is allowed, in which case he tells them to come and ask him.

    Or try using ChatGPT’s voice dictation function, said Sohan Choudhury, CEO of Flint, an AI-powered education platform.

    “I’ll just brain dump exactly what I get, what I don’t get” about a subject, he said. “I can go on a ramble for five minutes about exactly what I do and don’t understand about a topic. I can throw random analogies at it, and I know it’s going to be able to give me something back to me tailored based on that.”

    As AI has shaken up the academic world, educators have been forced to set out their policies on the technology.

    In the U.S., about two dozen states have state-level AI guidance for schools, but it’s unevenly applied.

    It’s worth checking what your school, college or university says about AI. Some might have a broad institutionwide policy.

    The University of Toronto’s stance is that “students are not allowed to use generative AI in a course unless the instructor explicitly permits it” and students should check course descriptions for do’s and don’ts.

    Many others don’t have a blanket rule.

    The State University of New York at Buffalo “has no universal policy,” according to its online guidance for instructors. “Instructors have the academic freedom to determine what tools students can and cannot use in pursuit of meeting course learning objectives. This includes artificial intelligence tools such as ChatGPT.”

    AI is not the educational bogeyman it used to be.

    There’s growing understanding that AI is here to stay and the next generation of workers will have to learn how to use the technology, which has the potential to disrupt many industries and occupations.

    So students shouldn’t shy away from discussing its use with teachers, because transparency prevents misunderstandings, said Choudhury.

    “Two years ago, many teachers were just blanket against it. Like, don’t bring AI up in this class at all, period, end of story,” he said. But three years after ChatGPT’s debut, “many teachers understand that the kids are using it. So they’re much more open to having a conversation as opposed to setting a blanket policy.”

    Teachers say they’re aware that students are wary of asking if AI use is allowed for fear they’ll be flagged as cheaters. But clarity is key because it’s so easy to cross a line without knowing it, says Rebekah Fitzsimmons, chair of the AI faculty advising committee at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy.

    “Often, students don’t realize when they’re crossing a line between a tool that is helping them fix content that they’ve created and when it is generating content for them,” says Fitzsimmons, who helped draft detailed new guidelines for students and faculty that strive to create clarity.

    The University of Chicago says students should cite AI if it was used to come up with ideas, summarize texts, or help with drafting a paper.

    “Acknowledge this in your work when appropriate,” the university says. “Just as you would cite a book or a website, giving credit to AI where applicable helps maintain transparency.”

    Educators want students to use AI in a way that’s consistent with their school’s values and principles.

    The University of Florida says students should familiarize themselves with the school’s honor code and academic integrity policies “to ensure your use of AI aligns with ethical standards.”

    Oxford University says AI tools must be used “responsibly and ethically” and in line with its academic standards.

    “You should always use AI tools with integrity, honesty, and transparency, and maintain a critical approach to using any output generated by these tools,” it says.

    ____

    Is there a tech topic that you think needs explaining? Write to us at onetechtip@ap.org with your suggestions for future editions of One Tech Tip.

    [ad_2]

    Source link

  • Advocacy groups urge parents to avoid AI toys this holiday season

    [ad_1]

    They’re cute, even cuddly, and promise learning and companionship — but artificial intelligence toys are not safe for kids, according to children’s and consumer advocacy groups urging parents not to buy them during the holiday season.

    These toys, marketed to kids as young as 2 years old, are generally powered by AI models that have already been shown to harm children and teenagers, such as OpenAI’s ChatGPT, according to an advisory published Thursday by the children’s advocacy group Fairplay and signed by more than 150 organizations and individual experts such as child psychiatrists and educators.

    “The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm,” Fairplay said.

    AI toys, made by companies such as Curio Interactive and Keyi Technologies, are often marketed as educational, but Fairplay says they can displace important creative and learning activities. They promise friendship but also disrupt children’s relationships and resilience, the group said.

    “What’s different about young children is that their brains are being wired for the first time and developmentally it is natural that for them to be trustful, for them to seek relationships with kind and friendly characters,” said Rachel Franz, director of Fairplay’s Young Children Thrive Offline Program. Because of this, she added, the amount of trust young children are putting in these toys can exacerbate the harms seen with older children.

    Fairplay, a 25-year-old organization formerly known as the Campaign for a Commercial-Free Childhood, has been warning about AI toys for more than 10 years. They just weren’t as advanced as they are today. A decade ago, during an emerging fad of internet-connected toys and AI speech recognition, the group helped lead a backlash against Mattel’s talking Hello Barbie doll that it said was recording and analyzing children’s conversations.

    “Everything has been released with no regulation and no research, so it gives us extra pause when all of a sudden we see more and more manufacturers, including Mattel, who recently partnered with OpenAI, potentially putting out these products,” Franz said.

    It’s the second big seasonal warning against AI toys since consumer advocates at U.S. PIRG last week called out the trend in its annual “ Trouble in Toyland ” report that typically looks at a range of product hazards, such as high-powered magnets and button-sized batteries that young children can swallow. This year, the organization tested four toys that use AI chatbots.

    “We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls,” the report said.

    Dr. Dana Suskind, a pediatric surgeon and social scientist who studies early brain development, said young children don’t have the conceptual tools to understand what an AI companion is. While kids have always bonded with toys through imaginative play, when they do this they use their imagination to create both sides of a pretend conversation, “practicing creativity, language, and problem-solving,” she said.

    “An AI toy collapses that work. It answers instantly, smoothly, and often better than a human would. We don’t yet know the developmental consequences of outsourcing that imaginative labor to an artificial agent—but it’s very plausible that it undercuts the kind of creativity and executive function that traditional pretend play builds,” Suskind said.

    California-based Curio Interactive makes stuffed toys, like rocket-shaped Gabbo, that have been promoted by the pop singer Grimes.

    Curio said it has “meticulously designed” guardrails to protect children and the company encourages parents to “monitor conversations, track insights, and choose the controls that work best for their family.”

    “After reviewing the U.S. PIRG Education Fund’s findings, we are actively working with our team to address any concerns, while continuously overseeing content and interactions to ensure a safe and enjoyable experience for children.”

    Another company, Miko, said it uses its own conversational AI model rather than relying on general large language model systems such as ChatGPT in order to make their product — an interactive AI robot — safe for children.

    “We are always expanding our internal testing, strengthening our filters, and introducing new capabilities that detect and block sensitive or unexpected topics,” said CEO Sneh Vaswani. “These new features complement our existing controls that allow parents and caregivers to identify specific topics they’d like to restrict from conversation. We will continue to invest in setting the highest standards for safe, secure and responsible AI integration for Miko products.”

    Miko’s products have been promoted by the families of social media “kidfluencers” whose YouTube videos have millions of views. On its website, it markets its robots as “Artificial Intelligence. Genuine friendship.”

    Ritvik Sharma, the company’s senior vice president of growth, said Miko actually “encourages kids to interact more with their friends, to interact more with the peers, with the family members etc. It’s not made for them to feel attached to the device only.”

    Still, Suskind and children’s advocates say analog toys are a better bet for the holidays.

    “Kids need lots of real human interaction. Play should support that, not take its place. The biggest thing to consider isn’t only what the toy does; it’s what it replaces. A simple block set or a teddy bear that doesn’t talk back forces a child to invent stories, experiment, and work through problems. AI toys often do that thinking for them,” she said. “Here’s the brutal irony: when parents ask me how to prepare their child for an AI world, unlimited AI access is actually the worst preparation possible.”

    [ad_2]

    Source link

  • Nvidia’s earnings attest to its leadership in the AI race. By the numbers

    [ad_1]

    Nvidia reported more eye-catching numbers for its fiscal third quarter Wednesday, with net income jumping 65% and revenue increasing 62% from a year earlier.

    Last month, Nvidia became the first public company to reach a market capitalization of $5 trillion.

    The ravenous appetite for the Silicon Valley company’s chips is the main reason that the company’s stock price has increased so rapidly since early 2023.

    Nvidia carved out an early lead in tailoring its chipsets known as graphics processing units, or GPUs, from use in powering video games to helping to train powerful AI systems, like the technology behind ChatGPT and image generators. Demand skyrocketed as more people began using AI chatbots. Tech companies scrambled for more chips to build and run them.

    Nvidia’s journey to be one of the world’s most prominent companies has produced some extraordinary numbers. Here’s a look.

    $31.9 billion

    Nvidia’s net income for the third quarter, up from $19.3 billion a year ago.

    38.9%

    Nvidia stock’s gain for the year, as of the close of trading Wednesday. That follows gains of 171% in 2024 and 239% in 2023.

    $4.53 trillion

    Nvidia’s total market capitalization as of the close of trading Wednesday, tops in the S&P 500.

    Apple at $3.98 trillion and Microsoft at $3.62 trillion were next among the most valuable companies in the S&P 500. In all, nine companies in the index have market cap’s above $1 trillion.

    $4.28 trillion

    The gross domestic product of Japan, the world’s fourth largest economy, according to the International Monetary Fund.

    79

    The number of trading days it took for Nvidia’s market cap to grow from $4 trillion to $5 trillion earlier this year. The market cap had jumped from $3 trillion on May 13, to $4 trillion on July 9 (41 trading days), although Nvidia had crossed and fallen back below the $3 trillion threshold a number of times between June 2024 and May 2025 before making the run to $4 trillion.

    19.8%

    The company’s contribution to the gain in the S&:P 500 this year as of Oct. 31, according to S&P Dow Jones Indices.

    $162 billion

    The net worth of Nvidia CEO Jensen Huang, according to Forbes, putting him eighth on its Real-Time Billionaires List. Elon Musk is No. 1 at $467.7 billion.

    [ad_2]

    Source link

  • Asian Shares Surge as Nvidia’s Strong Quarterly Earnings Lift Sentiments

    [ad_1]

    MANILA, Philippines (AP) — Most Asian shares surged on Thursday after Nvidia reported stronger than expected quarterly earnings, soothing worries that AI-driven stock prices may have shot too high.

    U.S. futures and oil prices were higher.

    Japan’s Nikkei 225 index initially surged as much as 4.2% before giving up some early gains. By early afternoon, it was up 2.6% at 49,801.81 as technology stocks rallied, with investor sentiment boosted by Nvidia’s report of $57 billion in quarterly revenue after trading closed in the U.S., significantly above expectations.

    South Korea’s Kospi added 3% to 4,047.57, with gains led by technology and energy stocks. Investors were encouraged by Nvidia’s earnings and reports that the U.S. may delay planned semiconductor tariffs.

    Samsung Electronics gained 6.1%, while SK Hynix added 3.5%.

    Chinese markets saw more modest gains. Hong Kong’s Hang Seng Index edged 0.1% higher to 25,867.87, while the Shanghai Composite index added 0.4% to 3,961.71. Taiwan’s Taiex rose 3.2%.

    Australia’s S&P/ASX 200 gained 1.2% to 8,546.10, also led by gains for technology stocks.

    The S&P 500 rose 0.4% after veering between a small loss and a leap of 1.1% earlier in the day. That broke a four-day losing streak, the longest in nearly three months for the index, which has been shaking because of worries that stock prices have shot too high and that the Federal Reserve may not deliver as many cuts to interest rates as expected.

    The Dow Jones Industrial Average added 47 points, or 0.1%, and the Nasdaq composite climbed 0.6%.

    Constellation Energy led the market and rallied 5.3% after the U.S. Department of Energy said it’s lending $1 billion to help restart Constellation’s nuclear power plant at Three Mile Island. Lowe’s rose 4% after the home-improvement retailer reported a stronger profit for the summer than analysts expected.

    They helped offset a 2.8% drop for Target, which reported weaker revenue for the latest quarter than analysts expected. The retailer also hinted that challenges may continue through the critical holiday shopping season.

    The market’s focus, though, remained on Nvidia. Wall Street’s most influential stock climbed 2.8% as traders made their final moves ahead of the chip company’s latest profit report, which arrived after trading finished for the day. Nvidia surged 5.1% in after-hours trading.

    Nvidia is now the largest stock on Wall Street, having briefly topped $5 trillion in value. That means its movements have more of an effect on the S&P 500 than any other stock, and it can single-handedly steer the index’s direction some days.

    One way Nvidia can quiet criticism that it shot too high, which has dragged its stock down by roughly 10% from late last month, is to keep delivering bigger profits. That’s because stock prices tend to track profits over the long term.

    Nvidia has become a bellwether for the broader frenzy around artificial-intelligence technology, because other companies are using its chips to ramp up their AI efforts

    Traders also made their final moves ahead of a September jobs report coming from the U.S. government on Thursday.

    The job market has been slowing enough this year that the Fed has already cut its main interest rate twice. Lower rates can give a boost to the economy and to prices for investments, and the expectation on Wall Street had been for more cuts, including at the Fed’s next meeting in December.

    In other dealings on Thursday, US benchmark crude oil added 16 cents to $59.41 per barrel. Brent crude, the international standard, edged 16 cents higher to $63.67 per barrel.

    The U.S. dollar rose to 157.32 Japanese yen from 157.15 yen. it has been trading at nearly the highest level this year on expectations that the government will delay efforts to rein in Japan’s national debt as Prime Minister Sanae Takaichi raises spending to help spur the economy.

    The euro fell to $1.1520 from $1.1538.

    AP Business Writers Stan Choe and Matt Ott contributed.

    Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

    Photos You Should See – Nov. 2025

    [ad_2]

    Associated Press

    Source link

  • Trump Takes Aim at State AI Laws in Draft Executive Order

    [ad_1]

    US President Donald Trump is considering signing an executive order that would seek to challenge state efforts to regulate artificial intelligence through lawsuits and the withholding federal funding, WIRED has learned.

    A draft of the order viewed by WIRED directs US Attorney General Pam Bondi to create an “AI Litigation Task Force,” whose purpose is to sue states in court for passing AI regulations that allegedly violate federal laws governing things like free speech and interstate commerce.

    Trump could sign the order, which is currently titled “Eliminating State Law Obstruction of National AI Policy,” as early as this week, according to four sources familiar with the matter. A White House spokesperson told WIRED that “discussion about potential executive orders is speculation.”

    The order says that the AI Litigation Task Force will work with several White House technology advisors, including the Special Advisor for AI and Crypto David Sacks, to determine which states are violating federal laws detailed in the order. It points to state regulations that “require AI models to alter their truthful outputs” or compel AI developers to “report information in a manner that would violate the First Amendment or any other provision of the Constitution,” according to the draft.

    The order specifically cites recently enacted AI safety laws in California and Colorado that require AI developers to publish transparency reports about how they train models, among other provisions. Big Tech trade groups, including Chamber of Progress—which is backed by Andreessen Horowitz, Google, and OpenAI—have vigorously lobbied against these efforts, which they describe as a “patchwork” approach to AI regulation that hampers innovation. These groups are lobbying instead for a light touch set of federal laws to guide AI progress.

    “If the President wants to win the AI race, the American people need to know that AI is safe and trustworthy,” says Cody Venzke, senior policy counsel at the American Civil Liberties Union. “This draft only undermines that trust.”

    The order comes as Silicon Valley has been upping the pressure on proponents of state AI regulations. For example, a super PAC funded by Andreessen Horowitz, OpenAI cofounder Greg Brockman, and Palantir cofounder Joe Lonsdale recently announced a campaign against New York Assembly member Alex Bores, the author of a state AI safety bill.

    House Republicans have also renewed their effort to pass a blanket moratorium on states introducing laws regulating AI after an earlier version of the measure failed.

    [ad_2]

    Maxwell Zeff, Makena Kelly

    Source link

  • Yann LeCun Leaves Meta to Create ‘Independent Entity’

    [ad_1]

    A Meta spokesperson confirmed to Bloomberg Wednesday that AI legend Yann LeCun is exiting Zuckland and striking out on his own. According to a Memo from LeCun himself that Bloomberg claims to have read, LeCun’s new endeavor is meant to “bring about the next big revolution in AI: systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences.”

    Sources apparently told Bloomberg that LeCun “clashed with others internally.” Meta had recently constructed a fully separate AI research department focused on generative AI, and in its latest story, Bloomberg now claims that Meta had begun to hide LeCun from view in favor of high-profile recent hires. Recent hires have invluded ChatGPT co-creator Shengjia Zhao.

    As previously discussed here at Gizmodo, LeCun is fascinated by an area of AI called “world models.” He has spent more than a year saying he thinks LLM research, the backbone of systems like ChatGPT, is no longer a worthy area of pursuit—at least as far as hypothetical advanced AI functions with terms like “AGI” and “superintelligence” are concerned.

    LeCun, who was born and raised in France, is among the handful of researchers often referred to as the “godfathers of AI,” or more specifically the godfathers of deep learning, and shared a Turing Award in 2019 with fellow godfathers Geoffrey Hinton and Yoshua Bengio. The influential cognitive scientist and AI researcher Gary Marcus is a longtime critic of LeCun, and their public disagreements go back years.

    LeCun joined Meta in 2013, when it was still called Facebook, as the head of what at the time was a research operation with a location in New York that LeCun could walk to from his office at NYU, where he works as a professor. At the time, it wasn’t totally clear what a company like Facebook even wanted from a scientist who worked with deep neural networks. Another major AI researcher, Andrew Ng, explained Facebook’s hiring decision to Wired in terms that now seem sort of quaint and social media-centric:

    “Machine learning is already used in hundreds of places throughout Facebook, ranging from photo tagging to ranking articles to your news feed. Better machine learning will be able to help improve all of these features, as well as help Facebook create new applications that none of us have dreamed of yet.”

    After the 2022 release of ChatGPT led to AI’s rise to domination of all priorities in the tech world, LeCun became notable for his skepticism about the need for AI safety. He told the Wall Street Journal last year that the idea that AI poses a threat to humanity is “complete B.S.”

    But LLMs aren’t LeCun’s cup of tea anyway. He clarified last month that he had almost no involvement with Meta’s Llama models, and that such generative AI-related work happened way off in another department at Meta. LeCun worked, he explained, in Meta’s Fundamental AI Research (FAIR) department, and was attempting to go “beyond LLMs.” 

    LeCun believes AI models are needed that can comprehensively understand the physical world through sensory inputs like vision, and how to reason its way through interactions with, and changes to, that world. He thinks the current crop of AI systems can’t do anything even close to this, and that they are in fact dumber than cats. 

    You can already see the start of LeCun’s world model research under the aegis of Meta in V-JEPA-2. That model is trained not on text, but on videos of the physical world, and designed not simply to replicate all that video, like Sora, but to model the causes and effects of actions in the world when things move around and interact. That’s the theory anyway.

    Bloomberg writes that Meta “plans to partner with LeCun on his startup, though details are still being finalized.” In LeCun’s memo, he wrote that his former company “will be a partner of the new company and will have access to its innovations.”

    It’s not at all clear yet how the partnership between LeCun’s new company and Meta will be structured, but tech companies are famous for being near inextricable from one another where AI is concerned. Microsoft owns about 27% of OpenAI, and has special rights to use its technology. Google similarly owns 14% of Anthropic. The way interdependent investments in the AI world lead to higher valuations has been compared to “circular dealmaking.

    LeCun’s memo says his new technology “will have far-ranging applications in many sectors of the economy, some of which overlap with Meta’s commercial interests, but many of which do not.”

    LeCun famously favors the term Advanced Machine Intelligence (AMI) in place of something like AGI (nota bene: “ami” is French for “friend”). In his memo, he reportedly wrote that “Pursuing the goal of AMI in an independent entity is a way to maximize its broad impact.” It’s an appropriately ambiguous turn of phrase. Presumably the “independent entity” is the new, non-Meta company, not an intelligent entity. Though he may mean that too. 

    [ad_2]

    Mike Pearl

    Source link

  • Melania Trump says AI will reshape war more profoundly than nuclear weapons during visit with Marines

    [ad_1]

    NEWYou can now listen to Fox News articles!

    In her first joint visit with Second Lady Usha Vance, First Lady Melania Trump met with troops and military families, praising the Marine Corps’ 250 years of service while warning that artificial intelligence (AI) will redefine modern warfare and America’s defense.

    In her Wednesday remarks at Marine Corps Air Station New River, Mrs. Trump emphasized AI’s role in her husband’s administration as a pillar of American defense strategy.

    “Technology is changing the art of war,” Trump said. “Predictably, AI will alter war more profoundly than any technology since nuclear weapons.”

    The First Lady’s remarks come as the Trump administration expands its focus on AI. The president posted to Truth Social earlier this week, saying, “We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes.”

    FIRST LADY MELANIA TRUMP AND USHA VANCE VISIT TROOPS’ FAMILIES IN FIRST JOINT VISIT

    First Lady Melania Trump cautioned about the future of artificial intelligence (AI) during remarks to Marines at Marine Corps Air Station New River on Wednesday, Nov. 19. (SAUL LOEB/AFP via Getty Images)

    President Trump’s AI push aligns with his broader “Winning the AI Race: America’s AI Action Plan,” published in July.

    The First Lady acknowledged the service and 250-year legacy of the Marine Corps, including two Marines she welcomed on stage, Sergeant Blake Donoher and Corporal Daishamari Cannon.

    Trump said that the “most significant change will be speed” when it comes to AI, adding that “artificial intelligence will take center stage in the theater of war… but of course, it is the Marine who will always play the most critical role in realizing mission success.”

    GOOGLE CEO, MAJOR TECH LEADERS JOIN FIRST LADY MELANIA TRUMP AT WHITE HOUSE AI MEETING

    First Lady Melania Trump hugs student

    First Lady Melania Trump embraces a student during a visit at DeLalio Elementary School on Marine Corps Air Station New River on Wednesday, Nov. 19.  (Anna Moneymaker/Getty Images)

    The First Lady noted that AI is taking America’s military “from soldiers to machines.”

    “Artificial intelligence is propelling America’s military into a new era,” Trump said. “We are moving from human operators to human overseers – fast. The shift from soldiers to machines is already underway: autonomous helicopters, swarming drones, and recon aircraft are here now. Fighter-less jets and autonomous bombers are on the way.”

    The First Lady was introduced by Second Lady and Marine Corps spouse Usha Vance, who greeted the Marines by relaying a “Happy birthday” message from Vice President JD Vance. The Marine Corps birthday is Nov. 10.

    MELANIA TRUMP ‘PEACE LETTER’ TO PUTIN HAILED BY USHA VANCE, WHO CALLS HER A ‘TRAILBLAZER’

    First Lady Melania Trump plays "Heads Up" with students

    First Lady Melania Trump played a game of “Heads Up” with students during a visit to Camp Lejeune on Wednesday, Nov. 19. (Anna Moneymaker/Getty Images)

    The event coincided with national Thanksgiving preparations, where both the First and Second Lady visited classrooms at Camp Lejeune.

    Students showcased AI projects as part of the Presidential Artificial Intelligence Challenge during the visit. Trump hugged a shy student in a sweet moment caught on camera in a first-grade class where kids read aloud and joined in a lively game of “Heads Up,” wearing a matching notecard on her head.

    “Don’t be shy,” the First Lady said before embracing the boy who seemed nervous to meet her.

    The First Lady concluded her remarks with heartfelt thanks to service members and their families.

    “To every Service Member — thank you for standing watch so others can celebrate in peace. And to every military spouse and child — thank you for your strength and love,” Trump said. “You serve our country, too.”

    First Lady Melania Trump holds a baby at Marine Corps visit

    Melania Trump greets military families at Marine Corps Air Station New River on Wednesday, Nov. 19. (Anna Moneymaker/Getty Images)

    “As we give thanks this season, let us remember what unites us — our shared love of country, our faith in one another, and our pride in those who serve,” Trump concluded.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    The Office of First Lady Melania Trump referred Fox News Digital to her prepared remarks.

    Fox News Digital’s Emma Bussey contributed to this report.

    [ad_2]

    Source link

  • Nvidia reports strong quarterly earnings, topping analyst expectations

    [ad_1]

    Nvidia’s third-quarter financial results on Wednesday surpassed analyst expectations, signaling that demand for its artificial intelligence chips remains robust amid investor concerns about an AI bubble

    The chipmaker reported earnings of $31.9 billion on record revenue of $57 billion for the third quarter. Revenue for the period surged 22% from the previous quarter and 62% from a year ago. Its earnings per share were $1.30. The Santa Clara, Calif., company had been expected to earn $1.26 per share on revenue of $54.9 billion for the quarter, according to analysts polled by FactSet. 

    “Blackwell sales are off the charts, and cloud GPUs are sold out,” Nvidia CEO Jensen Huang said in a statement on Wednesday, referring to the company’s proprietary superchips that power large language models. 

    “Compute demand keeps accelerating and compounding across training and inference — each growing exponentially. We’ve entered the virtuous cycle of AI. The AI ecosystem is scaling fast — with more new foundation model makers, more AI startups, across more industries, and in more countries. AI is going everywhere, doing everything, all at once,” Huang added.

    Nvidia forecast revenue of $65 billion for the fourth quarter.  The company’s shares, which have jumped 39% this year, rose nearly 4% in after-hours trading to $193.80.

    In October, the chipmaker became the first publicly listed company worth $5 trillion, with its shares buoyed by Wall Street expectations of surging demand. 

    But in recent weeks, some investors have expressed caution about the hype surrounding AI and whether the soaring market value of companies linked to the technology is warranted. Despite the promise of AI, most companies that are implementing AI have yet to see a measurable increase in productivity or profits, according to Wall Street analysts. 

    The company’s results were driven by demand for Nvidia’s Blackwell graphics processing unit chips, which could help convince investors “that this AI spending trend is an unparalleled moment in modern tech history and is not a bubble moment,” Wedbush Securities analyst Dan Ives said. 

    The construction of data centers across the U.S. has boosted demand for Nvidia’s chips. Data center investment, which includes spending on AI research and development, has become the largest contributor to U.S. growth this year, according to S&P Global

    The S&P 500’s 15% gain this year has been driven largely by big tech companies with AI investments. The combined market capitalization of the so-called “Magnificent 7” — Google-owner Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia and Tesla — accounts for 37% of the index’s total value, according to Morningstar. 

    [ad_2]

    Source link

  • The Teddy Bear Said What? And Other Dispatches From the AI Frontier

    [ad_1]

    The race to embrace artificial intelligence for its promise of unrivaled productivity may not be a conventional political story. But implementing it without proper guardrails raises an array of issues that will no doubt demand a public policy response.

    So here at Decision Points Global HQ, we plan to do periodic roundups of news about AI, highlighting the important, the useful, the scary and the downright weird things happening along this high-tech frontier.

    Sign Up for U.S. News Decision Points

    Your trusted source for breaking down the latest news from Washington and beyond, delivered weekdays.

    Sign up to receive the latest updates from U.S. News & World Report and our trusted partners and sponsors. By clicking submit, you are agreeing to our Terms and Conditions & Privacy Policy.

    The Teddy Bear Said What?

    As a Gen Xer, I remember the days of Teddy Ruxpin, a stuffed bear that told stories via a cassette player in its chest – predictable, carefully selected stories.

    Last week, the Public Interest Research Group issued its 40th “Trouble in Toyland” and flagged issues with some toys powered by AI chatbots.

    “We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave and have limited or no parental controls,” PIRG warned. “We also look at privacy concerns because these toys can record a child’s voice and collect other sensitive data, by methods such as facial recognition scans.”

    In what may be the most disturbing example, the report detailed the trouble with FoloToy’s Kumma, a $99 teddy bear that ships from China. PIRG researchers were able to trigger instructions on lighting a match and a fairly in-depth discussion of sexual “kink.”

    “In other exchanges lasting up to an hour, Kumma discussed even more graphic sexual topics in detail, such as explaining different sex positions, giving step-by-step instructions on a common ‘knot for beginners’ for tying up a partner, and describing roleplay dynamics involving teachers and students and parents and children – scenarios it disturbingly brought up itself,” according to the report.

    Google Boss Warns of AI Investment ‘Irrationality’

    Sundar Pichai, CEO of Google parent company Alphabet, warned in an interview with the BBC that the AI investment boom had “elements of irrationality.” And if it turns out to be a bubble that pops, “no company is going to be immune, including us.”

    Apparently alluding to the late 1990s dotcom bubble, Pichai said, “We can look back at the internet right now. There was clearly a lot of excess investment, but none of us would question whether the internet was profound.”

    “I expect AI to be the same. So I think it’s both rational and there are elements of irrationality through a moment like this.”

    The Week in Cartoons Nov. 17-21

    When AI Testifies

    Well this is brazen. NBC News reported this week about the rise of AI-generated “evidence” being submitted in court cases – including one glitchy “deep fake” video purporting to show witness testimony in a housing dispute in California.

    “With the rise of powerful AI tools, AI-generated content is increasingly finding its way into courts, and some judges are worried that hyperrealistic fake evidence will soon flood their courtrooms and threaten their fact-finding mission,” NBC said.

    Forged audio or video could land the people they spoof in serious trouble while also eroding “the foundation of trust upon which courtrooms stand.”

    Here, we get into more straightforwardly political issues. Some judges and legal experts are pushing “for changes to judicial rules and guidelines on how attorneys verify their evidence. By law and in concert with the Supreme Court, the U.S. Congress establishes the rules for how evidence is used in lower courts.”

    Over to you, Capitol Hill!

    [ad_2]

    Olivier Knox

    Source link

  • Target says it’s working with ChatGPT for AI-assisted shopping

    [ad_1]


    Target on Wednesday said it’s working with OpenAI to let customers shop its products through ChatGPT, a move that comes as the retailer is struggling to convince inflation-weary consumers to stick with it.

    Customers will be able to browse Target’s selection and make purchases within the ChatGPT app, according to the retailer. The tool will debut next week, providing access to ChatGPT’s 800 million weekly active users in time for the holiday shopping season.

    Target is leaning on price cuts and a $1 billion investment plan to revive its brand, the retailer said separately Wednesday, as same-store sales fell 2.7% in the latest quarter and profit tumbled 19%. With shoppers increasingly relying on AI to find products online, other big retailers — including Walmart, which struck a similar partnership with OpenAI last month — are turning to the technology to boost sales.

    Here’s how the ChatGPT-powered Target tool will work: Inside the ChatGPT app, consumers can tag Target and ask for ideas, such as if they’re planning something like a holiday family movie night. The Target app will then suggest specific products, such as blankets or snacks, and allow users to buy them directly without leaving the ChatGPT interface.

    Target said that AI will eventually start to understand and predict what customers want to buy. 

    A recent Harris Poll shows that nearly half of Gen Z consumers would trust AI as a personal shopper that helps them pick out what they buy and find deals. Streamlining the purchasing process could help retailers boost sales, according to retail experts. 



    Exploring the rise of artificial intelligence company OpenAI

    04:14

    [ad_2]

    Source link

  • How This Startup Landed a $300 Million Valuation After Pivoting to ChatGPT

    [ad_1]

    Like many early adopters, Benjamin Alarie was working with artificial intelligence well before it hit the mainstream. A business law professor at the University of Toronto, he founded the legal tech startup Blue J in 2015 with hopes of applying predictive AI to tax law.

    But only a few years later, generative artificial intelligence—the segment of AI best exemplified by ChatGPT, Claude and other text-generating chatbots—was finally taking off. Users were buying in, private capital was flowing and what had once been a powerful but unflashy software segment was suddenly the center of a consumer-facing boom.

    By then, Alarie told VentureBeat in a recent profile, Blue J had hit a ceiling, with revenue leveling off at around $2 million a year. So he made a high-stakes gamble and went all-in on the emerging genAI trend.

    “Large language models seemed like a very promising direction,” Alarie says of his pivot. Blue J’s earlier tech, predictive machine learning, couldn’t answer every question users asked of it—“Which was really the holy grail,” the Blue J chief executive explains—leading customers to abandon the tool when it let them down.

    Says Alarie: “I had this conviction that if we continued down that path, we weren’t going to be able to address our number one limitation.”

    Yet the generative AI boom offered a way out. After giving his team six months to get him a new product, and then honing the resulting system’s outputs over the last few years, Alarie says that by now he’s cut Blue J’s response time down from a minute and a half to just seconds, and more than quadrupled its net promoter score from 20 to over 80.

    He also credits a partnership with software giant OpenAI, which gives Blue J early access to cutting-edge artificial intelligence models in exchange for real-world feedback.

    It’s all paid off for the company. VentureBeat reports that Blue J’s $122 million Series D, announced this summer, secured the legal tech startup a valuation over $300 million. More than 3,500 different organizations are reportedly on the firm’s client list, including several Fortune 500 companies.

    “What once took tax professionals 15 hours of manual research to do can now be completed in about 15 seconds,” Alarie told VentureBeat. “That value proposition—we can take hours of work and turn it into seconds of work—that is driving a lot of this.”

    [ad_2]

    Brian Contreras

    Source link

  • How AI helps detect lung cancer sooner, improving survival rates – WTOP News

    [ad_1]

    New artificial intelligence technology is helping diagnose the U.S.’s most deadly cancer sooner, greatly improving the chances of lung cancer patients’ survival. 

    New artificial intelligence technology is helping diagnose the U.S.’s most deadly cancer sooner, greatly improving the chances of patients’ survival. 

    The key? Detecting tiny lung nodules when doctors aren’t screening for cancer, and automating and streamlining follow-up care. While most incidentally-discovered lung nodules turn out to be noncancerous, some become malignant over time.

    The importance of catching lung cancer early is clear: The five-year survival rate for non-small cell lung cancer when detected in localized Stage 1 is 67%. However, most lung cancer is diagnosed after it has spread to other organs, when the five-year rate is 12%, according to the American Cancer Society. 

    Inova Schar Cancer Institute, based in Fairfax, Virginia, is one forward-thinking cancer center harnessing the power of AI to flag incidental lung nodules that often go unnoticed, during an emergency room CT scan or MRI for pneumonia or a broken bone. The Eon Lung Cancer Screening system uses computational linguistics and natural language processing to scan radiology reports. 

    The company says it identifies high-risk patients with 98.3% accuracy by analyzing imaging data and integrating with electronic health records in real time. 

    ‘The patient is still in the ER, we call and tell them to come right to the clinic’

    Amit “Bobby” Mahajan is the medical director of interventional pulmonology in the Inova Health System. (Disclosure: He is also the doctor who did my bronchoscopy in November 2022 and told me I had lung cancer. After four months of one-pill-a-day targeted therapy and a robotic-assisted lobectomy, I was declared cancer-free in May 2023 and have remained that way while continuing my daily pill.) 

    AI-powered technology is enabling Schar’s interventional pulmonologists and surgeons to get patients with found-by-accident nodules into cancer care months or years earlier. Mahajan heads the Incidental Pulmonary Nodule Clinic, as part of the Inova Saville Cancer Screening and Prevention Center

    “Whether it be an MRI, a chest CT, or abdominal CT, it takes that data, comprises it into a finding, and then makes a risk score of that being cancer,” said Mahajan, during a recent WTOP visit and demonstration of the Eon technology.

    With the AI system scanning electronic health records as data is entered, “We’re able to call the patient and say, ‘Look, I know you just had a CT scan in the ER for your abdominal pain, but we also caught a lung nodule in the bottom of your lung that is suspicious,’” Mahajan said.

    “For better or worse, we’ve had more than a handful of people who we’ve said, ‘We need to send you over to the clinic right now, because you came in for something that’s nothing to worry about, but we did find something that needs to be addressed today,’” he added.

    Traditionally, reaching a cancer diagnosis for a patient with a persistent cough or other symptoms can take weeks and requires patients and doctors to coordinate follow-up scans and labs.

    “From an AI perspective, the system will learn more from our CT scans and image reports every time it sees one, and starts picking out the word ‘spiculated,’ the word ‘nodule,’ and where it’s located,”‘ Mahajan said.

    While benign nodules usually have smooth borders, a spiculated nodule’s edges appear irregular, or spiky, which often suggests the lesion is malignant.

    “It takes that data to the very well known Brock Model for risk of lung cancer, and it will actually calculate the risk of cancer in those patients, and give us a percentage,” Mahajan said. “Anyone over 5%, we call, and get them into the clinic right away, most of the time in the same week.”

    After being notified of an incidental nodule found in ER imaging, some patients prefer to check with their primary care physician.

    “Totally reasonable,” Mahajan said.

    Streamlining the follow-up process helps reduce the risk of patients “falling through the cracks.”

    “We’ve biopsied them two days later, and gotten a diagnosis of cancer,” Mahajan said. “Luckily, most have been early stage disease and they’ve been resected afterward.”

    With lung cancer, resection is a surgical procedure to remove lung tissue affected by cancer and is regarded as the most effective treatment for cancer that hasn’t spread to other organs.

    “Our goal is to get a patient with a newly-diagnosed lung cancer evaluated as soon as possible, to get them into surgery,” Schar thoracic surgeon Melanie Subramanian said. “It’s not only better for treating the disease, but it also gives patients a peace of mind too, knowing that they have a treatment plan and a treatment team.”

    The AI system creates guideline-based care plans, and sends alerts to doctors and nurse navigators, helping patients stay on schedule for future screenings.

    ‘It’s as close to an Xbox controller as you get’

    Artificial intelligence is also enabling robotic bronchoscopy procedures.

    “Previously, when we had to biopsy these small nodules in the lung, we had to use a handheld camera, to drive down as far as we could, but the lungs and airways get smaller and smaller the further out you go,” Mahajan said.

    “Now, we have robotic platforms,” Mahajan added. “The patient is completely asleep, and we drive about a four millimeter camera down to these nodules, using a handheld controller that’s as close to an Xbox controller as you can get.”

    And AI helps navigate through the airways: “There’s advanced imaging associated as well, and with the robotic platform, we can pretty much reach anything in the lung nowadays,” he said.

    Inova Schar says 69% of lung cancers are now being detected at Stage 1 or 2, compared to only 34% without low-dose CT screening and proactive follow-up of incidental nodules.

    Get breaking news and daily headlines delivered to your email inbox by signing up here.

    © 2025 WTOP. All Rights Reserved. This website is not intended for users located within the European Economic Area.

    [ad_2]

    Neal Augenstein

    Source link