ReportWire

Tag: ai training

  • Want AI Adoption to Actually Be Successful? Train the C-Suite First


    Companies are investing millions in AI tools and training employees across all levels to use them. Yet the most senior leaders — those who shape culture, set strategy, and control budgets — are often excluded from hands-on learning. The prevailing assumption is that executives only need high-level briefings, not practical training. They’ll grasp the details later; after all, shouldn’t training focus on the people doing the “real work”?

    This thinking has created a costly blind spot. A recent MIT report found that 95 percent of AI initiatives fail to provide ROI. While much of the conversation has focused on technical implementation challenges, there’s a less obvious culprit: Senior leaders are making decisions about AI adoption, governance, and resource allocation without truly understanding how these tools work in practice. The result is a disconnect between what gets purchased and what actually gets adopted, leading to fragmented execution and missed opportunities. 

    The leadership bar has shifted. Three years ago, being tech-aware was sufficient — following digital trends and relying on experts for execution. Today’s leaders must be tech-immersed.

    They need to understand how AI platforms directly impact products, customers, operations, and value chains. They need data literacy, which means knowing how to work effectively with generative AI tools to interrogate datasets, surface patterns, and generate insights. And perhaps most critically, they need human-centered leadership skills — the ability to cultivate trust at scale, guide teams through the emotional complexities of change, and create environments where people feel supported as their roles get redefined. 

    The real advantage in AI adoption comes from senior leaders who understand these tools well enough to model their use, ask the right questions, and create the conditions for genuine adoption across the organization. 

    What senior leader training actually looks like 

    Effective executive AI training bears little resemblance to the typical boardroom briefing. It’s not about consultants presenting frameworks on slides or walking through theoretical use cases. It starts with customization, training built around the actual data executives work with and the specific problems they need to solve. 

    For a senior leader, this might mean learning how to prepare for client meetings by connecting the AI tool to CRM data or analyzing financial forecasts to spot trends and anomalies. It could involve board preparation or simply understanding how to ask better questions of the data. And that training may look like the inverse of what has been typical for so long. 

    At one global financial services firm, a senior executive in charge of an entire region began meeting with a junior employee every two weeks for AI coaching. During one session, the junior employee suggested using the tool to analyze the tone of the executive’s emails. The executive discovered that during particularly busy weeks, her email tone became noticeably more abrasive. She hadn’t realized it before. Here was someone at the top of the organization chart being coached by someone near the bottom, and the value created rippled across the entire C-suite. 

    This dynamic requires something many senior leaders aren’t used to: vulnerability. Using AI tools effectively means iterating prompts, admitting when something doesn’t work, and starting over. When leaders experience firsthand what it’s like to learn these tools, they develop empathy for what their teams are going through and a more realistic understanding of what adoption actually requires. 

    Many organizations bring in outside firms to provide credibility and frameworks for thinking about AI strategy. That’s fine, maybe even useful. However, those high-level sessions need to be paired with practical, hands-on training that grounds abstract concepts in real implementation. Leaders need both the theory and the practice. 

    The cultural shift and structure required 

    Training executives to use AI tools is only half the equation. The other half is teaching them how to lead adoption across the organization, which means understanding that AI cannot sit in silos or remain the responsibility of technical teams. It must be embedded in decision-making at every level. 

    Leaders can’t simply mandate adoption from the top. What actually drives adoption is when leaders model the behavior themselves, like judging hackathons, highlighting the most creative uses of AI each week, visibly using these tools in their own work, and talking about what they’ve learned. As automation takes over routine tasks, the human elements of leadership become differentiators. 

    One of the most effective structures for driving adoption is a champion network — employees across different departments who become power users and help their colleagues. But many companies start too small. One global bank had 700 champions and wanted to know how to scale its AI usage. Our work with a range of enterprise companies shows that champion networks with roughly one person for every 25 employees create the diversity of perspectives necessary for genuine culture change. The bank actually needed to grow its base of champions before it could sustainably scale AI use. 

    The goal is citizen development, empowering thousands of employees to create thousands of their own AI applications and workflows, rather than relying solely on a few large, centrally managed projects. When an HR manager builds a custom tool for screening resumes, or a sales team creates an application for client research, adoption becomes organic rather than mandated. 

    The paradox is that leaders need to champion AI adoption, but they also need to know when to leave the room. Employees need space to be vulnerable, to try things that might fail, to iterate without fear of judgment. Effective AI adoption requires both top-down endorsement and bottom-up experimentation. 

    The companies that will succeed 

    Organizations that are finding success with AI aren’t necessarily the ones with the biggest budgets or the most advanced technology. They’re the ones where senior leaders understand these tools well enough to ask informed questions, model effective use, and create genuine space for experimentation and failure. 

    This requires a kind of humility that doesn’t always come naturally to people who have spent decades building expertise. It means being willing to learn alongside, and sometimes from, junior employees. It means admitting uncertainty and being comfortable with iteration. 

    If your executives haven’t received hands-on AI training using your actual data and real use cases, your adoption program is already behind. Learning the technology itself isn’t the bottleneck anymore — it’s understanding how to apply it effectively. The organizations that will win have leadership that advances from mere awareness to practical fluency. That doesn’t come from briefings and white papers; it comes from direct experience. 

    The early-rate deadline for the 2026 Inc. Regionals Awards is Friday, November 14, at 11:59 p.m. PT. Apply now.


    Adam Caplan


  • How AI Training Can Lead to Productivity Gains


    While most U.S. businesses are already using artificial intelligence tools in their workplaces, their adoption rates and productivity increases vary greatly. New research shows how valuable those efficiency increases can be when employers give their full support to staff, including properly focused training on new AI tools. It also shows how the ineffective or disorganized introduction of these apps often undermines their potential gains.

    As noted in recent Inc. reports, studies show a majority of global employees aren’t wasting time worrying about AI taking over their jobs, and have instead actively learned to use the tech to enhance the value of their work. A new global survey of nearly 3,250 workers and executives by the London School of Economics’ Inclusion Initiative, conducted with business consultancy Protiviti, quantifies the efficiency gains AI offers. It found that on average, employees save 7.5 hours per week — nearly a full workday — by using apps to automate tasks. The report calculated that by redeploying that extra time for other work, each respondent generated about $18,000 in additional annual productivity for their companies.

    Alas, that wasn’t the only lesson for business owners. The study also warned that a large gap exists between the productivity increases AI can deliver and the level of support employers are offering staff to use the technology most effectively.

    For starters, 68 percent of employees who answered the survey said they’d received no AI training in the previous 12 months. That was determined to have influenced both adoption rates and efficiency gains made using the tech. Indeed, fully 93 percent of respondents who’d gotten that instruction reported regularly turning to apps for their work, versus only 57 percent who hadn’t been given that support.

    Meanwhile, the time saved by participants who said they’d received instruction on AI was double that of people who hadn’t: 11 extra hours freed up each week to redirect to more productive tasks, versus five hours for people who used the tech without training. According to Grace Lordan, founding director of The Inclusion Initiative and the study’s research lead, those differences offer an obvious message to employers seeking efficiency gains through AI.

    “For business leaders, the priority is clear: Closing the AI training gap is one of the fastest ways to unlock measurable return,” Lordan said in comments accompanying the findings. “Equipping employees with the right skills doesn’t just improve individual productivity — it drives sharper decision-making, accelerates innovation and creates stronger overall performance. In an environment where every efficiency counts, organizations that act now will set themselves apart from those still waiting on the sidelines.”

    Getting all generational workplace members into that game on a more level field is also essential.

    While the survey found 82 percent of Gen Z respondents said they used AI for work, the rate dropped to 52 percent of Baby Boomers. Similarly, about half of Gen Z participants said they were involved in developing AI and its use across the workplace, compared to about 30 percent of Gen X and Boomers combined.

    In line with those findings, the survey also showed nearly twice as many younger employees received AI training during the previous 12 months as their older colleagues. That discrepancy was also reflected in the performance of workplace teams made up of people from different age cohorts. About 77 percent of working groups with higher degrees of generational diversity reported regular productivity gains, compared to 66 percent with lower age diversity.

    In other words, survey authors said, companies that both encourage AI use and train all workplace members to use those tools are likely to see higher increases in overall productivity — as well as better adoption rates and efficiency gains by employees of all generations.

    “AI isn’t just another tool for the workplace — it’s a catalyst for rethinking how they organize, lead and empower their people,” said Protiviti global leader of people and change Fran Maxwell. “The organizations that will benefit the most are those that embed AI into everyday workflows, redesign roles to focus on higher-value work, and give employees the confidence to experiment. This research shows that inclusive adoption across all generations doesn’t just improve productivity — it prepares companies for the next wave of change.”



    Bruce Crumley


  • Meta Says Porn Stash was for ‘Personal Use,’ Not Training AI Models


    Meta forgot to keep its porn in a passworded folder, and now its kink for data collection is the subject of scrutiny. The social media giant turned metaverse company turned AI power is currently facing a lawsuit brought by adult film companies Strike 3 Holdings and Counterlife Media, alleging that the Big Tech staple illegally torrented thousands of porn videos to be used for training AI models. Meta denies the claims, and recently filed a motion to dismiss the case because, in part, it’s more likely the videos were downloaded for “private personal use.”

    To catch up on the details of the case, back in July, Strike 3 Holdings (the producers of Blacked, Blacked Raw, Tushy, Tushy Raw, Vixen, MILFY, and Slayed) and Counterlife Media accused Meta of having “willfully and intentionally” infringed “at least 2,396 movies” by downloading and seeding torrents of the content. The companies claim that Meta used that material to train AI models and allege the company may be planning a currently unannounced adult version of its AI video generator Movie Gen, and are suing for $359 million in damages.

    For what it’s worth, Strike 3 has something of a reputation for being a very aggressive copyright litigant—so much so that if you search for the company, you’re less likely to land on its homepage than on a litany of law firms offering legal representation to people who have received a subpoena from it for torrenting its material.

    There may be some evidence that those materials were swept up in Meta’s data vacuum. Per TorrentFreak, Strike 3 was able to show what appear to be 47 IP addresses linked to Meta participating in torrenting of the company’s material. But Meta doesn’t seem to think much of the accusation. In its motion to dismiss, the company calls Strike 3’s torrent tracking “guesswork and innuendo,” and basically argues that, among other reasons, there simply isn’t even enough data here to be worth using for AI model training. Instead, it’s more likely just some gooners in the ranks.

    “The small number of downloads—roughly 22 per year on average across dozens of Meta IP addresses—is plainly indicative of private personal use, not a concerted effort to collect the massive datasets Plaintiffs allege are necessary for effective AI training,” the company argued. The company also denied building a porn generator model, basically stating that Strike 3 doesn’t have any evidence of this and Meta’s own terms of service prohibit its models from generating pornographic content.

    “These claims are bogus: We don’t want this type of content, and we take deliberate steps to avoid training on this kind of material,” a spokesperson for Meta told Gizmodo.

    As absurd as the case is, whether the accusations are right or wrong, there is one clear victim: the dad of a Meta contractor who is apparently simultaneously being accused by Strike 3 of being a conduit for copyright infringement and accused by Meta of being a degenerate: “[Strike 3] point to 97 additional downloads made using the home IP address of a Meta contractor’s father, but plead no facts plausibly tying Meta to those downloads, which are plainly indicative of personal consumption,” Meta’s motion said. God forbid this case move forward and this poor person has to answer for his proclivities reserved for incognito tabs.


    AJ Dellinger


  • AI company Anthropic to pay authors $1.5 billion over pirated books used to train chatbots



    Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.

    Related video above: The risks to children under President Trump’s new AI policy

    The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement.

    The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement.

    “As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”

    A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.

    A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites.

    If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money.

    “We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.

    U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.

    Anthropic said in a statement Friday that the settlement, if approved, “will resolve the plaintiffs’ remaining legacy claims.”

    “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, the company’s deputy general counsel.

    As part of the settlement, the company has also agreed to destroy the original book files it downloaded.

    Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT.

    Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.

    Debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the dataset.

    Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.

    The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.

    On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”

    The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren’t registered with the U.S. Copyright Office.

    “On the one hand, it’s comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price,” said Thomas Heldrup, the group’s head of content protection and enforcement.

    On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules.

    “It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space,” Heldrup said.

    The privately held Anthropic, founded by ex-OpenAI leaders in 2021, earlier this week put its value at $183 billion after raising another $13 billion in investments.

    Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.

    The settlement could influence other disputes, including an ongoing lawsuit by authors and newspapers against OpenAI and its business partner Microsoft, and cases against Meta and Midjourney. And just as the Anthropic settlement terms were filed, another group of authors sued Apple on Friday in the same San Francisco federal court.

    “This indicates that maybe for other cases, it’s possible for creators and AI companies to reach settlements without having to essentially go for broke in court,” said Long, the legal analyst.

    The industry, including Anthropic, had largely praised Alsup’s June ruling because he found that training AI systems on copyrighted works so chatbots can produce their own passages of text qualified as “fair use” under U.S. copyright law because it was “quintessentially transformative.”

    Comparing the AI model to “any reader aspiring to be a writer,” Alsup wrote that Anthropic “trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”

    But documents disclosed in court showed Anthropic employees’ internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles.

    With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn’t undo the earlier piracy, according to the judge.

