ReportWire

Tag: iab-computing

  • Appeals court says Biden admin likely violated First Amendment but narrows order blocking officials from communicating with social media companies | CNN Politics

    CNN —

    A federal appeals court on Friday said the Biden administration likely violated the First Amendment in some of its communications with social media companies, but also narrowed a lower court judge’s order on the matter.

    The US 5th Circuit Court of Appeals ruled that certain administration officials – namely those in the White House, the surgeon general’s office, the US Centers for Disease Control and Prevention, and the Federal Bureau of Investigation – likely “coerced or significantly encouraged social media platforms to moderate content” in violation of the First Amendment in their efforts to combat Covid-19 disinformation.

    But the three-judge panel said the preliminary injunction issued by US District Judge Terry Doughty in July, which ordered some Biden administration agencies and top officials not to communicate with social media companies about certain content, was “both vague and broader than necessary to remedy the Plaintiffs’ injuries, as shown at this preliminary juncture.”

    The Biden administration had previously argued in the lawsuit – brought by Republican attorneys general claiming unconstitutional censorship – that channels with social media companies must stay open so that the federal government can help protect the public from threats to election security, Covid-19 misinformation and other dangers.

    In briefs submitted earlier this summer, the administration wrote, “There is a categorical, well-settled distinction between persuasion and coercion,” adding that Doughty had “equated legitimate efforts at persuasion with illicit efforts to coerce.”

    The 5th Circuit left in place part of the injunction that barred certain Biden administration officials from “threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech.”

    “But,” the appeals court said, “those terms could also capture otherwise legal speech. So, the injunction’s language must be further tailored to exclusively target illegal conduct and provide the officials with additional guidance or instruction on what behavior is prohibited.”

    The appeals court reversed several aspects of Doughty’s sweeping order, concluding that those pieces of it risked blocking the federal government “from engaging in legal conduct.”

    The 5th Circuit left the order, which had been temporarily blocked earlier in the summer, on pause for 10 days so that the case can be appealed to the Supreme Court.

    The opinion was handed down jointly by Circuit Judges Edith Clement, Jennifer Walker Elrod and Don Willett – all appointees of Republican presidents.

    The conservative appeals court sided with many of the arguments put forward by the plaintiffs, which included private individuals as well as Missouri and Louisiana, but also narrowed the injunction’s scope so that it only applied to the White House, the surgeon general, the CDC and the FBI. Doughty had included other agencies in his July order.

    This story has been updated with additional information.


  • The iPhone’s new Action Button is more than a one-trick pony | CNN Business

    CNN —

    The new iPhone 15 Pro lineup offers the typical slate of new features designed to persuade customers to upgrade: The phones are thinner and lighter than last year’s crop, the new cameras are professional-grade and the switch to USB-C charging will make your life easier.

    But one new feature easily stands out: The Action Button.

    Apple has repurposed its physical mute button on the side of its high-end models into a more customizable tool, allowing users to carry out a handful of commands, from recording a voice memo and taking a picture to turning on the flashlight. The button can also be programmed to launch any app or shortcut, essentially turning it into a remote control or launching pad to gain quick access to something you want on demand.

    In the days since Apple’s iPhone 15 event at its Cupertino, California, headquarters, I’ve used it to load a variety of apps in a single press, including CNN, Amazon and Instagram. The Action Button is certain to become a valuable resource for anyone who returns to an app time and again throughout the day.

    But it also has the potential to become an even more powerful tool; you could program it to play your favorite playlist, turn on the smart lights in your living room or use it to open the garage door. You could even turn it into a dedicated button to call mom. It builds on iOS’s existing offering of ready-made or custom shortcuts, and Apple is encouraging developers to build unique shortcuts that users could assign to the Action Button.

    The change is subtle, but it’s one of the few noticeable tweaks to the iPhone’s design this year. The Action Button is about the same size as the existing mute button, and users still hold it down to switch between muting and turning on the ringer. Commands are accompanied by visual feedback from the Dynamic Island bar, home to alerts and notifications at the top of the screen.

    The Action Button update, along with changes in the phone’s charging and camera systems, comes as Apple looks to give consumers more reasons to upgrade their iPhones. Last month, Apple’s sales fell for the third consecutive quarter. iPhone revenue came in at $39.7 billion for the quarter, marking an approximately 2% year-over-year decline, as people update their devices less often.

    Another selling point to splurge for the iPhone 15 Pro ($1,099) or iPhone 15 Pro Max ($1,199): The phones come with a titanium casing — the same alloy used to build the Mars rover — making them what Apple calls the thinnest and lightest Pro models to date. Apple’s entry-level iPhones, the iPhone 15 and iPhone 15 Plus, cost $799 and $899, respectively. The entire lineup starts shipping on Friday.

    To program the Action Button, iPhone 15 Pro users can visit the button’s section in Settings, scroll through and select from a series of functionalities — such as flashlight or camera. By picking the shortcuts option, however, users can sift through their list of apps or previously established commands.

    Once set, there’s a slight learning curve after years of habitually using the physical button to switch the sound on and off. For this reason, it could take a while for some of the iPhone’s loyalists to change how they use the device.

    The Action Button isn’t entirely new; the company unveiled it last year on the Apple Watch Ultra. Apple told CNN it was inspired to bring it to the iPhone after hearing anecdotes from users who said they consistently leave their phone on silent, rendering that button essentially useless. Considering iPhone usage has changed a lot since the iPhone debuted 16 years ago, revisiting a hallmark feature like the mute button was only a matter of time, according to the company.

    Ramon Llamas, a director at market research firm IDC, believes last week’s announcement is only the first step toward making the Action Button more dynamic. “I’d like to think that the Action Button could be expanded a bit more, like one click will take you to one feature; two clicks takes you to another, and three clicks gets you something else,” Llamas said. “But I think that would be it. Any more than that and you risk launching the wrong app, like Wordle, at the wrong time (when you need your camera the most),” he said.

    It’s also a strategic way for Apple to make the most of already tight real estate on the device, according to Llamas. Annette Zimmerman, a VP analyst with market research firm Gartner, agrees, noting that “having one button to do exactly one thing isn’t really progressive in a time where everything has multi-functionality and can be programmed.”

    Although it’s unclear whether the Action Button will come to more devices in the future, Apple is continuing its push to create a uniform ecosystem for its customers. Similarly, Apple is adding a Double Tap feature that allows people to use a finger gesture to control the Apple Watch, just months after it showed off a similar gesture on its upcoming Vision Pro mixed reality headset.

    For now, iPhone 15 Pro users will enjoy playing with the new Action Button. While the feature alone is not worth the upgrade, the switch to universal charging, a faster processor and advanced camera capabilities make the phone a solid package, especially if you haven’t upgraded in the last few years.


  • Indonesia bans e-commerce transactions on social media in major blow to TikTok | CNN Business

    Jakarta Reuters —

    Indonesia has banned e-commerce transactions on social media platforms, the trade minister said on Wednesday, in a blow to short video app TikTok, which is doubling down on Southeast Asia’s biggest economy to boost its e-commerce business.

    The government said the move, which takes effect immediately, is aimed at protecting offline merchants and marketplaces, adding that predatory pricing on social media platforms is threatening small and medium-sized enterprises.

    The move comes just three months after TikTok pledged to invest billions of dollars in Southeast Asia, mainly in Indonesia, over the next few years in a major push to build its e-commerce platform TikTok Shop.

    TikTok, owned by China’s ByteDance, has 125 million active monthly users in Indonesia and has been looking to translate the large user base into a major e-commerce revenue source.

    A TikTok Indonesia spokesperson said it would pursue a constructive path forward and was “deeply concerned” with the announcement, “particularly how it would impact the livelihoods of the 6 million” local sellers active on TikTok Shop.

    Indonesia Trade Minister Zulkifli Hasan on Wednesday told reporters that the regulation is intended to ensure “fair and just” business competition, adding that it was also intended to ensure data protection of users.

    He warned of letting social media become an e-commerce platform, shop and bank all at the same time.

    The new regulation also requires e-commerce platforms in Indonesia to set a minimum price of $100 for certain items that are directly purchased from abroad, according to the regulation document reviewed by Reuters, and that all products offered should meet local standards.

    Zulkifli said TikTok had one week to comply with the regulation or face the threat of closure. Indonesia Deputy Trade Minister Jerry Sambuaga earlier this month cited TikTok’s livestreaming feature as an example of social media being used to sell goods.

    Research firm BMI said TikTok would be the only business affected by the transaction ban and the move was unlikely to harm the digital marketplace industry’s growth.

    Indonesia’s e-commerce market is dominated by the likes of homegrown tech firm GoTo’s Tokopedia, Sea’s Shopee and Chinese e-commerce giant Alibaba’s Lazada.

    E-commerce transactions in Indonesia amounted to nearly $52 billion last year and of that, 5% took place on TikTok, according to data from consultancy Momentum Works.

    Indonesia is among the few markets where TikTok has launched TikTok Shop, as it seeks to leverage its large user base in the country.

    Its 125 million monthly active users in Indonesia put it almost on par with its user base in Europe, and behind the United States, where it has more than 150 million users. TikTok launched an online shopping service in the United States earlier this month.

    Reactions from retailers were mixed.

    Fahmi Ridho, a vendor selling clothes on TikTok, said the platform was a way for stores to recover from the blow dealt by the Covid-19 pandemic.

    “Sales don’t have to be necessarily through [brick and mortar] shops, you can do it online or wherever,” he said. “Everything will still have a portion.”

    But Edri, who goes by one name only and sells clothes at a major wholesale market in Jakarta, agreed with the regulation and stressed that there should be limits on items sold online.


  • SoftBank CEO says artificial general intelligence will come within 10 years | CNN Business

    Tokyo Reuters —

    SoftBank CEO Masayoshi Son said he believes artificial general intelligence (AGI), artificial intelligence that surpasses human intelligence in almost all areas, will be realized within 10 years.

    Speaking at the SoftBank World corporate conference, Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence. He noted the rapid progress in generative AI that he said has already exceeded human intelligence in certain areas.

    “It is wrong to say that AI cannot be smarter than humans as it is created by humans,” he said. “AI is now self learning, self training, and self inferencing, just like human beings.”

    Son has spoken of the potential of AGI — typically using the term “singularity” — to transform business and society for some years, but this is the first time he has given a timeline for its development.

    He also introduced at the conference the idea of “Artificial Super Intelligence,” which he claimed would be realized in 20 years and would surpass human intelligence by a factor of 10,000.

    Son is known for several canny bets that have turned SoftBank into a tech investment giant as well as some bets that have spectacularly flopped.

    He’s also prone to making strident claims about the transformative impact of new technologies. His predictions about the mobile internet have been largely borne out while those about the Internet of Things have not.

    Son called upon Japanese companies to “wake up” to the promise of AI, arguing they had increasingly fallen behind in the internet age and reiterated his belief in chip designer Arm as core to the “AI revolution.”

    Arm CEO Rene Haas, speaking at the conference via video, touted the energy efficiency of Arm’s designs, saying they would become increasingly sought after to power artificial intelligence.

    Son said he thinks he is the only person who believes AGI will come within a decade. Haas said he thought it would come in his lifetime.


  • TikTok steps up efforts to counter misinformation about Israel-Hamas war | CNN Business

    London CNN —

    TikTok is stepping up efforts to counter misinformation, incitement to violence and hate relating to the Israel-Hamas war on its online platform, it announced Sunday, days after the European Union (EU) warned social media companies they risked falling foul of the bloc’s content moderation laws.

    As part of its measures, TikTok is launching a command center to coordinate the work of its “safety professionals” around the world, improving the software it uses to automatically detect and remove graphic and violent content, and hiring more Arabic and Hebrew speakers to moderate content.

    TikTok said in a statement that, following the brutal attack by Hamas on Israeli civilians on October 7, it had “immediately mobilized significant resources and personnel to help maintain the safety of [its] community and integrity of [its] platform.”

    “We do not tolerate attempts to incite violence or spread hateful ideologies,” it added. “We have a zero-tolerance policy for content praising violent and hateful organizations and individuals.”

    The firm, owned by China’s ByteDance, said it had already removed more than 500,000 videos and shut down 8,000 livestream videos from the “impacted region” since the Hamas attack.

    As the conflict escalates — Israel has blocked the provision of electricity, food, fuel and water to Gaza, and has been signaling it is preparing for a ground invasion of the area — millions have turned to social media for updates, while misinformation has proliferated on these sites.

    One recent TikTok video, seen by more than 300,000 users and reviewed by CNN, promoted conspiracy theories about the origins of the Hamas attack, including false claims that it was orchestrated by the media.

    Last week, the EU told social media companies they needed to better protect “children and teenagers from violent content and terrorist propaganda” on their platforms.

    EU Commissioner Thierry Breton wrote to TikTok Thursday, in a letter shared on X, the platform formerly known as Twitter, saying the company had 24 hours to detail the steps it was taking to comply with EU rules on content moderation. Breton has sent similar letters to X, Google and Meta, the owner of Instagram and Facebook.


  • Biden administration defends communications with social media companies in high-stakes court fight | CNN Business

    Washington, DC CNN —

    The Biden administration on Thursday defended its communications with social media giants in court, arguing those channels must stay open so that the federal government can help protect the public from threats to election security, Covid-19 misinformation and other dangers.

    The closely watched court fight reflects how social media has become an informational battleground for major social issues. It has revealed the messy challenges for social media companies as they try to manage the massive amounts of information on their platforms.

    And it has highlighted warnings by independent researchers, watchdog groups and government officials that malicious actors will continue to try to disrupt the country’s democracy by flooding the internet with bogus and divisive material ahead of the 2024 elections.

    In oral arguments before a New Orleans-based federal appeals court, the US government challenged a July injunction that blocked several federal agencies from discussing certain social media posts and sharing other information with online platforms, amid allegations by state governments that those communications amounted to a form of unconstitutional censorship.

    The appeals court last month temporarily blocked the injunction from taking effect. But the outcome of Thursday’s arguments will determine the ultimate fate of the order, which placed new limits on the Departments of Homeland Security, Health and Human Services and other federal agencies’ ability to coordinate with tech companies and civil society groups.

    If upheld by the US Court of Appeals for the Fifth Circuit, the injunction would suppress a broad range of public-private partnerships and undermine the US government’s mission to protect the public, the Biden administration argued.

    “For example, if there were a natural disaster, and there were untrue statements circulating on social media that were damaging to the public interest, the government would be powerless under the injunction to discourage social media companies from further disseminating those incorrect statements,” said Daniel Tenny, a Justice Department lawyer.

    Now, a three-judge panel of the Fifth Circuit is set to decide how executive agencies may respond to those threats.

    At issue is whether the US government unconstitutionally pressured social media platforms into censoring users’ speech, particularly when the government flagged posts to the platforms that it believed violated the companies’ own terms of service.

    During more than an hour of oral arguments Thursday, the three judges handling the appeal gave little indication of how they would rule in the case, with one judge asking just a couple of questions during the hearing. The other two spent much of the time pressing attorneys for the Biden administration and the plaintiffs in the case on issues concerning the scope of the injunction and whether the states even had the legal right – or standing – to bring the lawsuit.

    Before them is not only the request to reverse the lower court injunction, but also one from the administration to issue a more lasting pause on that injunction while the judges weigh the challenge to it.

    In briefs submitted to the court ahead of Thursday’s hearing, the Biden administration argued that a lower court judge was wrong to have identified the government communications with social media companies as potentially, in his words, “the most massive attack against free speech in United States’ [sic] history.”

    “There is a categorical, well-settled distinction between persuasion and coercion,” the administration’s lawyers wrote, adding that the lower court “equated legitimate efforts at persuasion with illicit efforts to coerce.”

    The administration’s opponents in the case, which include the states of Missouri and Louisiana, have argued that the federal government’s communications with social media companies are a violation of the First Amendment because even “‘encouragement short of compulsion’ can transform private conduct [by social media companies] into government action” that infringes on users’ speech rights.

    “Every one of these federal agencies has insinuated themselves into the content moderation decisions of major social media platforms,” D. John Sauer, an attorney representing the state of Louisiana, told the judges on Thursday. Hypothetically speaking, he added: “The Surgeon General can say, ‘All this speech is terrible, it’s awful.’ …. But what he can’t do is pick up the phone and say, ‘Take it down.’”

    In addition to the states, five individuals are also plaintiffs in the suit. They include three doctors who have been critical of state and federal pandemic-era restrictions, a Louisiana woman who claims she was censored by social media companies for her online criticisms of Covid health measures and a man who runs a far-right website known for pushing conspiracy theories.

    Much of Thursday’s oral arguments hinged on the definition of coercive communication and how courts have analyzed government pressure against private parties in past cases.

    But the states also claimed that there could be a pathway to finding a constitutional violation if the court agreed that social media companies, in heeding the administration’s calls to action, had been effectively turned into agents of the US government.

    In the past month, after District Judge Terry Doughty issued his injunction, current and former US officials, along with outside researchers and academics, have worried that the order could lead to a chilling effect for efforts to protect US elections.

    “There is no serious dispute that foreign adversaries have and continue to attempt to interfere in our elections and that they use social media to do it,” FBI Director Christopher Wray testified to the House Judiciary Committee in July. “President Trump himself in 2018 declared a national emergency to that very effect, and the Senate Intelligence Committee — in a bipartisan, overwhelmingly bipartisan way — not only found the same thing but called for more information-sharing between us and the social media.”

    Ohio Republican Rep. Jim Jordan, the panel’s chair, remains unconvinced. Earlier this week, he and other Republican lawmakers filed their own brief to the appeals court, accusing the Biden administration of a campaign to stifle speech.

    “On issue after issue, the Biden Administration has distorted the free marketplace of ideas promised by the First Amendment, bringing the weight of federal authority to bear on any speech it dislikes—including memes and jokes,” Jordan and the other lawmakers wrote. “Of course, Big Tech companies often required little coercion to do the Administration’s bidding on some issues. Generally eager to please their ideological allies and overseers in the federal government, these companies and other private entities have repeatedly censored accurate speech on important public issues.”


  • Schools are teaching ChatGPT, so students aren’t left behind | CNN Business

    New York CNN —

    When college administrator Lance Eaton created a working spreadsheet about the generative AI policies adopted by universities last spring, it was mostly filled with entries about how to ban tools like ChatGPT.

    But now the list, which is updated by educators at both small and large US and international universities, is considerably different: Schools are encouraging and even teaching students how to best use these tools.

    “Earlier on, we saw a kneejerk reaction to AI by banning it going into spring semester, but now the talk is about why it makes sense for students to use it,” Eaton, an administrator at Rhode Island-based College Unbound, told CNN.

    He said his growing list continues to be discussed and shared in popular AI-focused Facebook groups, such as Higher Ed Discussions of Writing and AI, and the Google group AI in Education.

    “It’s really helped educators see how others are adapting to and framing AI in the classroom,” Eaton said. “AI is still going to feel uncomfortable, but now they can go in and see how a university or a range of different courses, from coding to sociology, are approaching it.”

    With more experts expecting the continued adoption of artificial intelligence, professors now fear that ignoring or discouraging its use will be a disservice to students and leave many behind when they enter the workforce.

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists and passed exams at esteemed universities. The technology, and similar tools such as Google’s Bard, is trained on vast amounts of online data in order to generate responses to user prompts. While they gained traction among users, the tools also raised some concerns about inaccuracies, cheating, the spreading of misinformation and the potential to perpetuate biases.

    According to a study conducted by higher education research group Intelligent.com, about 30% of college students used ChatGPT for schoolwork this past academic year and it was used most in English classes.

    Jules White, an associate professor of computer science at Vanderbilt University, believes professors should be explicit in the first few days of school about the course’s stance on using AI, and that the policy should be included in the syllabus.

    “It cannot be ignored,” he said. “I think it’s incredibly important for students, faculty and alumni to become experts in AI because it will be so transformative across every industry and in demand, so we provide the right training.”

    Vanderbilt is among the early leaders taking a strong stance in support of generative AI, offering university-wide training and workshops to faculty and students. A three-week, 18-hour online course taught by White this summer was taken by more than 90,000 students, and his paper on “prompt engineering” best practices is routinely cited among academics.

    “The biggest challenge is with how you frame the instructions, or ‘prompts,’” he said. “It has a profound impact on the quality of the response and asking the same thing in various ways can get dramatically different results. We want to make sure our community knows how to effectively leverage this.”
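    White’s point about framing can be illustrated with a small, hypothetical sketch (not drawn from his course materials): the same request assembled with increasing amounts of structure. Adding a role, constraints and an output format is a common way to make results more predictable.

```python
# Hypothetical prompt-framing helper: the function name and the example
# prompts are illustrative, not from any specific curriculum.

def build_prompt(task, role=None, constraints=None, output_format=None):
    """Assemble a prompt string from optional framing elements."""
    parts = []
    if role:
        parts.append("You are " + role + ".")
    parts.append(task)
    for constraint in constraints or []:
        parts.append("Constraint: " + constraint)
    if output_format:
        parts.append("Respond as " + output_format + ".")
    return "\n".join(parts)

# A bare request leaves the model free to answer however it likes.
vague = build_prompt("Summarize the causes of the French Revolution.")

# The same request with a role, constraints and an output format.
framed = build_prompt(
    "Summarize the causes of the French Revolution.",
    role="a history tutor for first-year undergraduates",
    constraints=["Cover fiscal, social and political causes",
                 "Stay under 150 words"],
    output_format="three short bullet points",
)

print(framed)
```

    Sent to the same model, the two strings ask for the same facts but the framed version constrains tone, scope and length, which is the kind of variation White describes.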

    Prompt engineering jobs, which typically require basic programming experience, can pay up to $300,000.

    Although White said concerns around cheating still exist, he believes students who want to plagiarize can seek out other methods, such as Wikipedia or Google searches. Instead, students should be taught that “if they use it in other ways, they will be far more successful.”

    Diane Gayeski, a professor of communications at Ithaca College, said she plans to incorporate ChatGPT and other tools in her fall curriculum, similar to her approach in the spring. She previously asked students to collaborate with the tool to come up with interview questions for assignments, write social media posts and critique the output based on the prompts given.

    “My job is to prepare students for PR, communications and social media managers, and people in these fields are already using AI tools as part of their everyday work to be more efficient,” she said. “I need to make sure they understand how they work, but I do want them to cite when ChatGPT is being used.”

    Gayeski added that as long as there is transparency, there should be no shame in adopting the technology.

    Some schools are hiring outside experts to teach both faculty and students about how to use AI tools. Tyler Tarver, a former high school principal who now teaches educators about tech tool strategies, said he’s made over 50 speeches at schools and conferences across Texas, Arkansas and Illinois over the past few months. He also offers an online three-hour training for educators.

    “Teachers need to learn how to use it because even if they never use it, their students will,” Tarver said.

    Tarver said that he teaches students, for example, how the tools can be used to catch grammar mistakes, and how teachers can use them to assist with grading. “It can cut down on teacher bias,” Tarver said.

    He argues teachers could grade students a certain way even if they’ve improved over time. By running an assignment through ChatGPT and asking it to grade the sentence structure on a scale from one to 10, the response could “serve as a second pair of eyes to make sure they’re not missing anything,” Tarver said.
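    A minimal sketch of that “second pair of eyes” workflow, with a hypothetical helper name and rubric wording (not taken from Tarver’s training), is simply a function that builds the grading prompt a teacher would paste into a chatbot:

```python
# Illustrative only: assembles a prompt asking a chatbot to score one
# criterion from 1 to 10. The score informs, not replaces, the grade.

def grading_prompt(assignment_text, criterion="sentence structure"):
    """Build a chatbot prompt that scores a single rubric criterion."""
    return (
        "Grade the following student assignment on " + criterion +
        " on a scale from 1 to 10. Give the number first, then one "
        "sentence explaining the score.\n\n"
        "Assignment:\n" + assignment_text
    )

prompt = grading_prompt("The cat sat. The cat sat again.")
print(prompt)
```

    Keeping the rubric to one criterion per prompt mirrors Tarver’s framing: a narrow, checkable question rather than a request for a final grade.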

    “That shouldn’t be the final grade. Teachers shouldn’t use it to cheat or cut corners either, but it can help inform grading,” he said. “The bottom line is that this is like when the car was invented. You don’t want to be the last person in the horse and buggy.”


  • AI tools make things up a lot, and that’s a huge problem | CNN Business

    CNN —

    Before artificial intelligence can take over the world, it has to solve one problem. The bots are hallucinating.

    AI-powered tools like ChatGPT have mesmerized us with their ability to produce authoritative, human-sounding responses to seemingly any prompt. But as more people turn to this buzzy technology for things like homework help, workplace research, or health inquiries, one of its biggest pitfalls is becoming increasingly apparent: AI models often just make things up.

    Researchers have come to refer to this tendency of AI models to spew inaccurate information as “hallucinations,” or even “confabulations,” as Meta’s AI chief said in a tweet. Some social media users, meanwhile, simply blast chatbots as “pathological liars.”

    But all of these descriptors stem from our all-too-human tendency to anthropomorphize the actions of machines, according to Suresh Venkatasubramanian, a professor at Brown University who helped co-author the White House’s Blueprint for an AI Bill of Rights.

    The reality, Venkatasubramanian said, is that large language models — the technology underpinning AI tools like ChatGPT — are simply trained to “produce a plausible sounding answer” to user prompts. “So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces,” he said. “There is no knowledge of truth there.”
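Venkatasubramanian’s point can be made concrete with a toy example: a language model samples the next token from a plausibility distribution, and nothing in that sampling step consults a source of truth. The probabilities below are invented for illustration, not taken from any real model:

```python
import random

# Hypothetical next-token distribution for the prompt "The capital of France is"
next_token_probs = {
    "Paris": 0.55,      # plausible and true
    "Lyon": 0.25,       # plausible but false
    "Marseille": 0.20,  # plausible but false
}

def sample_next_token(probs: dict) -> str:
    # Pick a token in proportion to its probability; no fact-checking occurs.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

completion = sample_next_token(next_token_probs)  # sometimes "Lyon": fluent, confident, wrong
```

Because wrong continuations are drawn by the same mechanism as right ones, the output carries no built-in signal of its own unreliability.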

The AI researcher said a better behavioral analogy than hallucinating or lying, which carry connotations of something being wrong or done with ill intent, would be comparing these computer outputs to the way his young son would tell stories at age four. “You only have to say, ‘And then what happened?’ and he would just continue producing more stories,” Venkatasubramanian said. “And he would just go on and on.”

Companies behind AI chatbots have put some guardrails in place that aim to prevent the worst of these hallucinations. But despite the global hype around generative AI, many in the field remain torn about whether or not chatbot hallucinations are even a solvable problem.

    Simply put, a hallucination refers to when an AI model “starts to make up stuff — stuff that is not in-line with reality,” according to Jevin West, a professor at the University of Washington and co-founder of its Center for an Informed Public.

    “But it does it with pure confidence,” West added, “and it does it with the same confidence that it would if you asked a very simple question like, ‘What’s the capital of the United States?’”

    This means that it can be hard for users to discern what’s true or not if they’re asking a chatbot something they don’t already know the answer to, West said.

    A number of high-profile hallucinations from AI tools have already made headlines. When Google first unveiled a demo of Bard, its highly anticipated competitor to ChatGPT, the tool very publicly came up with a wrong answer in response to a question about new discoveries made by the James Webb Space Telescope. (A Google spokesperson at the time told CNN that the incident “highlights the importance of a rigorous testing process,” and said the company was working to “make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”)

    A veteran New York lawyer also landed in hot water when he used ChatGPT for legal research, and submitted a brief that included six “bogus” cases that the chatbot appears to have simply made up. News outlet CNET was also forced to issue corrections after an article generated by an AI tool ended up giving wildly inaccurate personal finance advice when it was asked to explain how compound interest works.
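For reference, the compound-interest computation CNET’s tool stumbled on is a one-liner; a minimal sketch with illustrative numbers (not the figures from the CNET article):

```python
def compound_balance(principal: float, annual_rate: float, years: float,
                     periods_per_year: int = 12) -> float:
    """Balance after compound interest: P * (1 + r/n) ** (n * t)."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

# $10,000 at a 3% annual rate compounded monthly for one year earns about $304
# in interest: 3% of the principal plus a little extra from compounding.
interest = compound_balance(10_000, 0.03, 1) - 10_000
```

With annual compounding (`periods_per_year=1`) the same deposit earns exactly $300, which is why the monthly figure is only slightly higher.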

Cracking down on AI hallucinations, however, could limit AI tools’ ability to help people with more creative endeavors — like users who ask ChatGPT to write poetry or song lyrics.

    But there are risks stemming from hallucinations when people are turning to this technology to look for answers that could impact their health, their voting behavior, and other potentially sensitive topics, West told CNN.

    Venkatasubramanian added that at present, relying on these tools for any task where you need factual or reliable information that you cannot immediately verify yourself could be problematic. And there are other potential harms lurking as this technology spreads, he said, like companies using AI tools to summarize candidates’ qualifications and decide who should move ahead to the next round of a job interview.

    Venkatasubramanian said that ultimately, he thinks these tools “shouldn’t be used in places where people are going to be materially impacted. At least not yet.”

    How to prevent or fix AI hallucinations is a “point of active research,” Venkatasubramanian said, but at present is very complicated.

    Large language models are trained on gargantuan datasets, and there are multiple stages that go into how an AI model is trained to generate a response to a user prompt — some of that process being automatic, and some of the process influenced by human intervention.

    “These models are so complex, and so intricate,” Venkatasubramanian said, but because of this, “they’re also very fragile.” This means that very small changes in inputs can have “changes in the output that are quite dramatic.”

    “And that’s just the nature of the beast, if something is that sensitive and that complicated, that comes along with it,” he added. “Which means trying to identify the ways in which things can go awry is very hard, because there’s so many small things that can go wrong.”

    West, of the University of Washington, echoed his sentiments, saying, “The problem is, we can’t reverse-engineer hallucinations coming from these chatbots.”

“It might just be an intrinsic characteristic of these things that will always be there,” West said.

    Google’s Bard and OpenAI’s ChatGPT both attempt to be transparent with users from the get-go that the tools may produce inaccurate responses. And the companies have expressed that they’re working on solutions.

    Earlier this year, Google CEO Sundar Pichai said in an interview with CBS’ “60 Minutes” that “no one in the field has yet solved the hallucination problems,” and “all models have this as an issue.” On whether it was a solvable problem, Pichai said, “It’s a matter of intense debate. I think we’ll make progress.”

And Sam Altman, CEO of ChatGPT-maker OpenAI, predicted during remarks in June at India’s Indraprastha Institute of Information Technology, Delhi, that it will take a year-and-a-half or two years to “get the hallucination problem to a much, much better place.” “There is a balance between creativity and perfect accuracy,” he added. “And the model will need to learn when you want one or the other.”

    In response to a follow-up question on using ChatGPT for research, however, the chief executive quipped: “I probably trust the answers that come out of ChatGPT the least of anybody on Earth.”


  • Google’s antitrust showdown: What’s at stake for the internet search titan | CNN Business




    CNN
     — 

    Google will face off in court Tuesday against government officials who have accused the company of antitrust violations in its massive search business, kicking off a long-anticipated legal showdown that could reshape one of the internet’s most dominant platforms.

    The trial beginning this week in Washington before a federal judge marks the culmination of two ongoing lawsuits against Google that started during the Trump administration. Legal experts describe the actions as the country’s biggest monopolization case since the US government took on Microsoft in the 1990s.

In separate complaints, the Justice Department and dozens of states accused Google in 2020 of abusing its dominance in online search by allegedly harming competition through deals with wireless carriers and smartphone makers that made Google Search the default or exclusive option on products used by millions of consumers. The complaints were eventually consolidated into a single case.

    Google has maintained that it competes on the merits and that consumers prefer its tools because they are the best, not because it has moved to illegally restrict competition. Google’s search business provides more than half of the $283 billion in revenue and $76 billion in net income Google’s parent company, Alphabet, recorded in 2022. Search has fueled the company’s growth to a more than $1.7 trillion market capitalization.

    Now, the company is set to defend itself in a multiweek trial that could upend the way Google distributes its search engine to users. The case is expected to feature testimony from high-profile witnesses including former employees of Google and Samsung, along with executives from Apple, including senior vice president Eddy Cue. It is the first case to go to trial in a series of court challenges targeting Google’s far-reaching economic power, testing the willingness of courts to clamp down on large tech platforms.

    “This is a backwards-looking case at a time of unprecedented innovation,” said Google President of Global Affairs Kent Walker, “including breakthroughs in AI, new apps and new services, all of which are creating more competition and more options for people than ever before. People don’t use Google because they have to — they use it because they want to. It’s easy to switch your default search engine — we’re long past the era of dial-up internet and CD-ROMs.”

    The trial may also be a bellwether for the more assertive antitrust agenda of the Biden administration.

    In its initial complaint, the US government alleged in part that Google pays billions of dollars a year to device manufacturers including Apple, LG, Motorola and Samsung — and browser developers like Mozilla and Opera — to be their default search engine and in many cases to prohibit them from dealing with Google’s competitors.

    As a result, the complaint alleges, “Google effectively owns or controls search distribution channels accounting for roughly 80 percent of the general search queries in the United States.”

    The lawsuit also alleges that Google’s Android operating system deals with device makers are anticompetitive, because they require smartphone companies to pre-install other Google-owned apps, such as Gmail, Chrome or Maps.

    At the time the lawsuit was first filed, US antitrust officials did not rule out the possibility of a Google breakup, warning that Google’s behavior could threaten future innovation or the rise of a Google successor.

    Separately, a group of states, led by Colorado, made additional allegations against Google, claiming that the way Google structures its search results page harms competition by prioritizing the company’s own apps and services over web pages, links, reviews and content from other third-party sites.

    But the judge overseeing the case, Judge Amit Mehta in the US District Court for the District of Columbia, tossed out those claims in a ruling last month, narrowing the scope of allegations Google must defend and saying the states had not done enough to show a trial was necessary to determine whether Google’s search results rankings were anticompetitive.

Despite that ruling, the trial represents the US government’s furthest progress in challenging Google to date. Mehta has said Google’s pole position among search engines on browsers and smartphones “is a hotly disputed issue” and that the trial will determine “whether, as a matter of actual market reality, Google’s position as the default search engine across multiple browsers is a form of exclusionary conduct.”

In January, meanwhile, the Biden administration launched another antitrust suit against Google targeting the company’s advertising technology business, accusing it of maintaining an illegal monopoly. That case remains in its early stages at the US District Court for the Eastern District of Virginia.


  • Epic Games to lay off 16% of its workforce | CNN Business




    CNN
     — 

    Epic Games, the maker of Fortnite, said on Thursday that it will lay off 16% of its staff, around 830 employees, as it attempts to reverse what CEO Tim Sweeney called “unrealistic” spending.

    In a letter to employees Thursday, Sweeney said the video game company had been “spending way more money than we earn, investing in the next evolution of Epic.”

    “I had long been optimistic that we could power through this transition without layoffs, but in retrospect I see that this was unrealistic,” Sweeney said in the letter, which the company shared publicly. He added that Epic plans to divest from the online independent music platform Bandcamp, which it bought last year and which will now be acquired by the music marketplace firm Songtradr. Epic will also spin off most of its marketing division SuperAwesome into a standalone company.

Epic’s layoffs are just the latest job cuts to hit the tech industry, which was forced to adjust after the stunning growth many companies saw during the height of the Covid-19 pandemic began to slow. Meta, Microsoft, T-Mobile, Lyft and others all reduced their workforces earlier this year. More recently, Google parent Alphabet made its second round of layoffs of the year, eliminating several hundred recruiting jobs in September after having cut 12,000 employees in January.

    About two-thirds of Epic’s Thursday layoffs will impact employees outside the company’s “core development” teams, Sweeney said. Some laid off workers announced on LinkedIn that they had been affected, including employees working in user experience for Fortnite, production, employee engagement and recruitment.

    Laid off employees will receive a severance offer that includes six months of base pay, accelerated stock vesting and other benefits, according to Sweeney.

    “We’re cutting costs without breaking development or our core lines of businesses so we can continue to focus on our ambitious plans,” Sweeney said. “Some of our products and initiatives will land on schedule, and some may not ship when planned because they are under-resourced for the time being. We’re ok with the schedule tradeoff if it means holding on to our ability to achieve our goals.”

    The Epic layoffs also come amid the latest escalation in a protracted legal battle between the video game company and tech giant Apple. Following a yearslong back-and-forth over an antitrust lawsuit brought by Epic over Apple’s App Store payment practices, both companies have asked the US Supreme Court to review a lower court ruling in the case.


  • Google unveils Pixel 8 built for ‘the generative AI era’ | CNN Business




    CNN
     — 

    There’s nothing particularly new about Google’s latest-generation Pixel 8 smartphone hardware. That’s why the company is pushing hard to tout its AI-powered new software, which Google says was built specifically for the “first phone of the generative AI era.”

At a press event in New York City, Google (GOOG) showed off the new Pixel 8 and Pixel 8 Pro devices, which largely look the same as the year prior, albeit with more rounded edges. But inside, its new G3 Tensor chip unlocks an AI-powered world aimed at simplifying your life, from asking the device to summarize news articles and websites to using Google Assistant to field phone calls and tweaking photos to move or resize objects.

The 6.3-inch Pixel 8 and the 6.7-inch Pixel 8 Pro come with a brighter display, a new camera system and longer-lasting battery life. The Pixel 8 is available in three colors – hazel, rose and obsidian – and starts at $699, about $100 less than the baseline iPhone 15 with the same amount of storage. (That’s about $100 more than last year’s Pixel 7.)

    Meanwhile, the Pixel 8 Pro – which touts a polished aluminum frame and a matte back glass this year – now has the ability to take better low-light photos and sharper selfies. It starts at $999 – the same price as the iPhone 15 Pro – and is available in three colors: bay, porcelain and obsidian.

    Although these upgrades are mostly incremental, the AI enhancements and related features may appeal to tech enthusiasts who want the latest version of Android and an alternative to Apple or Samsung smartphones.

    At the same time, Google’s Pixel line remains a niche product. Its global market share for smartphones remains about 1%, according to data from ABI Research. Google also limits sales to only a handful of countries, so keeping the volume low has been strategic as Google remains predominantly a software company with many partners running Android.

    Reece Hayden, an analyst at ABI Research, said Google is looking to establish itself as an early market leader amid the “generative AI-related hysteria,” which kicked into high gear late last year with the introduction of ChatGPT. Generative AI refers to a type of artificial intelligence that can create new content, such as text and images, in response to user prompts.

    “[Adding it to the Pixel] creates further product differentiation by leveraging internal capabilities that Apple may not have,” said Hayden.

    He expects this announcement to be the first of many similar efforts coming to hardware over the next year, especially among brands who’ve already made investments in this area.

    Here’s a closer look at what Google announced and some of the standout new AI features:

    A Google employee demonstrates manual focus features of the new Google Pixel 8 Pro Phone in New York City, U.S., October 4, 2023.

Google showed off a handful of photo features coming to its Pixel line, including Magic Editor, which uses generative AI to reposition and resize a subject. Similarly, a new Audio Magic Eraser tool lets users erase distracting sounds from videos.

Another tool called Best Take snaps a series of photos and then aggregates the faces into one shot so everyone looks their best. And a new enhanced zoom feature lets users pinch to zoom in about 30 times after a photo is taken to focus in on and edit a specific area.

    The company said these efforts aim to “let you capture every moment just how you want to remember it.”

    Although the tools intend to give users more control over their photos, some analysts like Thomas Husson at market research firm Forrester believe it will be harder to distinguish between what’s real and what’s not.

    “The fact that Google refers to a ‘Magic Eraser’ will blur the distinction between real photos and heavily edited ones,” Husson said. But he warns an uptick in deepfake apps already makes it hard to decipher the authenticity of some shots. “You don’t really need Google AI for that.”

The company said Google Assistant will now sound more realistic when it engages with callers. Google’s screen call tool already lets Assistant field incoming calls, speak to callers and determine who’s on the line before pushing it through to the user. But its robotic voice will sound increasingly natural, the company said.

    Google is also bringing the capabilities of its Bard AI chatbot to Google Assistant, so it will be able to do more than set an alarm or tell the weather. With its new generative AI capabilities, it will be able to review important emails in a user’s inbox or reveal more about a hotel that popped up on their Instagram feed. Assistant will also be able to understand user questions in voice, text and images.

    “With generative AI on the scene, it’s really creating a lot of new opportunities to build an even more intuitive and intelligent and personalized digital assistant,” Sissie Hsiao, general manager for Google Assistant and Bard, told CNN.

    In addition to making Assistant more useful, the tool will make it easier for more users to interact with Google’s six-month-old Bard on interfaces they may already frequently engage with. Last month, Google rolled out a major expansion of Bard, allowing users to link the tool to their Gmail and other Google Workspace tools and making it easier to fact check the AI’s responses.

    Google launched Assistant with Bard to a small test group on Wednesday, and it will be more widely available to Android and iOS users in the coming months.

    AI is also getting smarter on the Pixel Watch 2 ($349), its second-generation smartwatch. Users can use Bard capabilities via an upgraded Google Assistant watch app to ask it how they slept and get other health insights.

In addition, the Pixel Watch 2 features a new heart rate sensor, which works alongside a new AI-driven heart rate algorithm to provide a more accurate heart rate reading than before. But Hayden said he doesn’t think more AI will add much to its existing value proposition.

    “Smart watches already include a fair amount of AI, and Pixel is no different,” he said.


  • Adobe previews new AI editing tools | CNN Business



    New York
    CNN
     — 

    Photo-editing software maker Adobe unveiled a slew of new AI-powered tools and features last week at its annual Max event, including a dress that transforms into a wearable screen and streamlined ways to delete elements from photos.

The company previewed a series of prototype tools that make use of both generative AI and 3D image technology in the Adobe MAX Sneaks showcase. Covering photo, audio, video, 3D, fashion and design, the new capabilities are meant to give the public a sneak peek at early-stage ideas that might one day become widely used components of Adobe products.

    A highlight of the event was Adobe’s Project Primrose, an interactive dress that shifts into different colors and patterns as it’s worn.

Other previewed items include a tool labeled Project Stardust that automatically detects each object in an image and lets users perform a variety of tasks. For example, it can spot a suitcase within a photo to then be moved or deleted, or predict and prompt likely tasks, such as deleting people from the background of an image.

A screenshot of Project Stardust, a tool unveiled as part of Adobe’s annual Max event.

Also on display was Project Dub Dub Dub, technology that can automatically dub audio over a video into all supported languages while preserving the speaker’s voice, along with a new tool that previews what applying Adobe’s text-to-image generative AI tool Firefly to videos might look like.

    Adobe first began adding Firefly into a Photoshop beta app in May, with the goal of “dramatically accelerating” how users edit their photos. It allows users to add or delete elements from images with just a text prompt. It can also match the lighting and style of the existing images automatically, the company said.


  • Illinois Supreme Court upholds state’s assault-style weapons ban | CNN Politics




    CNN
     — 

    The Illinois Supreme Court on Friday upheld the state’s assault-style weapons ban in a 4-3 ruling after months of legal challenges sought to dismantle the law.

    State lawmakers in January passed, and Democratic Gov. J.B. Pritzker signed into law, a measure to ban assault-style rifles and high-capacity magazines. Those who already own such rifles face limitations on their sale and transfer and must register them with the Illinois State Police by 2024.

    That law – which came about six months after the July 2022 Highland Park, Illinois, shooting – faced immediate lawsuits in state and federal court that argued it violated the Illinois and US constitutions.

    A Macon County Circuit Court judge found earlier this year that exemptions to the law, including for law enforcement officers and armed guards at federally supervised nuclear sites, violated the equal protection clause of the state’s constitution.

    The Illinois Supreme Court agreed to fast-track the state’s appeal, and in a 20-page opinion, reversed the circuit court’s judgment. The majority’s opinion claimed to focus on two core issues brought by the plaintiffs: Whether the law violated the plaintiffs’ right to equal protection and if it constituted special legislation that created laws for some firearms owners and not others. The majority opinion notably did not decide if the ban violated the Second Amendment, asserting that the plaintiffs had waived this issue.

    “We express no opinion on the potential viability of plaintiffs’ waived claim concerning the Second Amendment,” they wrote.

    However, one of the plaintiffs’ attorneys, Jerry Stocks, told CNN the majority justices misrepresented their arguments. Stocks said the Second Amendment is a fundamental right inextricably linked to their arguments and thus should have weighed heavily on scrutiny of the ban. Ignoring the issue altogether was improper, he said.

    “We have a circus in Illinois and the clowns are in charge right now,” Stocks said.

    Illinois Attorney General Kwame Raoul said the new law is a “critical part” of the state’s efforts to combat gun violence, and Pritzker’s office hailed the decision to uphold “a commonsense gun reform law to keep mass-killing machines off of our streets and out of our schools, malls, parks, and places of worship.”

    Nancy Rotering, the Democratic mayor of Highland Park, called on Congress to act on tougher federal restrictions and said Friday’s decision “sends a message to residents that saving lives takes precedence over thoughts and prayers and acknowledges the importance of sensible gun control measures.”

    Illinois has struggled to restrict the flow of illegal guns, particularly in Chicago, while officials in the state have faced legal hurdles to implementing new gun restrictions.

Gun rights advocates challenged the assault-style weapons ban and asked the US Supreme Court to block it – along with a city ordinance passed last year by Naperville, Illinois, that bans the sale of assault rifles – but the high court refused to intervene in May.

    This story has been updated with additional details.


  • Baidu and SenseTime launch ChatGPT-style AI bots to the public | CNN Business



    Hong Kong
    CNN
     — 

    Chinese tech firms Baidu and SenseTime launched their ChatGPT-style AI bots to the public on Thursday, marking a new milestone in the global AI race.

    Baidu has opened public access to its ERNIE Bot, allowing users to conduct AI-powered searches or carry out an array of tasks, from creating videos to providing summaries of complex documents.

    The news sent its shares 3.1% higher in New York on Wednesday and 4.7% higher in Hong Kong on Thursday.

    Baidu (BIDU) is among the first companies in China to get regulatory approval for the rollout, and it is the first to launch this type of service publicly, according to a person familiar with the matter.

    Until Thursday, ERNIE Bot, also called “Wenxin Yiyan” in Chinese, had been offered only to corporate clients or select members of the public who requested access through a waitlist.

Meanwhile, SenseTime, an AI startup based in Hong Kong, also announced the public launch of its SenseChat platform on Thursday. The company’s shares surged 4% in Hong Kong following the news.

    “We are pleased to announce that starting today, it is fully available to serve all users,” a SenseTime spokesperson told CNN in a statement.

    China published new rules on generative AI in July, becoming one of the world’s first countries to regulate the industry. The measures took effect on August 15.

    Baidu has been a frontrunner in China in the race to capitalize on the excitement around generative artificial intelligence, the technology that underpins systems such as ChatGPT or its successor, GPT-4. The latter has impressed users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Baidu announced its own iteration in February, giving it an early advantage in China, according to analysts. It unveiled ERNIE a month later, showing how it could generate a newsletter, come up with a corporate slogan and solve a math riddle.

    Since then, competitors such as Alibaba (BABA) and SenseTime have announced plans to launch their own ChatGPT-style tools, adding to the list of Chinese businesses jumping on the bandwagon. Alibaba told CNN Thursday that it had filed for regulatory approval for its own bot, which was introduced in April.

    The company is now waiting to officially launch and “the initial list of companies that have received the approval is expected to be released by relevant local departments within one week,” said an Alibaba Cloud spokesperson.

    Some critics say the new offerings from Chinese firms will add fuel to an existing US-China rivalry in emerging technologies. Baidu CEO Robin Li has tried to shake off that comparison, saying previously that the company’s platform “is not a tool for the confrontation between China and the United States.”

    The firm’s new feature — which will be embedded in its popular search engine, among its other offerings — follows a similar feature introduced by Alphabet’s Google (GOOGL) in May, which allows users to search the web using its AI chatbot.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses, such as text, images, audio and video.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    While ERNIE Bot is available globally, its interface is in Chinese, though users will be able to enter both Chinese and English prompts, a Baidu spokesperson told CNN.

    SenseTime, which unveiled its service in April, has touted a range of features, which it says allow users to write or debug code more efficiently or receive personalized medical advice from a virtual health consultation assistant.


  • George R. R. Martin, Jodi Picoult and other famous writers join Authors Guild in class action lawsuit against OpenAI | CNN Business



    New York
    CNN
     — 

    A group of famous fiction writers joined the Authors Guild in filing a class action suit against OpenAI on Wednesday, alleging the company’s technology is illegally using their copyrighted work.

    The complaint claims that OpenAI, the company behind viral chatbot ChatGPT, is copying famous works in acts of “flagrant and harmful” copyright infringement and feeding manuscripts into algorithms to help train systems on how to create more human-like text responses.

    George R.R. Martin, Jodi Picoult, John Grisham and Jonathan Franzen are among the 17 prominent authors who joined the suit led by the Authors Guild, a professional organization that protects writers’ rights. Filed in the Southern District of New York, the suit alleges that OpenAI’s models directly harm writers’ abilities to make a living wage, as the technology generates texts that writers could be paid to pen, as well as uses copyrighted material to create copycat work.

    “Generative AI threatens to decimate the author profession,” the Authors Guild wrote in a press release Wednesday.

    The suit alleges that books created by the authors that were illegally downloaded and fed into GPT systems could turn a profit for OpenAI by “writing” new works in the authors’ styles, while the original creators would get nothing. The press release lists AI efforts to create two new volumes in Martin’s Game of Thrones series and AI-generated books available on Amazon.

    “It is imperative that we stop this theft in its tracks or we will destroy our incredible literary culture, which feeds many other creative industries in the US,” Authors Guild CEO Mary Rasenberger stated in the release. “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.”

    The class-action lawsuit joins other legal actions, organizations and individuals raising alarms over how OpenAI and other generative AI systems are impacting creative works. An author told CNN in August that she found new books being sold on Amazon under her name — only she didn’t write them; they appear to have been generated by artificial intelligence. Two other authors sued OpenAI in June over the company’s alleged misuse of their works to train ChatGPT. Comedian Sarah Silverman and two authors also sued Meta and ChatGPT-maker OpenAI in July, alleging the companies’ AI language models were trained on copyrighted materials from their books without their knowledge or consent.

    But OpenAI has pushed back. Last month, the company asked a San Francisco federal court to narrow two separate lawsuits from authors – including Silverman – arguing that the bulk of the claims should be dismissed.

    OpenAI did not respond to a request for comment on Wednesday.

    “We think that creators deserve control over how their creations are used and what happens sort of beyond the point of, of them releasing it into the world,” Sam Altman, the CEO of OpenAI, told Congress in May. “I think that we need to figure out new ways with this new technology that creators can win, succeed, have a vibrant life.”

    US lawmakers met with members of creative industries in July, including the Authors Guild, to discuss the implications of artificial intelligence. In a Senate subcommittee hearing, Rasenberger called for the creation of legislation to protect writers from AI, including rules that would require AI companies to be transparent about how they train their models.

    More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.

    But the AI issues facing creative professions don’t seem to be going away.

    “Generative AI is a vast new field for Silicon Valley’s longstanding exploitation of content providers. Authors should have the right to decide when their works are used to ‘train’ AI,” author Jonathan Franzen said in the release on Wednesday. “If they choose to opt in, they should be appropriately compensated.”

    [ad_2]

    Source link

  • Chinese artists boycott big social media platform over AI-generated images | CNN Business


    [ad_1]

    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong
    CNN
     — 

    Artists across China are boycotting one of the country’s biggest social media platforms over complaints about its AI image generation tool.

    The controversy began in August when an illustrator who goes by the name Snow Fish accused the privately owned social media site Xiaohongshu of using her work to train its AI tool, Trik AI, without her knowledge or permission.

    Trik AI specializes in generating digital art in the style of traditional Chinese paintings; it is still undergoing testing and has not yet been formally launched.

    Snow Fish, whom CNN is identifying by her Xiaohongshu username for privacy reasons, said she first became aware of the issue when friends sent her posts of artwork from the platform that looked strikingly similar to her own style: sweeping brush-like strokes, bright pops of red and orange, and depictions of natural scenery.

    “Can you explain to me, Trik AI, why your AI-generated images are so similar to my original works?” Snow Fish wrote in a post which quickly circulated online among her followers and the artist community.

    The controversy erupted just weeks after China unveiled rules for generative AI, becoming one of the first governments to regulate the technology as countries around the world wrestle with AI’s potential impact on jobs, national security and intellectual property.

    Screenshots of AI-generated artworks on Xiaohongshu, taken by the artist Snow Fish.

    Trik AI and Xiaohongshu, which says it has 260 million monthly active users, do not publicize what materials are used to train the program and have not publicly commented on the allegations.

    The companies have not responded to multiple requests from CNN for comment.

    But Snow Fish said a person using the official Trik AI account had apologized to her in a private message, acknowledging that her art had been used to train the program and agreeing to remove the posts in question. CNN has reviewed the messages.

    However, Snow Fish wants a public apology. The controversy has fueled online protests on the Chinese internet against the creation and use of AI-generated images, with several other artists claiming their works had been similarly used without their knowledge.

    Hundreds of artists have posted banners on Xiaohongshu saying “No to AI-generated images,” while a related hashtag has been viewed more than 35 million times on the Chinese Twitter-like platform Weibo.

    The boycott in China comes as debates about the use of AI in arts and entertainment are playing out globally, including in the United States, where striking writers and actors have ground most film and television production to a halt in recent months over a range of issues — including studios’ use of AI.

    Many of the artists boycotting Xiaohongshu have called for better rules to protect their work online — echoing similar complaints from artists around the world worried about their livelihoods.

    These concerns have grown as the race to develop AI heats up, with new tools developed and released almost faster than governments can regulate them — ranging from chatbots such as OpenAI’s ChatGPT to Google’s Bard.

    China’s tech giants, too, are rapidly developing their own generative artificial intelligence, from Baidu’s ERNIE Bot launched in March to SenseTime’s chatbot SenseChat.

    Besides Trik AI, Xiaohongshu has also developed a new function called “Ci Ke” which allows users to post content using AI-generated images.

    For artists like Snow Fish, the technology behind AI isn’t the problem, she said; it’s the way these tools use their work without permission or credit.

    Many AI models are trained from the work of human artists by quietly scraping images of their artwork from the internet without consent or compensation.

    Snow Fish added that these complaints had been slowly growing within the artist community but had mostly been privately shared rather than openly protested.

    “It’s an outbreak this time,” she said. “If it easily goes away without any splash, people will maintain silent, and those AI developers will keep harming our rights.”

    Another Chinese illustrator, Zhang, whom CNN is identifying by his last name for privacy reasons, joined the boycott in solidarity. “They’re shameless,” said Zhang. “They didn’t put in any effort themselves, they just took parts from other artists’ work and claimed it as their own, is that appropriate?”

    “In the future, AI images will only be cheaper in people’s eyes, like plastic bags. They will become widespread like plastic pollution,” he said, adding that tech leaders and AI developers care more about their own profits than about artists’ rights.

    Tianxiang He, an associate professor of law at City University of Hong Kong, said the use of AI-generated images also raises larger questions among the artistic community about what counts as “real” art, and how to preserve its “spiritual value.”

    Similar boycotts have been seen elsewhere around the world, against popular AI image generation tools such as Stable Diffusion, released last year by London-based Stability AI, and California-based Midjourney.

    Stable Diffusion is embroiled in an ongoing lawsuit brought by stock image giant Getty Images, alleging copyright infringement.


    Despite the speed at which AI image generation tools are being developed, there is “no global consensus about how to regulate this kind of training behavior,” said He.

    He added that many such tools are developed by tech giants who own huge databases, which allows them to “do a lot of things … and they don’t care whether it’s protected by the law or not.”

    Because Trik AI has a smaller database to pull from, the similarities between its AI-generated content and artists’ original works are more obvious, making an easier legal case, he said.

    Cases of copyright infringement would be harder to detect if more works were put in a larger database, he added.

    Governments around the world are now grappling with how to set global standards for the wide-ranging technology. The European Union was one of the first in the world to set rules in June on how companies can use AI, with the United States still holding discussions with Capitol Hill lawmakers and tech companies to develop legislation.

    China was also an early adopter of AI regulation, publishing new rules that took effect in August. But the final version relaxed some of the language that had been included in earlier drafts.

    Experts say that when drafting regulations, major powers like China likely prioritize centralizing power held by tech giants and pulling ahead in the global tech race over protecting individuals’ rights.

    He, the Hong Kong law professor, called the regulations a “very broad general regulatory framework” that provide “no specific control mechanisms” to regulate data mining.

    “China is very hesitant to enact anything related to say yes or no to data mining, because that will be very dangerous,” he said, adding that such a law could strike a blow to the emerging market, amid an already slow national economy.

    [ad_2]

    Source link

  • Apple rejected opportunities to buy Microsoft’s Bing, integrate with DuckDuckGo | CNN Business


    [ad_1]



    CNN
     — 

    Since 2017, Apple has turned down multiple opportunities to chip away at Google’s search engine dominance, according to newly unsealed court transcripts, including a chance to purchase Microsoft’s Bing and to make the privacy-focused DuckDuckGo the default for Safari’s private browsing mode.

    The previously confidential records, unsealed this week by the judge presiding over the US government’s antitrust lawsuit against Google, illustrate the challenges that have faced Google’s rivals in search as they’ve tried to unseat the tech giant from its pole position as Apple’s default search provider on millions of iPhones and Mac computers. It’s a privilege for which Google has paid Apple at least $10 billion a year.

    The closed-door testimony by the CEO of DuckDuckGo, Gabriel Weinberg, and a senior Apple executive, John Giannandrea, offers a glimpse of the kind of failed deals and backroom negotiations that have helped Google maintain its lead as the world’s foremost search engine.

    But it also shows how Apple has wrestled with Google’s rise and how some at Apple yearned for “optionality.” Apple didn’t immediately respond to a request for comment.

    Giannandrea testified last month that Apple began seriously considering a deal with Bing in 2018, after a conversation between Apple CEO Tim Cook and Microsoft CEO Satya Nadella launched a series of further discussions between the two companies. (Last week, Nadella testified that he has spent every year of his tenure as CEO trying to persuade Apple to adopt Bing.)

    Apple insiders ultimately came up with four options for Cook: Buy Bing outright; invest in Bing and take an ownership share of the search engine; collaborate with Microsoft on a shared search index that both companies could use; or do nothing and continue with the Google partnership.

    At the same time, Apple had been actively working with DuckDuckGo on a proposal that could have made it the default search in Safari browser’s private mode, while still maintaining Google as the default in normal mode, which logs user activity, Weinberg testified.

    DuckDuckGo logo displayed on a phone screen and DuckDuckGo website displayed on a laptop screen in October 2021.

    “Our impression was that they were really serious about [it],” Weinberg told the court last month, referring to the roughly 20 meetings and phone calls that DuckDuckGo held with Apple officials, including some senior executives, from late 2017 to late 2019 on the matter. The two companies deliberated over everything from product mockups to contractual language; Apple even went as far as sending a draft contract to DuckDuckGo outlining specific proposed revenue shares.

    “If we were the default in [Safari] private browsing mode, our market share, by our calculations at the time, would increase multiple times over,” said Weinberg, according to the transcript. “We would be getting exposure for our brand every time someone opened up private browsing mode.”

    Ultimately, however, Apple backed away from both potential deals.

    Weinberg blamed Apple’s contract with Google for sinking the initiative, calling it the “elephant in the room” during many of his team’s meetings with Apple. Similar negotiations with other browser or device makers, including Mozilla, Opera and Samsung, fell through due to the Google contract as well, Weinberg claimed, prompting DuckDuckGo to abandon its efforts to gain better browser placement.

    In his testimony, Giannandrea acknowledged a perception that the Apple-Google relationship could be undermined by such plans. In discussing a 2018 slide presentation prepared for Cook and introduced in court, Giannandrea said the slides suggested that even a joint venture with Bing “would probably put us in head-to-head competition with Google” that would “probably” result in the end of the Google search contract with Apple altogether.

    Giannandrea was opposed to moving ahead with a Bing deal, he said, largely because Apple’s testing showed Bing to be inferior to Google in most respects, and making Bing the default would not best serve Apple’s customers. He made a similar argument internally about DuckDuckGo, saying in an email that moving ahead with that partnership was “probably a bad idea.” (DuckDuckGo licenses search results from Bing.)

    Still, Giannandrea testified, some within Apple thought that dealing with Bing in some fashion could yield benefits to Apple. In one 2018 email introduced in closed session, Adrian Perica, who leads Apple’s strategic investment and merger efforts, argued that collaborating with Microsoft on search technology would help “build them up, create incremental negotiating leverage to keep the take rate from Google and further our optionality to replace Google down the line.”

    Giannandrea believed the proposal “wasn’t a very feasible idea” and in his testimony dismissed Perica’s thinking as a businessperson’s spitballing.

    Apple today has the enormous resources to build a true rival to Google, Giannandrea testified. But, as he wrote in a 2018 email, “it’s probably not the best way to differentiate our products” — a belief he said he still holds today.

    [ad_2]

    Source link

  • What is catfishing and what can you do if you are catfished? | CNN Business


    [ad_1]

    Editor’s Note: This story is part of ‘Systems Error’, a series by CNN As Equals, investigating how your gender shapes your life online. For information about how CNN As Equals is funded and more, check out our FAQs.



    CNN
     — 

    Catfishing is when a person uses false information and images to create a fake identity online with the intention to trick, harass, or scam another person. It is a common tactic on social media, dating apps and websites, used to form online relationships under false pretenses, sometimes to lure people into financial scams.

    The person doing the pretending, or the “catfish,” may also obtain intimate images from a victim and use them to extort or blackmail the person, a practice known as sextortion. They may also use other personal information shared with them to commit identity theft.

    The term is believed to originate from the 2010 documentary “Catfish,” in which a young Nev Schulman starts an online relationship with teenager “Megan”, who turns out to be an older woman.

    In the final scene of the documentary, the woman’s husband shares an anecdote about how live cod used to be exported from Alaska alongside catfish, which kept the cod active and alert. He likened this to people in real life who keep others on their toes, like his wife. Schulman went on to produce the “Catfish” docuseries.

    There are many reasons people resort to catfishing, but the most common reason is a lack of confidence, according to the Cybersmile Foundation, a nonprofit focused on digital well-being. The foundation states that if someone is not happy with themselves, they may feel happier when pretending to be someone more attractive to others.

    They may also hide their identity to troll someone; to engage in a relationship other than their existing one; or to extort or harass people. Some people may catfish to explore sexual preferences.

    Studies have shown that catfish are more likely to be educated men, with one 2022 study finding perpetrators are more likely to come from religious backgrounds, possibly providing a way to form relationships without the constraints they face in real life, the authors write.

    In another study published last year, Evita March, senior lecturer in psychology at Federation University in Australia, found that people with the strong personality traits of sadism, psychopathy, and narcissism were more likely to catfish.

    March told CNN the findings are preliminary and that her team would like to further investigate if certain personality traits lead to specific kinds of catfishing behavior.

    In the US, romance scams resulting from catfishing produce some of the highest reported financial losses of any internet crime. A total of 19,050 Americans reported losing almost $740 million to romance scammers in 2022.

    In the UK, the country’s National Fraud Intelligence Bureau received more than 8,000 reports of romance fraud in the 2022 financial year, totaling more than £92 million (US $116.6 million) lost, with an average loss of £11,500 (US $14,574) per victim.

    In Singapore, romance scams are among the top 10 reported scams. The reported amount of money lost to catfish increased by more than 30%, from SGD$33.1 million (US$24 million) in 2020 to SGD$46.6 million (US$34 million) the following year.

    Catfishing is also increasingly happening on an industrial scale with the rise of “cyber scam centers” that have links to human trafficking in Southeast Asia, according to INTERPOL.

    Victims of trafficking are forced to become fraudsters by creating fake social media accounts and dating profiles to scam and extort millions of dollars from people around the world using different schemes such as fake crypto investment sites.

    Catfishing used to occur more among adults through online dating sites, but has now become equally common among teenagers, according to the Cybersmile Foundation.

    Research by Snapchat last year with more than 6,000 Gen Z teenagers and young people in Australia, France, Germany, India, the UK and the US found that almost two-thirds of them or their friends had been targeted by catfish or hackers to obtain private images that were later used to extort them.

    Older people are also likely to lose more money to catfishing. In 2021, Americans lost half a billion dollars through romance scams perpetrated by people using fake personas or impersonating others, with the largest losses paid in cryptocurrency, according to the US Federal Trade Commission. The number of reports rose tenfold among young people (aged 18 to 29), but older people (over 70) generally reported losing more money.

    In Australia, a third of dating and romance scams result in financial losses, with women having lost more than double the total amount lost by men, and older people again losing more money than those under 45, according to data from the country’s National Anti-Scam Centre.

    “Romance scams are one of the hardest things to avoid. It’s emotional manipulation,” said Ngo Minh Hieu, a Vietnamese former hacker and founder of Chong Lua Dao (scam fighters), a cybersecurity non-profit.

    Since 2020, Hieu has been monitoring trends to help scam victims, he says, and explains that in his experience, a catfish usually approaches a victim with the premeditated intention to scam them.

    They are likely to use personal information mined from the victim’s social media accounts, or data bought in private chat groups simply by providing a potential victim’s phone number.

    There are many signs you can look for to help spot a catfish, experts say.

    Firstly, a catfish might contact you out of nowhere, start regular conversations with you and shower you with compliments to quickly build up trust and rapport. They may state desirable qualities in their opening conversations, including wealth or attractiveness, but then rarely or never call you, either over the phone or on a video call.

    They often do not have many friends on social media and their posts are usually scarce. Search results using their name may not yield many results and their stories are usually inconsistent. For example, personal details like where they live or go to school might change when discussed again.

    Another classic sign is if the feelings they declare for you escalate quickly and after a short period of time. A catfish may ask you for sensitive images and money.

    Many scammers use already available photos of other people in their fake personas, which may be possible to spot using a reverse image search.
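    Reverse image search and duplicate-detection tools commonly rely on perceptual hashing: visually similar images produce hashes that differ in only a few bits. Below is a minimal, illustrative Python sketch of a difference hash ("dHash"), assuming the image has already been decoded and shrunk to a small grayscale grid (real tools use an imaging library for that step, and `dhash_bits` and `hamming_distance` are hypothetical names for illustration):

```python
def dhash_bits(gray):
    """Difference hash: compare each pixel with its right-hand neighbor.

    `gray` is a 2D list of grayscale intensities, already resized so each
    row has one more column than the desired bits per row (e.g. 8x9
    pixels for a 64-bit hash).
    """
    return [1 if row[x] > row[x + 1] else 0
            for row in gray
            for x in range(len(row) - 1)]


def hamming_distance(bits_a, bits_b):
    """Count differing bits; a small distance suggests similar images."""
    return sum(a != b for a, b in zip(bits_a, bits_b))


# Two tiny 2x3 "images": the second is a slightly brightened copy of the
# first, so the brightness gradients (and therefore the hashes) match.
img_a = [[10, 40, 20], [5, 5, 30]]
img_b = [[12, 44, 22], [6, 6, 33]]
assert hamming_distance(dhash_bits(img_a), dhash_bits(img_b)) == 0
```

    A small Hamming distance between two hashes suggests near-duplicate images, which is how a stolen profile photo can be matched against the original even after resizing or light edits.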

    With the explosion of AI technology, scammers may now generate unique and realistic images for use as profile pictures. But Hieu explains that, thanks to patterns built into them by design, AI-generated images can be detected using tools such as AI-Generated Image Detector.

    If you believe you are being catfished, there are steps you can take to protect yourself and help end the targeting.

    Experts advise that you should not be afraid to ask direct questions or challenge the person you believe may be catfishing you. You can do this by asking them why they are not willing to call you or meet face to face, or questioning how they can declare their love for you so quickly.

    In a 2020 study, researcher Fangzhou Wang and her colleagues sent nearly 200 deterrent messages to active scammers and concluded that this could make fraudsters respond less or, in some cases, admit to wrongdoing.

    An example of one of the messages was: “I know you are scamming innocent people. My friend was recently arrested for the same offense and is facing five years in prison. You should stop before you face the same fate.”

    You should consider stopping all communications with the catfish, and refrain from sending money to them, which risks inviting further financial demands. Experts say catfish continue to target those who keep engaging with them.

    It’s also useful to secure your online accounts and ensure your personal information is kept private online.

    Cybersecurity expert Hieu explained that you can do this by putting personal information such as your phone number, email addresses and date of birth in private mode on social media. You can also check if your email has been compromised in a data breach by using tools such as the Have I Been Pwned website.
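    Have I Been Pwned’s companion Pwned Passwords service documents a k-anonymity range API: the client sends only the first five hex characters of the password’s SHA-1 hash to `https://api.pwnedpasswords.com/range/<prefix>` and compares the returned suffixes locally, so the full password never leaves the device. A sketch of the client-side steps in Python (the network call itself is omitted; `sha1_range_query` and `is_breached` are illustrative names):

```python
import hashlib


def sha1_range_query(password):
    """Split a password's SHA-1 hash for a k-anonymity range query.

    Only the 5-character prefix would be sent over the network; the
    suffix stays local and is compared against the returned candidates.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def is_breached(suffix, response_text):
    """Check the local suffix against an API response of 'SUFFIX:COUNT'
    lines, as returned by the range endpoint for the matching prefix."""
    return any(line.split(":")[0] == suffix
               for line in response_text.splitlines())


prefix, suffix = sha1_range_query("password")
# SHA-1("password") begins 5BAA6..., so only "5BAA6" would be transmitted.
assert prefix == "5BAA6"
```

    Because the server only ever sees a five-character prefix shared by many possible passwords, it cannot learn which password was actually checked.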

    Installing two-factor authentication on your accounts can also help protect against unauthorized access. That requires you to take a second step to verify your identity when logging in to a service, for example by SMS or a physical device, such as a key fob.
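    The rotating codes behind most authenticator apps follow the TOTP standard (RFC 6238), which applies an HMAC to a counter derived from the current time. A compact Python sketch using only the standard library (the secret shown is the RFC’s published test key, not one to use in practice):

```python
import hashlib
import hmac
import struct
import time


def hotp(secret, counter, digits=6):
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks a window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret, at_time=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    t = time.time() if at_time is None else at_time
    return hotp(secret, int(t // step), digits)


# RFC 6238's test secret; at t=59s the 30-second window counter is 1.
assert totp(b"12345678901234567890", at_time=59) == "287082"
```

    Both the server and the app derive the same code from a shared secret and the clock, so an attacker who steals only your password still cannot log in.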

    Being subjected to catfishing can also have a significant impact on your mental health, with many victims left unable to trust others and some left feeling embarrassed about falling for the scam. A 2019 study found that young LGBTQ+ men in rural America experiencing catfishing on dating apps felt angry and fearful.

    If someone was “sextorted,” they may continue to fear their images resurfacing online in the future.

    March from Federation University in Australia recommended improving digital literacy and staying aware of the potential red flags. She also emphasized the need to recognize today’s loneliness epidemic, which “leads people to perhaps be more susceptible to catfishing scams,” she said.

    Seeking professional support from a counselor or talking to supportive friends and family is one way to address loneliness, March added.

    Catfishing is not explicitly a crime, but the actions that often accompany it, such as extortion for money, gifts or sexual images, are crimes in many places.

    The main challenge in tackling online fraud is the issue of jurisdiction, according to a 2020 paper about police handling of online fraud victims in Australia. Traditional policing operates within specific territories, but the internet has blurred these boundaries, the authors write.

    Cybercriminals from one country can also target victims in other countries, complicating law enforcement efforts, and victims often face difficulty and frustration when trying to report cybercrimes, which can further traumatize them.

    Fangzhou Wang, a cybercrime professor at the University of Texas at Arlington, told CNN that virtual private networks (VPNs), forged credentials, and anonymous communication methods make it extremely difficult to determine identities or locations.

    Scammers have also capitalized on the proliferation of AI, using tools such as AI-generated personas, which complicates the ability of law enforcement authorities to gather evidence and build cases against a catfish.

    “Law enforcement agencies, often constrained by limited resources and prioritizing cases based on severity and direct impact, might not readily prioritize catfishing cases without substantial financial losses or physical harm,” Wang told CNN.

    In the US, there are some legal precedents. In 2022, a woman who had created multiple fake profiles to target wealthy men was charged with extortion, cyberstalking, and interstate threats and was sentenced in a plea deal last year.

    In the UK, while catfishing itself is not classified as a criminal offense, a person who uses a fake profile to commit illegal acts, such as fraud or harassment, can be punished by law.

    Under Article 46 of its Cybersecurity Law, China implicates people who allow their websites or communications platforms to be used for fraud and other illegal activities.

    If a catfish has tricked you into sending them money, you can go to the authorities and your bank immediately, depending on where you are.

    If activities that are crimes in your country have taken place because of being catfished, such as extortion, identity theft or harassment, the police or other authorities, such as specific commissions targeting online crime, may be your first port of call.

    The Australian government’s agency responsible for online safety, the e-safety commissioner, advises that people gather all the evidence they can, including screenshots of the scammer and chats with them to keep as evidence.

    Depending on the case, you can also submit an abuse or impersonation report against the catfish directly to the platform on which you are communicating with them.

    If you believe the person you are talking to is not who they say they are, most of the larger social media platforms, including Facebook, Instagram, TikTok, X, Telegram, Tinder and WhatsApp, give you the option to report them for impersonation or other forms of abuse. WeChat also offers a channel to report another user for harassment, fraud or illegal activity, while Telegram provides an anti-scam thread where users can report fraudsters.

    You are not responsible for the catfishing behavior of others, but staying vigilant and alert online goes a long way.

    Make sure your online accounts are secured and use two-factor authentication. When browsing the internet, you may want to use a virtual private network (VPN) which makes your internet activity harder to track.

    In many countries, such as the US, the UK and Australia, victims have reported being preyed on by catfish who tricked them into putting money into bogus cryptocurrency investment sites.

    If someone you have been talking to asks you to put money into an investment site, think twice. The Global Anti-Scam Organization keeps a database of fraudulent websites, compiled from its own investigations and tip-offs from the public, to help you determine whether you are being scammed.

    If you are a parent, this guide provided by the UK-based National College platform suggests communicating effectively and sensitively with your children about the risks. You may also help them report and block the catfish accounts, and go to the police if they have been subjected to anything illegal or inappropriate.

    Because catfish often get close to a target by relying on personal information posted on social media, UNICEF asks children to consider their rights when it comes to parents sharing their pictures and other content online, especially when they are underage.



    [ad_2]

    Source link

  • Pope Francis warns about AI’s dangers | CNN Business


    [ad_1]


    Washington
    CNN
     — 

    Pope Francis warned that artificial intelligence could pose a risk to society, highlighting its “disruptive possibilities and ambivalent effects” and urging those who would develop or use AI to do so responsibly.

    In a statement Tuesday, Francis alluded to the threat of algorithmic bias in technology and called on the public for vigilance “so that a logic of violence and discrimination does not take root in the production and use of such devices, at the expense of the most fragile and excluded.”

    “Injustice and inequalities fuel conflicts and antagonisms,” Francis continued. “The urgent need to orient the concept and use of artificial intelligence in a responsible way, so that it may be at the service of humanity and the protection of our common home, requires that ethical reflection be extended to the sphere of education and law.”

    Francis’s remarks dovetail with calls by some AI experts to ensure that algorithms are properly “aligned” in development to support human rights and other widely shared values. Other industry experts and policymakers have expressed concerns that AI could facilitate the spread of fraud, misinformation, cyberattacks and perhaps even the creation of biological weapons.

    Francis himself has been the subject of AI-generated deepfakes. Earlier this year, an AI-generated image of Francis wearing a white, puffy Balenciaga-inspired coat went viral.

    Tuesday’s message announced the theme for 2024’s World Day of Peace, which the Pope said would focus on AI and peace.

    “The protection of the dignity of the person,” he said, “and concern for a fraternity effectively open to the entire human family, are indispensable conditions for technological development to help contribute to the promotion of justice and peace in the world.”

    [ad_2]

    Source link

  • Meet your new AI tutor | CNN Business




    CNN
     — 

    Artificial intelligence often induces fear, awe or some panicked combination of both for its impressive ability to generate unique human-like text in seconds. But its implications for cheating in the classroom — and its sometimes comically wrong answers to basic questions — have left some in academia discouraging its use in school or outright banning AI tools like ChatGPT.

    That may be the wrong approach.

    More than 8,000 teachers and students will test education nonprofit Khan Academy’s artificial intelligence tutor in the classroom this upcoming school year, toying with its interactive features and funneling feedback to Khan Academy if the AI botches an answer.

    The chatbot, Khanmigo, offers individualized guidance to students on math, science and humanities problems; a debate tool with suggested topics like student debt cancellation and AI’s impact on the job market; and a writing tutor that helps the student craft a story, among other features.

    First launched in March to an even smaller pilot program of around 800 educators and students, Khanmigo also allows students to chat with a growing list of AI-powered historical figures, from George Washington to Cleopatra and Martin Luther King Jr., as well as literary characters like Winnie the Pooh and Hamlet.

    Khan Academy’s Chief Learning Officer Kristen DiCerbo told CNN that Khanmigo helps address a problem she has witnessed firsthand while observing an Arizona classroom: when students learn something new, they often need individualized help, more than one teacher can provide all at once.

    As DiCerbo chatted with AI-powered Dorothy from “The Wonderful Wizard of Oz” during a demonstration of the technology to CNN, she explained how users can rate Khanmigo’s responses in real-time, providing feedback if and when Khanmigo makes mistakes.

    “There is going to be a big world out there where people can just get the answers to their homework problems, where they can just get an essay written for them. That’s true now too on the Internet,” DiCerbo said. “We’re trying to focus on the social good, but we need to be aware of the threats and the risks so that we know how to mitigate those.”

    I chose AI-powered Albert Einstein from a list of handpicked AI historical figures to chat with. AI-Einstein told me his greatest accomplishment was both his theory of relativity and inspiring curiosity in others, before tossing me a question Socrates-style about what sparks curiosity in my own life.

    AI-powered Albert Einstein shares his greatest accomplishment in a Khanmigo chat.

    Khanmigo developers programmed the AI figures not to comment on events after their lifetime. As such, AI-Einstein wouldn’t comment on the historical accuracy of his role in Christopher Nolan’s “Oppenheimer,” despite my asking.

    Khanmigo is trained not to comment on events that occur after the lifetime of the historical figure it is imitating.

    Some figures from the list are not as widely praised as Einstein. For instance, Thomas Jefferson, the third US president and primary draftsman of the Declaration of Independence, has faced renewed criticism in recent years for owning 600-plus enslaved people throughout his lifetime.

    Khanmigo’s Thomas Jefferson will not shy away from scrutiny. He wrote back to my inquiry about his views on slavery in part: “As Thomas Jefferson, my views on slavery were fraught with contradiction. On one hand, I publicly expressed my belief that slavery was morally wrong and a threat to the survival of the new American nation […] Yet I was a lifelong slaveholder, owning over 600 enslaved people throughout my lifetime.”

    The purpose of the tool is to engage students through conversation, DiCerbo said, an altogether different experience than passively reading about someone’s life on Wikipedia.

    “The Internet can be a pretty scary place, and it can be a pretty good place. I think that AI is the same,” DiCerbo said. “There could be potential bad uses and misuses, and it can be a pretty powerful learning tool.”

    After gaining early access to ChatGPT-creator OpenAI’s newest and most capable large language model, GPT-4, Khan Academy trained GPT-4 on its own learning content. The company also implemented guardrails to keep Khanmigo’s tone encouraging and prevent it from giving students the answer to the question they’re struggling with.

    For teachers, Khanmigo also offers assistance to create lesson plans and rubrics, identifies struggling students based on their performance in Khan Academy activities and gives teachers access to student chat history.

    “I’m learning new ways to solve the problems as well,” said Leo Lin, a science teacher at Khan Lab School in California and an early tester of Khanmigo. Khan Lab School is a separate nonprofit founded by Khan Academy CEO Sal Khan.

    Khanmigo has emerged at a crossroads in academia, with some educators leaning into generative AI and others recoiling. New York City Public Schools, Seattle Public Schools and the Los Angeles Unified School District, among other academic institutions, have all moved to ban or restrict ChatGPT on district networks and devices.

    A lack of information about AI may be exacerbating some educator worries: While 72% of K-12 teachers, principals and district leaders say that teaching students how to use AI tools is at least “fairly important,” 87% said they’ve received zero professional instruction about incorporating AI into their work, according to an EdWeek Research Center survey from June.

    Khan Academy’s in-the-works AI learning course “AI 101 for Teachers,” created in partnership with Code.org, ETS and the International Society for Technology in Education, offers a path toward AI literacy among teachers.

    Although Khanmigo is still in its pilot phase, more than 10,000 additional users across the United States currently use the AI-powered teaching assistant outside the pilot program, each agreeing to make a donation to Khan Academy to test the service.

    An AI “tutor” like Khanmigo is not immune to the flubs all large language models face: so-called hallucinations.

    “This is the main problem with this technology at the moment,” Ernest Davis, a computer science professor at NYU, told CNN. “It makes things up.”

    Khanmigo is most commonly used for math tutoring, according to DiCerbo. Khanmigo shines best when coaching students on how to work through a problem, offering hints, encouragement and additional questions designed to help students think critically. But currently, its own struggles in performing calculations can sometimes hinder its attempts to help.

    In the “Tutor me: Math and science” activity available to students, Khanmigo told me three times that my answer to 10,332 divided by 4 was incorrect, then “corrected” me by sending back the very same number.

    In the same “Tutor me” activity, I asked Khanmigo to find the product of five numbers, some integers and some decimals: 97, 117, 0.564322338, 0.855640047, and 0.557680043.

    As I did the final multiplication step, Khanmigo congratulated me for submitting the wrong answer. It wrote: “When you multiply 5479.94173 by 0.557680043, you get approximately 33.0663. Well done!”

    The correct answer is about 3,056.
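The arithmetic in the exchange above is easy to verify. A quick Python check (a minimal sketch reproducing the numbers reported in the article) confirms that the product of the five factors is roughly 3,056, not the 33.0663 Khanmigo congratulated me for, and that 10,332 divided by 4 is 2,583:

```python
# Multiply the five factors from the "Tutor me" exchange.
factors = [97, 117, 0.564322338, 0.855640047, 0.557680043]

product = 1.0
for f in factors:
    product *= f

print(round(product, 2))  # approximately 3,056 -- not 33.0663
print(10_332 / 4)         # the earlier division example: 2583.0
```

This is ordinary floating-point arithmetic; any calculator reproduces it, which underscores how surprising the chatbot's slip is.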

    Khanmigo makes a math error in a conversation with CNN's Nadia Bidarian.

    Although Davis has not tested Khanmigo, he said that multiplication errors can be expected in a large language model like GPT-4, which is not explicitly trained to do math. Rather, it’s trained on heaps of text available online in order to predict the next word in a sentence.

    As such, niche math problems and concepts with fewer online examples can be harder for the model to predict.

    “Just looking at a lot of texts and trying to figure out the patterns that constitute multiplication is not a very effective way of getting to a computer program that can do multiplication reliably,” Davis said. “And so it doesn’t.”

    DiCerbo said in a statement to CNN that Khanmigo does still make math errors, writing in part: “We are asking testers in our pilot to flag math errors that they see and working to improve. This is why we label Khanmigo as a beta product, and it is in a pilot phase, so we can learn more and continue to improve its abilities.”

    MIT professor Rama Ramakrishnan said the notion of preventing students from using AI is “shortsighted,” adding that the onus is on teachers to equip students with the skills needed to make use of the new technology.

    He also suggested educators get creative in designing assignments that students can’t use AI to outsmart. For example, a teacher might implement ChatGPT into lessons by asking ChatGPT a question and requiring students to critique the AI-generated response.

    “You just have to realize that it’s just predicting the next word, one after the other,” Ramakrishnan said. “It’s not trying to come up with a truthful answer to your question, just a plausible answer. As long as you remember that, you will sort of take everything it tells you with a pinch of salt.”
