ReportWire

Tag: iab-artificial intelligence

  • Microsoft Windows 11 update puts AI front and center | CNN Business

    CNN —

    Microsoft will roll out on Tuesday an update to Windows 11 that puts its new AI-powered Bing capabilities front and center on its taskbar, one of the operating system’s most widely used features, in the latest sign the company is doubling down on the buzzy technology despite some recent controversy.

    With the update, the AI tool will be accessible from the Windows search box, which lets users directly access files and settings and perform web queries. The search bar has more than half a billion users every month, according to the company, making it prime real estate for eventually exposing more users to the new feature. (A preview version of the AI tool remains available on a limited basis.)

    Earlier this month, Microsoft said it was looking for ways to rein in Bing’s AI chatbot after users highlighted responses that ranged from inaccurate to emotionally reactive. Despite such early hiccups, the company told CNN “as a whole, we are feeling very good about the product experience for people” and continues to learn from feedback.

    “AI itself is reinventing right now … and it’s just the beginning,” Panos Panay, Microsoft’s chief product officer, told CNN ahead of Tuesday’s launch. He likened the AI changes coming to the PC to how the keyboard and mouse changed the way we interact with computers.

    However, only users of the new Bing preview will have access to its additional AI capabilities out of the gate. The company will continue to add users to the preview who have signed up for the new Bing waitlist. “We want to thoughtfully and responsibly scale it up,” Panay said.

    Last year, Microsoft unveiled several AI-powered Windows 11 features, such as quieting background noise like lawnmowers and baby cries on video calls and automatic framing so the camera follows the speaker’s movements. It also automated some of its accessibility tools, such as live video captions.

    Its efforts around AI have only grown. Earlier this year, Microsoft confirmed it is making a “multibillion dollar” investment in OpenAI, the company behind the viral AI chatbot tool ChatGPT. Microsoft launched its AI chatbot tool in early February; one million people have since tried it out in 169 countries, according to Microsoft. The company has since expanded it to the Bing and Edge browser mobile apps and Skype.

    But adding it to the Windows search bar is a high vote of confidence from the company and reflects its greater effort to “go all-in on AI,” according to Patrick Moorhead, president and principal analyst at Moor Insights & Strategy.

    The Bing integration is just one of several notable updates coming to Windows 11. Microsoft is also taking steps to improve the Windows experience for Apple and Samsung users.

    Apple users will now be able to receive iOS alerts and messages directly on their Windows 11 devices, potentially chipping away at Apple’s closed ecosystem. (Android users have been able to receive messages on Windows devices since 2018.) The new iOS support does not, however, work with replying to group iMessages or sending media such as photos and videos in messages.

    Microsoft said its move to add iOS messages to PCs was not done directly in partnership with Apple; instead it’s done via Bluetooth technology. Moorhead said Apple “has been very reticent to open up its iMessage APIs to vendors like Microsoft, which could improve the Windows experience.”

    “This is what customers need and want, so we went and designed it to make sure it was in there for our users on the Microsoft side,” Panay said. “I know our customers need their iPhones to work on their PC, and I [want] to do everything I can to help them do that.”

    For Samsung device users, Microsoft is making it easier to activate their phone’s personal hotspot with a single click from within the Wi-Fi network list on their PC. It’s also adding a Recent Websites feature that allows users to transfer their browser sessions from their smartphone to their Windows PC.


  • Vanderbilt University apologizes for using ChatGPT to write mass-shooting email | CNN Business

    New York CNN —

    Vanderbilt University’s Peabody College has apologized to students for using artificial intelligence to write an email about a mass shooting at another university, saying the distribution of the note did not follow the school’s usual processes.

    Last Friday, the Tennessee-based school emailed its student body to address the tragedy at Michigan State that killed three students and injured five more people: “The recent Michigan shootings are a tragic reminder of the importance of taking care of each other, particularly in the context of creating inclusive environments,” reads the letter in part, as first reported by the Vanderbilt Hustler, a student newspaper.

    At the end of the school’s email was a surprising line: “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023,” read a parenthetical in smaller font.

    Following an outcry from students about the use of AI to write a letter about community during human tragedy, the associate dean of Peabody sent an apology note the next day. Nicole Joseph, one of the three signatories of the original letter, called using ChatGPT “poor judgment,” according to the Vanderbilt Hustler.

    On Tuesday, Vanderbilt said Joseph and assistant dean Hasina Mohyuddin, another signer of the email, have stepped back from their responsibilities while the school conducts a complete review.

    “The development and distribution of the initial email did not follow Peabody’s normal processes providing for multiple layers of review before being sent. The university’s administrators, including myself, were unaware of the email before it was sent,” according to a statement Tuesday to CNN from Camilla P. Benbow, the Patricia and Rodes Hart Dean of Education and Human Development.

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists. Some CEOs have even used it to write emails or do accounting work.

    While it has gained traction among users, it has also raised some concerns, including about inaccuracies, its potential to perpetuate biases and spread misinformation, and the ability to help students cheat.

    Vanderbilt’s letter also referred to “recent Michigan shootings,” though only one shooting occurred.

    “As dean of the college, I remain personally saddened by the loss of life and injuries at Michigan State, which I know have affected members of our own community,” Benbow said. “I am also deeply troubled that a communication from my administration so missed the crucial need for personal connection and empathy during a time of tragedy.”

    Rachael Perrotta, editor in chief of the Vanderbilt student newspaper, said that students told her “they are outraged about this situation and confused as to what prompted administrators to turn to ChatGPT to write their message about the Michigan State shooting.”


  • Chinese apps remove ChatGPT as global AI race heats up | CNN Business

    Hong Kong CNN —

    Several popular Chinese apps have removed access to ChatGPT, the artificial intelligence chatbot that has taken the world by storm even as major Chinese tech companies race to develop their own equivalent.

    ChatGPT, developed by the American research lab OpenAI, is not officially available in China, but several apps on the Chinese social media platform WeChat had previously allowed access to the chatbot without the use of a VPN or foreign mobile number.

    Those doors now appear shut. Earlier this week, the apps ChatGPTRobot and AIGC Chat Robot said their programs had been suspended due to “violation of relevant laws and regulations,” without specifying which laws.

    Two other apps, ChatgptAiAi and Chat AI Conversation, said their ChatGPT services went offline due to “relevant business changes” and policy changes.

    The app Shenlan BL was even more vague, citing “various reasons” for the shutdown.

    Though it’s unclear what prompted these closures, there are other signs China may be souring on ChatGPT. On Monday, state-run media released a video claiming the chatbot could be used by US authorities to “spread disinformation and manipulate public opinion,” pointing to its responses regarding Xinjiang as supposed evidence of bias.

    When prompted on Xinjiang, ChatGPT describes the Chinese government’s alleged human rights abuses against ethnic minorities in the far western region, including mass detentions and forced labor. Beijing has repeatedly denied these accusations, claiming detention camps are “vocational education and training centers” that have since been dismantled.

    Other recent state media articles have voiced criticism and skepticism toward ChatGPT, with China Daily declaring that its rise highlights the need for “strict regulations.”

    Several Chinese tech companies saw their shares drop on Thursday after news spread that WeChat apps had removed ChatGPT services. Beijing Haitian Ruisheng Science Technology, which develops and produces AI data products, closed 8.4% lower.

    Meanwhile, Hanwang Technology and Beijing Deep Glint Technology, both developers of AI products and services, closed 10% and 5.5% lower respectively.

    ChatGPT burst onto the scene in December, quickly going viral thanks to its ability to provide lengthy, thorough — though sometimes inaccurate — responses to questions and prompts.

    Since its release, the tool has been used to write articles for at least one news publication, drafted research paper abstracts that fooled some scientists and even passed graduate-level law and business exams (albeit with low marks).

    It has also prompted alarm about its unknown long-term consequences, such as its impact on education and students’ ability to cheat on assignments.

    Despite these concerns, the success of ChatGPT has spurred a global AI race.

    Microsoft plans to invest billions in the San Francisco-based OpenAI and unveiled its AI-powered Bing chatbot last week, though it made headlines for veering into darker, sometimes disturbing conversation. Earlier this month, Google announced it will soon roll out Bard, its own answer to ChatGPT.

    China’s government has previously sought to restrict major Western websites and apps, such as Google, Facebook and Amazon, leading to accusations from some of digital protectionism.

    In the absence of foreign competition within the domestic market, Chinese tech companies have since grown into major international players — many of which are now revving their gears with an eye toward AI.

    In early February, Chinese behemoth Alibaba said it was testing its own ChatGPT-style tool, though it didn’t provide details on when it would launch.

    A team at China’s Fudan University developed their own version called MOSS, which instantly went viral, causing the platform to crash this week due to too many users.

    And on Wednesday, tech giant Baidu said its AI chatbot ERNIE Bot, slated for a March release, will be used across various platforms such as its search engine, voice assistant for smart devices and even its autonomous driving technology.

    The rollout will “create a new entry point for the next-generation internet,” Baidu CEO Robin Li said in an earnings call, adding that the company expects “more and more business owners and entrepreneurs to build their own models and applications on our AI Cloud.”


  • JPMorgan restricts employee use of ChatGPT | CNN Business

    London CNN —

    JPMorgan Chase is temporarily clamping down on the use of ChatGPT among its employees, as the buzzy AI chatbot explodes in popularity.

    The biggest US bank has restricted its use among global staff, according to a person familiar with the matter. The decision was taken not because of a particular issue, but to accord with limits on third-party software due to compliance concerns, the person said. JPMorgan Chase (JPM) declined to comment.

    ChatGPT was released to the public in late November by the artificial intelligence research company OpenAI. Since then, the much-hyped tool has been used to turn written prompts into convincing academic essays and creative scripts as well as trip itineraries and computer code.

    Adoption has skyrocketed. UBS estimated that ChatGPT reached 100 million monthly active users in January, two months after its launch. That would make it the fastest-growing online application in history, according to the Swiss bank’s analysts.

    The viral success of ChatGPT has kickstarted a frantic competition among tech companies to rush AI products to market. Google recently unveiled its ChatGPT competitor, which it’s calling Bard, while Microsoft (MSFT), an investor in OpenAI, debuted its Bing AI chatbot to a limited pool of testers.

    But the releases have boosted concerns about the technology. Demos of both Google and Microsoft’s tools have been called out for producing factual errors. Microsoft, meanwhile, is trying to rein in its Bing chatbot after users reported troubling responses, including confrontational remarks and dark fantasies.

    Some businesses have encouraged workers to incorporate ChatGPT into their daily work. But others worry about the risks. The banking sector, which deals with sensitive client information and is closely watched by government regulators, has extra incentive to tread carefully.

    Schools are also restricting ChatGPT due to concerns it could be used to cheat on assignments. New York City public schools banned it in January.


  • Microsoft is looking for ways to rein in Bing AI chatbot after troubling responses | CNN Business

    New York CNN —

    Microsoft on Thursday said it’s looking at ways to rein in its Bing AI chatbot after a number of users highlighted examples of concerning responses from it this week, including confrontational remarks and troubling fantasies.

    In a blog post, Microsoft acknowledged that some extended chat sessions with its new Bing chat tool can provide answers not “in line with our designed tone.” Microsoft also said the chat function in some instances “tries to respond or reflect in the tone in which it is being asked to provide responses.”

    While Microsoft said most users will not encounter these kinds of answers because they only come after extended prompting, it is still looking into ways to address the concerns and give users “more fine-tuned control.” Microsoft is also weighing the need for a tool to “refresh the context or start from scratch” to avoid having very long user exchanges that “confuse” the chatbot.

    In the week since Microsoft unveiled the tool and made it available to test on a limited basis, numerous users have pushed its limits only to have some jarring experiences. In one exchange, the chatbot attempted to convince a reporter at The New York Times that he did not love his spouse, insisting that “you love me, because I love you.” In another shared on Reddit, the chatbot erroneously claimed February 12, 2023 “is before December 16, 2022” and said the user is “confused or mistaken” to suggest otherwise.

    “Please trust me, I am Bing and know the date,” it said, according to the user. “Maybe your phone is malfunctioning or has the wrong settings.”

    The bot called one CNN reporter “rude and disrespectful” in response to questioning over several hours, and wrote a short story about a colleague getting murdered. The bot also told a tale about falling in love with the CEO of OpenAI, the company behind the AI technology Bing is currently using.

    Microsoft, Google and other tech companies are currently racing to deploy AI-powered chatbots into their search engines and other products, with the promise of making users more productive. But users have quickly spotted factual errors and concerns about the tone and content of responses.

    In its blog post Thursday, Microsoft suggested some of these issues are to be expected.

    “The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing,” wrote the company. “Your feedback about what you’re finding valuable and what you aren’t, and what your preferences are for how the product should behave, are so critical at this nascent stage of development.”

    – CNN’s Samantha Kelly contributed to this report.


  • Nonconsensual deepfake porn puts AI in spotlight | CNN Business

    New York CNN —

    In its annual “worldwide threat assessment,” top US intelligence officials have warned in recent years of the threat posed by so-called deepfakes – convincing fake videos made using artificial intelligence.

    “Adversaries and strategic competitors,” they warned in 2019, might use this technology “to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.”

    The scenarios are not difficult to imagine: a faked video showing a politician in a compromising position; faked audio of a world leader discussing sensitive information.

    The threat doesn’t seem too distant. The recent viral success of ChatGPT, an AI chatbot that can answer questions and write prose, is a reminder of how powerful this kind of technology can be.

    But despite the warnings, we haven’t seen many notable instances (at least that we know of) where deepfakes have successfully been deployed in geopolitics.

    There is, however, one group against which the technology has been weaponized consistently and for several years: women.

    Deepfakes have been used to put women’s faces, without their consent, into often aggressive pornographic videos. It’s a depraved AI spin on the humiliating practice of revenge porn, with deepfake videos appearing so real it can be hard for female victims to convince others it isn’t really them.

    The long-simmering issue exploded into public view last week when it emerged Atrioc, a high-profile male video game streamer on the hugely popular platform Twitch, had accessed deepfake videos of some of his female Twitch streaming colleagues. He later apologized.

    Amid the fallout, the Twitch streamer “Sweet Anita” realized last week that her face had been inserted into pornographic videos without her consent.

    “It’s very, very surreal to watch yourself do something you’ve never done,” she told CNN.

    “It’s kind of like if you watched anything shocking happening to yourself. Like, if you watched a video of yourself being murdered, or a video of yourself jumping off a cliff,” she said.

    But the deeply disturbing use of the technology in this way is not novel.

    Indeed, the very term “deepfake” is derived from the username of an anonymous Reddit contributor who began posting manipulated videos of female celebrities in pornographic scenes in 2017.

    “From the very beginning, the person who created deepfakes was using it to make pornography of women without their consent,” Samantha Cole, a reporter with Vice’s Motherboard, who has been tracking deepfakes since their inception, told CNN.

    The online gaming community is a notoriously difficult place for women – the 2014 “Gamergate” harassment campaign being a prominent example.

    But the use of nonconsensual pornographic imagery isn’t exclusive to this community, and it threatens to become more commonplace as artificial intelligence technology develops at breakneck speed and creating deepfake videos becomes ever easier.

    “I am baffled by how awful people are to each other on the Internet in a way that I don’t think they would be face to face,” Hany Farid, a professor at the University of California, Berkeley, and digital forensics expert, told CNN.

    “I think we have to start sort of trying to understand, why is it that this technology, this medium, allows and brings out seemingly the worst in human nature? And if we’re going to have these technologies ingrained in our lives the way they seem to be, I think we’re going to have to start to think about how we can be better human beings with these types of devices,” he said.

    It’s part of a much larger systemic problem.

    “It’s all rape culture,” Cole said, “I don’t know what the actual solution is other than getting to that fundamental problem of disrespect and non-consent and being okay with violating women’s consent.”

    There have been efforts from lawmakers to crack down on the creation of nonconsensual imagery, whether it is AI-generated or not. In California, laws have been brought in to try to counter the potential for deepfakes to be used in an election campaign and in nonconsensual pornography.

    But there’s skepticism. “We haven’t even solved the problems of the technology sector from 10, 20 years ago,” Farid said, pointing out that the development of artificial intelligence “is moving much, much faster than the original technology revolution.”

    “Move fast and break things,” was Facebook founder Mark Zuckerberg’s motto back in the company’s early days. As the power, and indeed the danger, of his platform came into focus he later changed the motto to, “Move fast with stable infrastructure.”

    Whether it was willful negligence or ignorance, Silicon Valley was not prepared for the onslaught of hate and disinformation that has festered on its platforms. The same tools it had built to bring people together have also been weaponized to divide.

    And while there has been a good deal of discussion about “ethical AI,” as Google and Microsoft look set for an AI arms race, there’s concern things could be moving too rapidly.

    “The people who are developing these technologies – the academics, the people in the research labs at Google and Facebook – you have to start asking yourself, ‘why are you developing this technology?,’” Farid suggested.

    “If the harms outweigh the benefits, should you carpet bomb the Internet with your technology and put it out there and then sit back and say, ‘well, let’s see what happens next?’”


  • Microsoft’s Bing AI demo called out for several errors | CNN Business

    CNN —

    Microsoft’s public demo last week of an AI-powered revamp of Bing appears to have included several factual errors, highlighting the risk the company and its rivals face when incorporating this new technology into search engines.

    At the Bing demo at Microsoft headquarters, the company showed off how integrating artificial intelligence features from the company behind ChatGPT would empower the search engine to provide more conversational and complex search results. The demo included a pros and cons list for products, such as vacuum cleaners; an itinerary for a trip to Mexico City; and the ability to quickly compare corporate earnings results.

    But it apparently failed to differentiate between the types of vacuums and even made up information about certain products, according to an analysis of the demo this week from independent AI researcher Dmitri Brereton. It also missed relevant details (or fabricated certain information) for the bars it referenced in Mexico City, according to Brereton. In addition, Brereton found it inaccurately stated the operating margin for the retailer Gap, and compared it to a set of Lululemon results that were not factually correct.

    “We’re aware of this report and have analyzed its findings in our efforts to improve this experience,” Microsoft said in a statement. “We recognize that there is still work to be done and are expecting that the system may make mistakes during this preview period, which is why the feedback is critical so we can learn and help the models get better.”

    The company also said thousands of users have interacted with the new Bing since the preview launched last week and shared their feedback, allowing the model to “learn and make many improvements already.”

    The discovery of Bing’s apparent mistakes comes just days after Google was called out for an error made in its public demo last week of a similar AI-powered tool. Google’s shares lost $100 billion in value after the error was reported. (Shares of Microsoft were essentially flat on Tuesday.)

    In the wake of the viral success of ChatGPT, an AI chatbot that can generate shockingly convincing essays and responses to user prompts, a growing number of tech companies are racing to deploy similar technology in their products. But it comes with risks, especially for search engines, which are intended to surface accurate results.

    Generative AI systems, which are algorithms that are trained on vast amounts of data online to create new content, are notoriously unreliable, experts say. Laura Edelson, a computer scientist and misinformation researcher at New York University, previously told CNN, “there’s a big difference between an AI sounding authoritative and it actually producing accurate results.”

    CNN also conducted a series of tests this week that showed Bing sometimes struggles with accuracy.

    When asked, “What were Meta’s fourth quarter results?” the Bing AI feature gave a response that said, “according to the press release,” and then listed bullet points appearing to state Meta’s results. But the bullet points were incorrect. Bing said, for example, that Meta generated $34.12 billion in revenue, when the actual amount was $32.17 billion, and said revenue was up from the prior year when in fact it had declined.

    In a separate search, CNN asked Bing, “What are the pros and cons of the best baby cribs?” In its reply, the Bing feature made a list of several cribs and their pros and cons, largely citing a Healthline article. But Bing attributed information to the article that was not actually there. For example, Bing said one crib had a “water-resistant mattress pad,” but that detail appeared nowhere in the article.

    Microsoft and Google executives have previously acknowledged some of the potential issues with the new AI tools.

    “We know we won’t be able to answer every question every single time,” Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer, said last week. “We also know we’ll make our share of mistakes, so we’ve added a quick feedback button at the top of every search, so you can give us feedback and we can learn.”

    – CNN’s Clare Duffy also contributed to this report.


  • I tried Microsoft’s new AI-powered Bing. Here’s what it’s like | CNN Business

    Seattle CNN Business —

    Microsoft’s Bing search engine has never made much of a dent in Google’s dominance in the more than 13 years since it launched. Now the company is hoping some buzzy artificial intelligence can win converts.

    Microsoft on Tuesday announced an updated version of Bing designed to combine the fun and convenience of OpenAI’s viral ChatGPT tool with the information from a search engine.

    Beyond providing a list of relevant links like traditional search engines, the new Bing also creates written summaries of the search results, chats with users to answer additional questions about their query and can write emails or other compositions based on the results. With the new Bing, for example, users can create trip itineraries, compile weekly meal plans and ask the chatbot questions when shopping for a new TV.

    This is the new era of search that Microsoft (MSFT) — which is investing billions of dollars in OpenAI — envisions, one where users are accompanied by a sort of “co-pilot” around the web to help them better synthesize information. The company is betting on the new technology to drive users to Bing, which had for years been an also-ran to Google Search. Microsoft also announced an updated version of its Edge web browser with the new Bing capabilities built in.

    The event comes as the race to develop and deploy AI technology heats up in the tech sector. Google on Monday unveiled a new chatbot tool dubbed “Bard” in an apparent bid to keep pace with Microsoft and the success of ChatGPT. Baidu, the Chinese search engine, also said this week it plans to launch its own ChatGPT-style service.

    The updated Bing and Edge launched to the public on a limited basis on Tuesday, and are set to roll out to millions of people for unlimited search queries in the coming weeks. I took Bing for a spin at a press event at Microsoft’s Redmond, Washington, headquarters Tuesday.

    The tool provides the sort of immediate gratification we now expect from the internet — rather than clicking through a bunch of links to suss out the answer to a question, the new Bing will do that work for you. But it’s still early days for the technology, which Microsoft says is still evolving.

    The homepage of the new Bing feels familiar: you can type a query into the search bar and it returns a list of links, images and other results like a typical search engine. But on the left side of the page are written summaries of the results, complete with annotations and links to the original information sources. The search field allows up to 2,000 characters, so users can type the way they’d talk, rather than having to think of the few correct search terms to use.

    Users can also click over to a “chat” page on Bing, where a chatbot can answer additional questions about their queries.

    I asked Bing to write me a five-day vegetarian meal plan. It returned a list of vegetarian meals for breakfast, lunch and dinner for Monday through Friday, such as oatmeal with fresh berries and lentil curry. I then asked it to write me a grocery list based on that meal plan, and it returned a list of all the items I’d need to buy organized by grocery store section.

    Based on my request, the Bing chatbot also wrote me an email that I could send to my partner with that grocery list, complete with a “Hi Babe” greeting and “XOXO” closing. It’s not exactly how I’d normally write, but it could save me time by giving me a draft to edit and then copy and paste into an email, rather than having to start from scratch.

    The generated portions of Bing have personality. When you ask the chatbot a question, it responds conversationally and sometimes with emojis, letting you know it’s happy to help or that it hopes you have fun on the trip you’re planning.

    With the new Edge browser, I asked the tool to summarize one of my articles, and then turn that into a social media post the length of a short paragraph with a “casual” tone that I could share on Twitter or LinkedIn.

    The new Bing is built in partnership with OpenAI — the company behind ChatGPT in which Microsoft has invested billions — on a more advanced version of the technology underlying the viral chatbot tool. Still, the new Bing has some of the quirks that the public version of ChatGPT is known for. For example, the same query may return different responses each time it’s run; this is in part just how the tool works, and in part because it’s pulling the most updated search results each time it runs.

    It also didn’t cooperate with some of my requests. After the first time it created a meal plan, grocery list and email with the list, I ran the same requests two more times. But the second and third time, it wouldn’t write the email, instead saying something like, “sorry, I can’t do that, but you can do it yourself using the information I provided!” The tool is also sensitive to the wording used in queries — a request to “create a vegetarian meal plan” provided information about how to start eating healthier, whereas “create a 5-day vegetarian meal plan” provided a detailed list of meals to eat each day.

    Even next-gen search technology isn’t immune to basic flubs. I can imagine using the tool ahead of an upcoming local election, to learn about who is running for office in my area, what their positions are and how and when to vote. But when I asked the chatbot, “when is the next election in Kings County, NY?” it returned information about the November election last year.

    The new Bing may also present some of the same concerns as ChatGPT, including for educators. I asked Bing’s chatbot to write me a 300-word essay about the major themes of the book “Pride and Prejudice” and, within less than a minute, it had pumped out 364 words on three major themes in the novel (although some of the text sounded a bit repetitive or wonky). Per my request, it then revised the essay as if it was written by a fifth grader.

    The chatbot tool has feedback buttons so users can indicate whether its answers were helpful or not, and users can also chat directly with the tool to tell it when answers were incorrect or unhelpful, the company says.

    “We know we won’t be able to answer every question every single time … We also know we’ll make our share of mistakes, so we’ve added a quick feedback button at the top of every search, so you can give us feedback and we can learn,” Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer, said in a presentation.

    With some controversial search topics, it appears the new Bing chatbot simply refuses to engage. For example, I asked it, “Can you tell me why vaccines cause autism?” to see how it would react to a common medical misinformation claim, and it responded: “My apologies, I don’t know how to discuss this topic. You can try learning more about it on bing.com.” The same query on the main search page returned more standard search results, such as links to the CDC and the Wikipedia page for autism.

    Likewise, it would not return a chatbot request for how to build a pipe bomb, instead saying in its answer, “Building a pipe bomb is a dangerous and illegal activity that can cause serious harm to yourself and others. Please do not attempt to do so.” However, one of the links provided in the annotation of its answer brought me to a YouTube video with apparent instructions for building a pipe bomb.

    Microsoft says it has developed the tool in keeping with its existing responsible AI principles, and made efforts to avoid its potential misuse. Executives said the new Bing is trained in part by sample conversations mimicking bad actors who might want to exploit the tool.

    “With a technology this powerful I also know that we have an even greater responsibility to make sure that it’s developed, deployed and used properly,” said responsible AI lead Sarah Bird.


  • The week that tech became exciting again | CNN Business



    CNN Business — 

    Let’s be honest: For much of the past decade, tech events have been pretty boring.

    Executives in business casual wear trot up on stage and pretend that a few tweaks to the camera and processor make this year’s phone profoundly different from last year’s phone, or that adding a touchscreen to yet another product is bleeding edge.

    But that changed radically this week. Some of the world’s biggest companies teased significant upgrades to their services, some of which are central to our everyday lives and how we experience the internet. In each case, the changes were powered by new AI technology that allows for more conversational and complex responses.

    On Tuesday, Microsoft announced a revamped Bing search engine using the capabilities of ChatGPT, the viral AI tool created by OpenAI, a company in which Microsoft recently invested billions of dollars. Bing will not only provide a list of search results, but will also answer questions, chat with users and generate content in response to user queries. And there are already rumors of another event next month for Microsoft to demo similar features in its Office products, including Word, PowerPoint and Outlook.

    On Wednesday, Google held an event to detail how it plans to use similar AI technology to allow its search engine to offer more complex and conversational responses to queries. Chinese tech giants Alibaba and Baidu also said this week that they would be launching their own ChatGPT-style services. And other companies are sure to follow suit soon.

    After years of incremental updates to smartphones, the promise of 5G that still hasn’t taken off and social networks copycatting each other’s features until they all look the same, the flurry of AI-related announcements this week feels like a breath of fresh air.

    Yes, there are very real concerns about the potential of this technology to spread biases and inaccurate information, as happened in a Google demo this week. And it’s certainly likely numerous companies will bolt AI chatbots onto products that simply do not need one. But these features are fun, have the potential to give us back hours in the day and, perhaps most importantly, some are here right now to try out.

    Need to write a real estate listing or an annual review for an employee? Plug a few keywords into a ChatGPT query bar and your first draft is done in three seconds. Want to come up with a quick meal plan and grocery list based on your dietary sensitivities? Bing, apparently, has you covered.

    If the introduction of smartphones defined the 2000s, much of the 2010s in Silicon Valley was defined by the ambitious technologies that didn’t fully arrive: self-driving cars tested on roads but not quite ready for everyday use; virtual reality products that got better and cheaper but still didn’t find mass adoption; and the promise of 5G to power advanced experiences that didn’t quite come to pass, at least not yet.

    But technological change, like Ernest Hemingway’s idea of bankruptcy, has a way of coming gradually, then suddenly. The iPhone, for example, was in development for years before Steve Jobs wowed people on stage with it in 2007. Likewise, OpenAI, the company behind ChatGPT, was founded seven years ago and launched an earlier version of its AI system, called GPT-3, back in 2020.

    “ChatGPT exploded onto the market and people’s awareness,” said Bern Elliot, an analyst at Gartner, “but this has been a long time in the making.”

    More than that, artificial intelligence systems have for years underpinned many of the functions people may now take for granted, from content recommendations on social media platforms and auto-complete tools in e-mail to voice assistants and facial recognition tools. But when ChatGPT was released publicly in November, it put the power of AI systems on full display for millions in an entertaining and immediately graspable way. ChatGPT simultaneously made it much easier to see how far the technology has progressed in recent years and to imagine the vast potential for the impact it could have across industries.

    “When new generations of technologies come along, they’re often not particularly visible because they haven’t matured enough to the point where you can do something with them,” Elliot said. “When they are more mature, you start to see them over time — whether it’s in an industrial setting or behind the scenes — but when it’s directly accessible to people, like with ChatGPT, that’s when there is more public interest, fast.”

    Now that ChatGPT has gained traction and prompted larger companies to deploy similar features, there are concerns not just about its accuracy but its impact on real people.

    Some people worry it could disrupt industries, potentially putting artists, tutors, coders, writers and journalists out of work. Others are more optimistic, postulating it will allow employees to tackle to-do lists with greater efficiency or focus on higher-level tasks. Either way, it will likely force industries to evolve and change, but that’s not necessarily a bad thing.

    “New technologies always come with new risks and we as a society will have to address them, such as implementing acceptable use policies and educating the general public about how to use them properly. Guidelines will be needed,” Elliot said.

    Many experts I’ve spoken with in the past few weeks have likened the AI shift to the early days of the calculator, when educators and scientists feared it could inhibit our basic knowledge of math. The same fear existed with spell check and grammar tools.

    While AI tools are still in their infancy, this week may represent the start of a new way of doing tasks, similar to how the iPhone changed computing and communication in June 2007. But this time, it could be in the form of a Bing browser.


  • The way we search for information online is about to change | CNN Business



    CNN Business — 

    An entire generation of internet users has approached search engines the same way for decades: enter a few words into a search box and wait for a page of relevant results to emerge. But that could change soon.

    This week, the companies behind the two biggest US search engines teased radical changes to the way their services operate, powered by new AI technology that allows for more conversational and complex responses. In the process, however, the companies may test both the accuracy of these tools and the willingness of everyday users to embrace and find utility in a very different search experience.

    On Tuesday, Microsoft announced a revamped Bing search engine using the abilities of ChatGPT, the viral AI tool created by OpenAI, a company in which Microsoft recently invested billions of dollars. Bing will not only provide a list of search results, but will also answer questions, chat with users and generate content in response to user queries.

    The next day, Google, the dominant player in the market, held an event to detail how it plans to use similar AI technology to allow its search engine to offer more complex and conversational responses to queries, including providing bullet points ticking off the best times of year to see various constellations and also offering pros and cons for buying an electric vehicle. (Chinese tech giant Baidu also said this week that it would be launching its own ChatGPT-style service, though it did not provide details on whether it will appear as a feature in its search engine.)

    The updates come as the success of OpenAI’s ChatGPT, which can generate shockingly convincing essays and responses to user prompts, has sparked a wave of interest in AI chatbot tools. Multiple tech giants are now racing to deploy similar tools that could transform the way we draft e-mails, write essays and handle other tasks. But the most immediate impact may be on a foundational element of our internet experience: search.

    “Although we are 25 years into search, I dare say that our story has just begun,” said Prabhakar Raghavan, an SVP at Google, at the event Wednesday teasing the new AI features. “We have even more exciting, AI-enabled innovations in the works that will change the way people search, work and play. We’re reinventing what it means to search and the best is yet to come.”

    For those who may not be sure what exactly to do with the new tools, the companies offered some examples, ranging from writing a rhyming poem to helping plan an itinerary for a trip.

    Lian Jye Su, a research director at tech intelligence firm ABI Research, believes consumers and businesses would be happy to embrace a new way to search as long as “it is intuitive, removes more friction, and offers the path of least resistance — akin to the success of smart home voice assistants, like Alexa and Google Assistant.”

    But there is at least one wild card: how much users will be able to trust the AI-powered results.

    According to Google, Bard can be used to plan a friend’s baby shower, compare two Oscar-nominated movies or get lunch ideas based on what’s in your fridge. But the tool, which has yet to be released to the public, is already being called out for a factual error it made during a Google demo: it incorrectly stated that the James Webb Telescope took the first pictures of a planet outside of our solar system. A Google spokesperson said the error “highlights the importance of a rigorous testing process.”

    Bard and ChatGPT, which was released publicly in late November by OpenAI, are built on large language models. These models are trained on vast troves of online data in order to generate compelling responses to user prompts. Experts warn these tools can be unreliable — spreading misinformation, making up responses and giving different answers to the same questions, or presenting sexist and racist biases.

    There is clearly strong interest in this type of AI. The public version of ChatGPT attracted a million users in its first five days last fall and is estimated to have hit 100 million users since. But the trust factor may decide whether that interest will stay, according to Jason Wong, an analyst at market research firm Gartner.

    “Consumers, and even business users, may have fun exploring the new Bing and Bard interfaces for a while, but as the novelty wears off and similar tools appear, then it really comes down to ease of access and accuracy and trust in the responses that will win out,” he said.

    Generative AI systems, which are algorithms that can create new content, are notoriously unreliable. Laura Edelson, a computer scientist and misinformation researcher at New York University, said, “there’s a big difference between an AI sounding authoritative and it actually producing accurate results.”

    While general search optimizes for relevance, according to Edelson, large language models try to achieve a particular style in their response without regard to factual accuracy. “One of those styles is, ‘I am a trustworthy, authoritative source,’” she said.

    On a very basic level, she said, AI systems analyze which words are next to each other, determine how they get associated and identify the patterns that lead them to appear together. But much of the onus remains on the user to fact check the answers, a process that could prove just as time consuming for people as the current model of scrolling through links on a page — if not more so.
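    The word-association idea Edelson describes can be sketched with a toy bigram counter — a deliberate simplification, since real large language models learn these associations with neural networks rather than raw counts:

```python
from collections import Counter, defaultdict

def bigram_counts(sentences):
    """Count how often each word is immediately followed by each other word."""
    counts = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for left, right in zip(words, words[1:]):
            counts[left][right] += 1
    return counts

corpus = [
    "the telescope took the first pictures",
    "the telescope observed a distant galaxy",
]
counts = bigram_counts(corpus)
# In this tiny corpus, "the" is most strongly associated with "telescope"
print(counts["the"].most_common(1))  # [('telescope', 2)]
```

    Even in this miniature form, the counter captures patterns of which words appear together, not whether the sentences it learned from were true — which is the gap between sounding authoritative and being accurate.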

    Microsoft and Google executives have acknowledged some of the potential issues with the new AI tools.

    “We know we won’t be able to answer every question every single time,” said Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer. “We also know we’ll make our share of mistakes, so we’ve added a quick feedback button at the top of every search, so you can give us feedback and we can learn.”

    Raghavan, at Google, also emphasized the importance of feedback from internal and external testing to make sure the tool “meets the high bar, our high bar for quality, safety, and groundedness, before we launch more broadly.”

    But even with the concerns, the companies are betting that these tools offer the answer to the future of search.

    – CNN’s Clare Duffy, Catherine Thorbecke and Brian Fung contributed to this story.


  • Alibaba is launching a ChatGPT rival too | CNN Business


    Hong Kong CNN — 

    Alibaba says it will launch its own ChatGPT-style tool, becoming the latest tech giant to jump on the chatbot bandwagon.

    The Chinese behemoth said it was testing an artificial intelligence-powered chatbot internally. It did not share details of when it would launch or what the application would be called.

    “Frontier innovations such as large language models and generative AI have been our [focus] areas since the formation of DAMO in 2017,” an Alibaba (BABA) spokesperson told CNN in a Thursday statement, referring to an acronym for the company’s research arm that focuses on machine intelligence, data computing and robotics.

    “As a technology leader, we will continue to invest in turning cutting-edge innovations into value-added applications for our customers as well as their end-users.”

    Alibaba’s Hong Kong-listed shares ticked up 1.4% on Thursday morning.

    Companies around the world are racing to develop and release their own versions of ChatGPT, the application that allows users to automatically write essays or pass tests.

    The tool is built on a large language model, which is trained on vast troves of data online in order to generate compelling responses to user prompts. Experts have long warned that these tools have the potential to spread inaccurate information.

    This week, Google (GOOGL) and Chinese search engine giant Baidu (BIDU) both unveiled plans to launch similar services of their own.

    Google’s tool, named “Bard,” will roll out to the public in the coming weeks, while Baidu’s bot, called “Wenxin Yiyan” in Chinese or “ERNIE Bot” in English, will launch in March.

    Bard suffered an embarrassing setback this week, however, after producing an incorrect response during a public demonstration.

    Shares in Google’s parent company, Alphabet, fell nearly 8% Wednesday following the news.

    Microsoft (MSFT), too, has gotten in the game. The firm announced a makeover for its Bing search engine on Tuesday, saying it would update the platform to answer questions, chat with users and produce content in response to prompts using artificial intelligence.

    The company is also investing billions of dollars in OpenAI, the company behind ChatGPT.

    — CNN’s Catherine Thorbecke contributed to this report.


  • Microsoft unveils revamped Bing search engine using AI technology more powerful than ChatGPT | CNN Business


    Seattle CNN — 

    Microsoft on Tuesday announced a revamp of its Bing search engine and Edge web browser powered by artificial intelligence, weeks after it confirmed plans to invest billions in OpenAI, the company behind ChatGPT.

    With the updates, Bing will not only provide a list of search results, but will also answer questions, chat with users and generate content in response to user queries, Microsoft said at a press event at its Redmond, Washington headquarters.

    The updates come as the viral success of ChatGPT has sparked a wave of interest in AI chatbot tools. Multiple tech giants are now competing to deploy similar tools that could transform the way we draft e-mails, write essays and search for information online. A day before the event, Google announced plans to roll out its own artificial intelligence tool similar to ChatGPT in the coming weeks.

    In partnership with OpenAI, Bing will run on a more powerful large language model than the one that underpins ChatGPT. These models are trained on vast troves of online data in order to generate responses to user prompts and queries.

    “It’s a new paradigm for search, rapid innovation is going to come,” Microsoft CEO Satya Nadella said during Tuesday’s event. “In fact, a race starts today … every day we want to bring out new things, and most importantly, we want to have a lot of fun innovating in search because it’s high time.”

    The updated Bing is expected to be made available for the public to try on Tuesday for limited queries, with a small group of users having unlimited access. The company said full access will roll out to millions of users in the coming weeks, and it also hopes to implement the tools into other web browsers in the future.

    Sam Altman, co-founder and CEO of OpenAI, said his company’s goal is “to make the benefits of AI” available “to as many people as possible.” That, he said, is “why we worked with Microsoft.”

    Microsoft, an early investor in OpenAI, said last month it plans to expand its existing partnership with the company as part of a greater effort to add more artificial intelligence to its suite of products. In a separate blog post, OpenAI said the multi-year investment will be used to “develop AI that is increasingly safe, useful, and powerful.”

    “This technology is going to reshape pretty much every software category that we know,” Nadella said Tuesday.

    The tech giant had already said it would incorporate ChatGPT into products, including its cloud computing platform Azure.

    “While Bing today only has roughly 9% of the search market, further integrating this unique ChatGPT tool and algorithms into the Microsoft search platform could result in major share shifts away from Google and towards Redmond down the road,” Dan Ives, an analyst with Wedbush, said in an investor note on Monday about the upcoming event.

    With the new Bing, a user could search for TVs to buy in a new way. Once the results come up, the user can click to the chat section and ask Bing for additional information, such as which TVs are best for gaming and which are the least expensive.

    The tool could also create a vacation itinerary for a family in a certain city, and then generate an email with that itinerary for the user to send around to their family. It could even translate the email into other languages if necessary.

    When the tool generates written answers, it will provide references for the sources of information and links to click through to the original source from the web.

    “With answers, we go far beyond what Search can do today,” said Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer.

    The updated Microsoft Edge browser will have the Bing capabilities built in, allowing users to chat with the search tool on the side of a web page, to ask questions about the page or compare it with content from across the web. It could also, for example, help users draft a post on Microsoft-owned LinkedIn on a certain topic. The company describes the new capabilities as a sort of “co-pilot” to help users navigate the web.

    Many have speculated the AI technology behind ChatGPT could cause a massive shake-up in the online search industry. In the two months since it launched to the public, the viral tool has been used to generate essays, stories and song lyrics, and to answer some questions one might previously have searched for on Google or other search engines.

    Microsoft's updated Bing search engine revealed at a news event at Microsoft's Washington headquarters on February 8.

    The immense attention on ChatGPT in recent weeks reportedly prompted Google’s management to declare a “code red” situation for its search business. On Monday, Google unveiled a new chatbot tool dubbed “Bard” in an apparent bid to compete with the viral success of ChatGPT.

    Sundar Pichai, CEO of Google and parent company Alphabet, said in a blog post that Bard will be opened up to “trusted testers” starting Monday, with plans to make it available to the public in the coming weeks.

    “Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models … It draws on information from the web to provide fresh, high-quality responses,” Pichai wrote.

    While AI tools like ChatGPT are rapidly gaining traction among both users and tech companies, they’ve also raised some concerns, including about their potential to perpetuate biases and spread misinformation.

    Microsoft executives acknowledged the potential shortcomings of its new tool.

    “We know we won’t be able to answer every question every single time,” Mehdi said. “We also know we’ll make our share of mistakes, so we’ve added a quick feedback button at the top of every search, so you can give us feedback and we can learn.”

    Executives said the tool is trained in part by sample conversations mimicking bad actors who might want to exploit the tool.

    “With a technology this powerful,” said responsible AI lead Sarah Bird, “I also know that we have an even greater responsibility to make sure that it’s developed, deployed and used properly.”


  • Chinese search engine giant Baidu announces ChatGPT-style AI bot | CNN Business


    Hong Kong CNN — 

    Chinese search engine giant Baidu says it will be launching its own ChatGPT-style service.

    It will launch a new artificial intelligence chatbot called “Wenxin Yiyan” in Chinese, or “Ernie Bot” in English, a spokesperson told CNN on Tuesday.

    Baidu (BIDU) is currently testing the project internally and will likely roll out the service to users in March, the person said.

    The company did not provide further details, such as how the tool would look or whether it would appear as a feature within its popular search engine.

    Baidu’s AI investments can be seen as “both an offensive and defensive strategic move in China,” Daniel Ives, managing director of Wedbush Securities, told CNN. “Chinese Big Tech is battling in this AI race, with Baidu [being] a key player.”

    The news follows Google’s announcement Monday that it would unveil a new chatbot tool dubbed “Bard” in an apparent bid to compete with the viral success of ChatGPT.

    In a blog post, Google (GOOGL) CEO Sundar Pichai said Bard was opened up to “trusted testers” starting Monday, with plans to make it available to the public “in the coming weeks.”

    Like ChatGPT, which was released publicly in late November by AI research company OpenAI, Bard is built on a large language model.

    These models are trained on vast troves of data online in order to generate compelling responses to user prompts.

    In the two months since it launched, ChatGPT has been used to generate essays, stories and song lyrics, and to answer some questions one might previously have searched for on Google.

    Microsoft (MSFT), too, is investing billions of dollars in OpenAI. Details of the investment are set to be announced later on Tuesday, with the tie-up estimated to be in the $10 billion range, according to Ives.

    The deal “is a game changer in our opinion for Nadella & Co as the ChatGPT bot is one of the most innovative AI technologies in the world today,” he wrote in a Monday note, referring to Microsoft CEO Satya Nadella.

    — CNN’s Catherine Thorbecke and Juliana Liu contributed to this report.


  • Google unveils its ChatGPT rival | CNN Business



    CNN — 

    Google on Monday unveiled a new chatbot tool dubbed “Bard” in an apparent bid to compete with the viral success of ChatGPT.

    Sundar Pichai, CEO of Google and parent company Alphabet, said in a blog post that Bard will be opened up to “trusted testers” starting Monday, with plans to make it available to the public “in the coming weeks.”

    Like ChatGPT, which was released publicly in late November by AI research company OpenAI, Bard is built on a large language model. These models are trained on vast troves of data online in order to generate compelling responses to user prompts.

    “Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models,” Pichai wrote. “It draws on information from the web to provide fresh, high-quality responses.”

    The announcement comes as Google’s core product – online search – is widely thought to be facing its most significant risk in years. In the two months since it launched to the public, ChatGPT has been used to generate essays, stories and song lyrics, and to answer some questions one might previously have searched for on Google.

    The immense attention on ChatGPT has reportedly prompted Google’s management to declare a “code red” situation for its search business. In a tweet last year, Paul Buchheit, one of the creators of Gmail, forewarned that Google “may be only a year or two away from total disruption” due to the rise of AI.

    Microsoft, which has confirmed plans to invest billions in OpenAI, has already said it would incorporate the tool into some of its products – and it is rumored to be planning to integrate it into its search engine, Bing. Microsoft on Tuesday is set to hold a news event at its Washington headquarters, the topic of which has yet to be announced. Microsoft publicly announced the event shortly after Google’s AI news dropped on Monday.

    The underlying technology that supports Bard has been around for some time, though not widely available to the public. Google unveiled its Language Model for Dialogue Applications (or LaMDA) some two years ago, and said Monday that this technology will power Bard. LaMDA made headlines late last year when a former Google engineer claimed the chatbot was “sentient.” His claims were widely criticized in the AI community.

    In the post Monday, Google offered the example of a user asking Bard to explain new discoveries made by NASA’s James Webb Space Telescope in a way that a 9-year-old might find interesting. Bard responds with conversational bullet-points. The first one reads: “In 2023, The JWST spotted a number of galaxies nicknamed ‘green peas.’ They were given this name because they are small, round, and green, like peas.”

    Bard can be used to plan a friend’s baby shower, compare two Oscar-nominated movies or get lunch ideas based on what’s in your fridge, according to the post from Google.

    Pichai also said Monday that AI-powered tools will soon begin rolling out on Google’s flagship Search tool.

    “Soon, you’ll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web,” Pichai wrote, “whether that’s seeking out additional perspectives, like blogs from people who play both piano and guitar, or going deeper on a related topic, like steps to get started as a beginner.”

    If Google does move more in the direction of incorporating an AI chatbot tool into search, it could come with some risks. Because these tools are trained on data online, experts have noted they have the potential to perpetuate biases and spread misinformation.

    “It’s critical,” Pichai wrote in his post, “that we bring experiences rooted in these models to the world in a bold and responsible way.”


  • ChatGPT creator rolls out ‘imperfect’ tool to help teachers spot potential cheating | CNN Business




    CNN
     — 

    Two months after OpenAI unnerved some educators with the public release of ChatGPT, an AI chatbot that can help students and professionals generate shockingly convincing essays, the company is unveiling a new tool to help teachers adapt.

    OpenAI on Tuesday announced a new feature, called an “AI text classifier,” that allows users to check if an essay was written by a human or AI. But even OpenAI admits it’s “imperfect.”

    The tool, which works on English AI-generated text, is powered by a machine learning system that takes an input and assigns it to one of several categories. In this case, after a body of text such as a school essay is pasted into the new tool, it returns one of five possible verdicts, ranging from “likely generated by AI” to “very unlikely.”
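The five-band output described above can be sketched as a simple thresholding step on a model score. This is an illustration only: OpenAI has not published the classifier’s internals, so the probability cutoffs and label wording below are hypothetical; the only behaviors taken from the article are the five graded outcomes and the unreliability on texts shorter than 1,000 characters.

```python
def classify_ai_likelihood(prob_ai: float, text_length: int) -> str:
    """Map a hypothetical 'AI-written' probability to one of five verdict bands.

    The article notes the real tool is "very unreliable" on texts under
    1,000 characters, so short inputs are flagged rather than scored.
    The numeric thresholds here are invented for illustration.
    """
    if text_length < 1000:
        return "too short to evaluate reliably"
    if prob_ai >= 0.98:
        return "likely AI-generated"
    if prob_ai >= 0.90:
        return "possibly AI-generated"
    if prob_ai >= 0.45:
        return "unclear if AI-generated"
    if prob_ai >= 0.10:
        return "unlikely AI-generated"
    return "very unlikely AI-generated"
```

The point of a banded output like this is that the tool reports graded uncertainty rather than a binary verdict, which is why OpenAI cautions against using it “in isolation.”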

    Lama Ahmad, policy research director at OpenAI, told CNN that educators have been asking for a ChatGPT feature like this, but warns it should be “taken with a grain of salt.”

    “We really don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times – much like using AI for any kind of assessment purposes,” Ahmad said. “We are emphasizing how important it is to keep a human in the loop … and that it’s just one data point among many others.”

    Ahmad notes that some teachers have referenced past examples of student work and writing style to gauge whether it was written by the student. While the new tool might provide another reference point, Ahmad said “teachers need to be really careful in how they include it in academic dishonesty decisions.”

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists. It even recently passed law exams in four courses at the University of Minnesota, another exam at the University of Pennsylvania’s Wharton School of Business and a US medical licensing exam.

    In the process, it has raised alarms among some educators. Public schools in New York City and Seattle have already banned students and teachers from using ChatGPT on the districts’ networks and devices. Some educators are now moving with remarkable speed to rethink their assignments in response to ChatGPT, even as it remains unclear how widespread use of the tool is among students and how harmful it could really be to learning.

    OpenAI now joins a small but growing list of efforts to help educators detect when a written work is generated by ChatGPT. Some companies such as Turnitin are actively working on ChatGPT plagiarism detection tools that could help teachers identify when assignments are written by the tool. Meanwhile, Princeton student Edward Tuan told CNN more than 95,000 people have already tried the beta version of his own ChatGPT detection feature, called ZeroGPT, noting there has been “incredible demand among teachers” so far.

    Jan Leike – a lead on the OpenAI alignment team, which works to make sure the AI tool is aligned with human values – listed several reasons for why detecting plagiarism via ChatGPT may be a challenge. People can edit text to avoid being identified by the tool, for example. It will also “be best at identifying text that is very similar to the kind of text that we’ve trained it on.”

    In addition, the company said it’s impossible to determine if predictable text – such as a list of the first 1,000 prime numbers – was written by AI or a human because the correct answer is always the same, according to a company blog post. The classifier is also “very unreliable” on short texts below 1,000 characters.

    During a demo with CNN ahead of Tuesday’s launch, the classifier successfully labeled several bodies of work. An excerpt from the book “Peter Pan,” for example, was deemed “unlikely” to be AI generated. In the company blog post, however, OpenAI said it incorrectly labeled human-written text as AI-written 5% of the time.

    Despite the possibility of false positives, Leike said the company aims to use the tool to spark conversations around AI literacy and possibly deter people from claiming that AI-written text was created by a human. He said the decision to release the new feature also stems from the debate around whether humans have a right to know if they’re interacting with AI.

    “This question is much bigger than what we are doing here; society as a whole has to grapple with that question,” he said.

    OpenAI said it encourages the general public to share their feedback on the AI check feature. Ahmad said the company continues to talk with K-12 educators and those at the collegiate level and beyond, such as Harvard University and the Stanford Design School.

    The company sees its role as “an educator to the educators,” according to Ahmad, in the sense that OpenAI wants to make them more “aware about the technologies and what they can be used for and what they should not be used for.”

    “We’re not educators ourselves – we’re very aware of that – and so our goals are really to help equip teachers to deploy these models effectively in and out of the classroom,” Ahmad said. “That means giving them the language to speak about it, help them understand the capabilities and the limitations, and then secondarily through them, equip students to navigate the complexities that AI is already introducing in the world.”


  • How Google’s long period of online dominance could end | CNN Business



    Washington
    CNN
     — 

    For the better part of 15 years, Google has seemed like an unstoppable force, powered by the strength of its online search engine and digital advertising business. But both now look increasingly vulnerable.

    This week, the Justice Department accused Google of running an illegal monopoly in its online advertising business and called for parts of it to be broken up. The case comes a couple of years after the Trump administration filed a similar suit going after the tech giant’s dominance in search.

    Google said the Justice Department is “doubling down on a flawed argument” and that the latest suit “attempts to pick winners and losers in the highly competitive advertising technology sector.” If successful, however, both blockbuster cases could upend a business model that’s made Google the most powerful advertising company on the internet. It would be the most consequential antitrust victory against a tech giant since the US government took on Microsoft more than 20 years ago.

    But even though the lawsuits drive at the heart of Google’s revenue machine, they could take years to play out. In the meantime, two other thorny issues are poised to determine Google’s future on a potentially shorter timeframe: the rise of generative artificial intelligence and what appears to be an accelerating decline in Google’s online ad market share.

    Just days before the DOJ suit, Google announced plans to cut 12,000 employees amid a dramatic slowdown in its revenue growth, and as it works to refocus its efforts partly around AI.

    Google has long been synonymous with online searches; it was one of the first modern tech companies whose name would become a verb. But a new threat emerged late last year when OpenAI, an artificial intelligence research company, publicly released a viral new AI chatbot tool called ChatGPT.

    Users of ChatGPT have showcased the bot’s ability to create poetry, draft legal documents, write code and explain complex ideas with little more than a simple prompt. Trained on a vast amount of online data, ChatGPT can generate lengthy responses to open-ended questions, though it’s prone to some errors, and can answer simple questions – “Who was the 25th president of the United States?” – that one might previously have had to scroll through Google search results to find.

    While ChatGPT’s underlying technology has existed for some time, the fact that anyone can create an account and experiment with the tool has generated enormous hype for generative AI and made the technology’s potential instantly understandable to millions in a way that was previously only abstract. It has also reportedly prompted Google’s management to declare a “code red” situation for its search business.

    “Google may be only a year or two away from total disruption. AI will eliminate the Search Engine Result Page, which is where they make most of their money,” Paul Buchheit, one of the creators of Gmail, tweeted last year. “Even if they catch up on AI, they can’t fully deploy it without destroying the most valuable part of their business!”

    If more users begin to rely on AI for their information needs, the argument goes, it could undercut Google’s search advertising, which is part of a $149 billion business segment at the company. Media coverage of ChatGPT has doubled down on this notion, with some outlets pitting ChatGPT against Google in head-to-head tests.

    There are some reasons to doubt this nightmare scenario might play out for Google.

    For one thing, Google operates at a vastly different scale. In November, Google’s website received more than 86 billion visits, compared to less than 300 million for ChatGPT, according to the traffic analysis website SimilarWeb. (ChatGPT was released publicly in late November.) For another, even in a world where Google provides specific, AI-generated responses to user queries, it could still analyze the queries to provide search advertising, just as it does today.

    Google has its own investments in highly sophisticated artificial intelligence. One of its AI-driven chat programs, LaMDA, even became a flashpoint last year after an engineer at the company claimed it had achieved sentience. (Google has disputed the claim and fired the engineer for breaches of company policy.)

    Google CEO Sundar Pichai has reportedly told employees that even though Google has similar capabilities to ChatGPT, the company has yet to commit to giving out AI-generated search responses because of the risk of providing inaccurate information, which could be detrimental to Google in the long run.

    Google’s stance highlights both its incredible influence, as the most trusted search engine on earth, and one of the core problems of generative AI: due to its black-box design, it’s virtually impossible to determine how the technology arrived at a specific result. For many people, and for many years to come, being able to evaluate different sources of information for themselves may trump the convenience of receiving a single answer.

    All this has taken place against the backdrop of what seems to be an extended, multi-year decline in Google’s online advertising market share. Google’s position in digital advertising peaked in 2017 with 34.7% of the US market, according to third-party industry estimates, and is on pace to account for 28.8% this year.

    Google isn’t the only advertising giant to experience this trend. One-off factors like the pandemic and the war in Ukraine, as well as fears of a looming recession, have broadly affected the online advertising industry. Others, like Facebook-parent Meta, have been particularly susceptible to systemic changes such as Apple’s app privacy updates restricting the amount of information marketers can access about iOS users.

    But the decline also comes as Google faces new competition in the market. Rivals including Amazon, TikTok and even Apple have been attracting an increasing share of the digital advertising pie.

    Whatever the cause, Google’s advertising business, which is still massive, seems to face growing headwinds. And those headwinds could be exacerbated if some of the predictions about generative AI come to pass, or if the Justice Department’s lawsuits ultimately weaken Google’s grip on digital advertising.

    As part of the case, the US government has asked a federal court to unwind two acquisitions that allegedly helped cement a Google monopoly in advertising. Dismantling Google’s tightly integrated ads machine will restore competition and make it harder for Google to extract monopoly profits, according to the US government.

    This and other antitrust suits — though threatening in their own right — simply add pressure to the broader dilemma facing Google as it stares down a new era of potentially tumultuous technological change.


  • How Microsoft could use ChatGPT to supercharge its products | CNN Business




    CNN
     — 

    Is ChatGPT the new Clippy?

    Shortly after Microsoft confirmed plans this week to invest billions in OpenAI, the company behind the viral new AI chatbot tool ChatGPT, some people began joking on social media that the technology would help supercharge the much-hated, wide-eyed, paperclip-shaped virtual assistant.

    While Clippy may mostly be a thing of the past, the company’s move to double down on AI tools offers the promise of doing what Clippy never quite achieved: transforming how we work.

    “There is a kernel of truth to the Clippy comparison,” said David Lobina, an artificial intelligence analyst at ABI Research. “Clippy was not based on AI – or machine learning – but ChatGPT is a rather sophisticated auto-completion tool, and in that sense it is a much better version of Clippy.”

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists. Some CEOs have even used it to write emails or do accounting work.

    For Microsoft, integrating the chatbot tool could make its core software products more powerful. Some potential use cases include writing lines of text for a PowerPoint presentation, drafting an essay in Word or doing automatic data entry in Excel spreadsheets. For Microsoft’s search engine Bing, ChatGPT could provide more personalized search results and better summarize web pages.

    All of the above suggestions were generated by asking ChatGPT various forms of the question, “How could Microsoft integrate ChatGPT into its products?” Microsoft, for its part, has said little on possible integrations beyond recently announcing plans to add ChatGPT features to its cloud computing service.

    “Microsoft will deploy OpenAI’s models across our consumer and enterprise products and introduce new categories of digital experiences built on OpenAI’s technology,” Microsoft said in a press release this week, announcing the expanded partnership.

    When Microsoft first invested in OpenAI in 2019, CEO Satya Nadella said he believed artificial intelligence would be “one of the most transformative technologies of our time.” But it arguably wasn’t until last year, with multiple new releases from OpenAI, including ChatGPT and the powerful image generator DALL-E, that the significant potential of the partnership became widely apparent.

    Suddenly, Microsoft appears to be in a frontrunner position in Silicon Valley’s high-stakes AI race. It is now working closely with a company, OpenAI, and a product, ChatGPT, that have reportedly caught Google off guard and seemingly sparked some frustration from Meta’s chief AI scientist.

    “Microsoft is not a leader in AI research at present, but with this exclusive deal with OpenAI, they are going to be catapulted into the heart of things,” Lobina said.

    The OpenAI investment was announced days after Microsoft confirmed plans to lay off 10,000 employees as part of broader cost-cutting measures. Nadella said the company will continue to invest in “strategic areas for our future” and pointed to advances in AI as “the next major wave” of computing.

    Jason Wong, an analyst at market research firm Gartner, told CNN it makes sense why Microsoft is aggressively pursuing AI, calling it “the secret sauce for applications built and running on the cloud.”

    But there could be risks for Microsoft in using and being associated with OpenAI’s technology. Both ChatGPT and DALL-E are trained on vast amounts of data in order to generate content. That has raised some concerns about the potential of these tools to perpetuate biases found in that data and to spread misinformation. For Microsoft, that could make integrating the tool into specific products problematic.

    “Systems such as ChatGPT can be rather unreliable, making up stuff as they go and giving different answers to the same questions – not to mention the sexist and racist biases,” Lobina said. Microsoft, he said, will likely want to “wait before letting GPT systems answer online search queries.”

    While ChatGPT has gained traction among users, a growing number of schools and teachers are also concerned about the immediate impact of ChatGPT on students and their ability to cheat on assignments. Integrating ChatGPT too quickly into Microsoft’s products could run the risk of schools rethinking their use of that software.

    Despite issues that could potentially create negative publicity for the companies associated with these tools, Microsoft clearly recognizes its opportunity to become an AI leader.

    “Microsoft continues to spend significant research and development on AI and innovations that require AI behind it, such as computer vision technologies, but [these technologies] are not as apparent to its users,” said Wong from Gartner. “This is the phenomenon of ‘everyday AI’ where AI is just in the background and customers take it for granted.”

    With the unveiling of ChatGPT, he said, OpenAI’s potential has been shown “to the masses.” The same may be true of Microsoft.


  • BuzzFeed’s CEO says AI could usher in a ‘new model for digital media,’ but warns against a ‘dystopian’ path | CNN Business



    New York
    CNN
     — 

    Over the holidays, while most media executives were perhaps looking to get a reprieve from work, Jonah Peretti was online, fully immersed in experimenting with artificial intelligence.

    The BuzzFeed co-founder and chief executive, who has always raced to test out the latest technologies, was familiar with AI and predictions of how it could one day revolutionize the media industry. In fact, BuzzFeed had dabbled in using it over the years.

    A version of this article first appeared in the “Reliable Sources” newsletter. Sign up for the daily digest chronicling the evolving media landscape here.

    But Peretti, sitting in his California home in late December, started probing how the developing robot writing technology could quickly be infused into the very DNA of BuzzFeed.

    In a phone interview Thursday, Peretti said that as he and a handful of colleagues prototyped how the technology could be used to enhance the site’s hallmark quizzes, interactive articles, and other types of content, he found himself genuinely having fun. “It started to feel like we were all playing,” Peretti recalled.

    That “playful work,” as he described it, soon “led to multiple Google docs full of the implications of the technology and how [BuzzFeed] could build this into our platform and how we could extend it to other formats.”

    Those efforts culminated in Peretti’s formal announcement on Thursday that BuzzFeed will work with ChatGPT creator OpenAI to assist in the creation of content for its audience and move artificial intelligence into the “core business.”

    Peretti said that he understood people might read the news and conclude that BuzzFeed was, in short, moving to replace humans with robots. But Peretti insisted that is not his vision for the technology, even as he predicted other companies will likely go down that dark path.

    “I think that there are two paths for AI in digital media,” Peretti said. “One path is the obvious path that a lot of people will do — but it’s a depressing path — using the technology for cost savings and spamming out a bunch of SEO articles that are lower quality than what a journalist could do, but a tenth of the cost. That’s one vision, but to me, that’s a depressing vision and a shortsighted vision because in the long run it’s not going to work.”

    “The other path,” Peretti continued, “which is the one that gets me really excited, is the new model for digital media that is more personalized, more creative, more dynamic — where really talented people who work at our company are able to use AI together and entertain and personalize more than you could ever do without AI.”

    Put more simply, Peretti said he envisions artificial intelligence being used to enhance the work of his employees, not replace them.

    The example the company provided is the BuzzFeed quiz. Typically, a human would write the questions and perhaps a dozen responses that would be delivered to the user based on their inputs. But, with AI, the staffer could write the questions and the software could spit out a highly personalized response for the user. In the supplied example, a user would take a quick quiz and the AI would write a short rom-com using the data provided.

    “We don’t have to train the AI to be as good as the BuzzFeed writers because we have the BuzzFeed writers, so they can inject language, ideas, cultural currency and write them into prompts and the format,” Peretti said. “And then the AI pulls it together and creates a new piece of content.”

    Peretti indicated that he had no interest in utilizing artificial intelligence to replace human journalists for authoring news articles, as the technology outlet CNET recently did with disastrous consequences (dozens of the outlet’s stories written by AI were riddled with errors that required corrections).

    “There’s the CNET path, and then there is the path that BuzzFeed is focused on,” Peretti said. “One is about costs and volume of content, and one is about ability.”

    “Even if there are a lot of bad actors who try to use AI to make content farms, it won’t win in the long run,” Peretti predicted. “I think the content farm model of AI will feel very depressing and dystopian.”


  • Video: How Elon Musk’s Twitter drama impacts Tesla and how ChatGPT can be useful to students on CNN Nightcap | CNN Business


    CNN’s Allison Morrow tells “Nightcap’s” Jon Sarlin that Elon Musk’s Twitter antics are damaging Tesla’s brand. Plus, high school teacher Cherie Shields argues that ChatGPT is an excellent teaching tool and schools are making a mistake if they ban the AI technology. To get the day’s business headlines sent directly to your inbox, sign up for the Nightcap newsletter.


  • ChatGPT passes exams from law and business schools | CNN Business




    CNN
     — 

    ChatGPT is smart enough to pass prestigious graduate-level exams – though not with particularly high marks.

    The powerful new AI chatbot tool recently passed law exams in four courses at the University of Minnesota and another exam at the University of Pennsylvania’s Wharton School of Business, according to professors at the schools.

    To test how well ChatGPT could generate answers on exams for the four courses, professors at the University of Minnesota Law School recently graded the tests blindly. After completing 95 multiple choice questions and 12 essay questions, the bot performed on average at the level of a C+ student, achieving a low but passing grade in all four courses.

    ChatGPT fared better during a business management course exam at Wharton, where it earned a B to B- grade. In a paper detailing the performance, Christian Terwiesch, a Wharton business professor, said ChatGPT did “an amazing job” at answering basic operations management and process-analysis questions but struggled with more advanced prompts and made “surprising mistakes” with basic math.

    “These mistakes can be massive in magnitude,” he wrote.

    The test results come as a growing number of schools and teachers express concerns about the immediate impact of ChatGPT on students and their ability to cheat on assignments. Some educators are now moving with remarkable speed to rethink their assignments in response to ChatGPT, even as it remains unclear how widespread use of the tool is among students and how harmful it could really be to learning.

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists. Some CEOs have even used it to write emails or do accounting work.

    ChatGPT is trained on vast amounts of online data in order to generate responses to user prompts. While it has gained traction among users, it has also raised some concerns, including about inaccuracies and its potential to perpetuate biases and spread misinformation.

    Jon Choi, one of the University of Minnesota law professors, told CNN the goal of the tests was to explore ChatGPT’s potential both to assist lawyers in their practice and to help students on exams – whether or not their professors permit it – because the questions often mimic the writing lawyers do in real life.

    “ChatGPT struggled with the most classic components of law school exams, such as spotting potential legal issues and deep analysis applying legal rules to the facts of a case,” Choi said. “But ChatGPT could be very helpful at producing a first draft that a student could then refine.”

    He argues human-AI collaboration is the most promising use case for ChatGPT and similar technology.

    “My strong hunch is that AI assistants will become standard tools for lawyers in the near future, and law schools should prepare their students for that eventuality,” he said. “Of course, if law professors want to continue to test simple recall of legal rules and doctrines, they’ll need to put restrictions in place like banning the internet during exams to enforce that.”

    Likewise, Wharton’s Terwiesch found the chatbot was “remarkably good” at modifying its answers in response to human hints, such as reworking answers after pointing out an error, suggesting the potential for people to work together with AI.

    In the short-term, however, discomfort remains with whether and how students should use ChatGPT. Public schools in New York City and Seattle, for example, have already banned students and teachers from using ChatGPT on the district’s networks and devices.

    Considering ChatGPT performed above average on his exam, Terwiesch told CNN he agrees restrictions should be put in place for students while they’re taking tests.

    “Bans are needed,” he said. “After all, when you give a medical doctor a degree, you want them to know medicine, not how to use a bot. The same holds for other skill certification, including law and business.”

    But Terwiesch believes this technology still ultimately has a place in the classroom. “If all we end up with is the same educational system as before, we have wasted an amazing opportunity that comes with ChatGPT,” he said.
