ReportWire

Tag: openai

  • ChatGPT Is Becoming a Game-Changer for Real Estate Agents

    Since its release in November of 2022 by OpenAI, ChatGPT has garnered worldwide attention for its efficient and precise ability to write emails, essays, poetry and even generate lines of code based on a prompt.


    Controversy has surrounded its uses and raised questions about whether or not using the tool counts as cheating, with some schools banning the program altogether.

    However, some professionals are grateful for its efficiency, with an increasing number of real estate agents boasting about how ChatGPT has made their lives easier, CNN reported.

“It saved me so much time,” JJ Johannes, a realtor in Iowa, told the outlet. He noted that although he had to make a few edits before publishing a listing made through ChatGPT, it has been an overall game-changer. “It’s not perfect but it was a great starting point. My background is in technology and writing something eloquent takes time. This made it so much easier.”

    Johannes isn’t the only one utilizing the new tool at work. Several other real estate professionals told CNN that they not only use ChatGPT to write listings but also to draft social media posts and legal documents.

    “I’ve been using it for more than a month, and I can’t remember the last time something has wowed me this much,” Andres Asion, a broker at Miami Real Estate Group, told the outlet.

    Although ChatGPT is free for the time being, OpenAI is considering a $42 monthly charge. However, the price won’t stop realtors like Asion, who says the program has made his job significantly easier.

    “I would easily pay $100 or $200 a year for something like this,” he told CNN. “I’d be crazy not to.”

    Madeline Garfinkle

  • How Google’s long period of online dominance could end | CNN Business


Washington (CNN) —

    For the better part of 15 years, Google has seemed like an unstoppable force, powered by the strength of its online search engine and digital advertising business. But both now look increasingly vulnerable.

    This week, the Justice Department accused Google of running an illegal monopoly in its online advertising business and called for parts of it to be broken up. The case comes a couple of years after the Trump administration filed a similar suit going after the tech giant’s dominance in search.

    Google said the Justice Department is “doubling down on a flawed argument” and that the latest suit “attempts to pick winners and losers in the highly competitive advertising technology sector.” If successful, however, both blockbuster cases could upend a business model that’s made Google the most powerful advertising company on the internet. It would be the most consequential antitrust victory against a tech giant since the US government took on Microsoft more than 20 years ago.

But even though the lawsuits drive at the heart of Google’s revenue machine, they could take years to play out. In the meantime, two other thorny issues are poised to determine Google’s future on a potentially shorter timeframe: the rise of generative artificial intelligence, and what appears to be an accelerating decline in Google’s online ad market share.

    Just days before the DOJ suit, Google announced plans to cut 12,000 employees amid a dramatic slowdown in its revenue growth, and as it works to refocus its efforts partly around AI.

    Google has long been synonymous with online searches; it was one of the first modern tech companies whose name would become a verb. But a new threat emerged late last year when OpenAI, an artificial intelligence research company, publicly released a viral new AI chatbot tool called ChatGPT.

Users of ChatGPT have showcased the bot’s ability to create poetry, draft legal documents, write code and explain complex ideas, with little more than a simple prompt. Trained on a vast amount of online data, ChatGPT can generate lengthy responses to open-ended questions – though it’s prone to some errors – and answer simple questions, such as “Who was the 25th president of the United States?”, that one might previously have had to scroll through Google search results to find.

While ChatGPT’s underlying technology has existed for some time, the fact that anyone can create an account and experiment with the tool has led to loads of hype for generative AI and made the technology’s potential instantly understandable to millions in a way that was only abstract before. It has also reportedly prompted Google’s management to declare a “code red” situation for its search business.

    “Google may be only a year or two away from total disruption. AI will eliminate the Search Engine Result Page, which is where they make most of their money,” Paul Buchheit, one of the creators of Gmail, tweeted last year. “Even if they catch up on AI, they can’t fully deploy it without destroying the most valuable part of their business!”

    If more users begin to rely on AI for their information needs, the argument goes, it could undercut Google’s search advertising, which is part of a $149 billion business segment at the company. Media coverage of ChatGPT has doubled down on this notion, with some outlets pitting ChatGPT against Google in head-to-head tests.

There are some reasons to doubt this nightmare scenario will play out for Google.

For one thing, Google operates at a vastly different scale. In November, Google’s website received more than 86 billion visits, compared with fewer than 300 million for ChatGPT, according to the traffic analysis website SimilarWeb. (ChatGPT was released publicly in late November.) For another, even in a world where Google provides specific, AI-generated responses to user queries, it could still analyze the queries to provide search advertising, just as it does today.

    Google has its own investments in highly sophisticated artificial intelligence. One of its AI-driven chat programs, LaMDA, even became a flashpoint last year after an engineer at the company claimed it had achieved sentience. (Google has disputed the claim and fired the engineer for breaches of company policy.)

    Google CEO Sundar Pichai has reportedly told employees that even though Google has similar capabilities to ChatGPT, the company has yet to commit to giving out AI-generated search responses because of the risk of providing inaccurate information, which could be detrimental to Google in the long run.

Google’s stance highlights both its incredible influence, as the most trusted search engine on earth, and one of the core problems of generative AI: due to the technology’s black-box design, it’s virtually impossible to know how it arrived at a specific result. For many people, and for many years to come, being able to evaluate different sources of information for themselves may trump the convenience of receiving a single answer.

All this has taken place against the backdrop of what seems to be an extended, multi-year decline in Google’s online advertising market share. Google’s position in digital advertising peaked in 2017 with 34.7% of the US market, according to third-party industry estimates, and is on pace to account for 28.8% this year.

    Google isn’t the only advertising giant to experience this trend. One-off factors like the pandemic and the war in Ukraine, as well as fears of a looming recession, have broadly affected the online advertising industry. Others, like Facebook-parent Meta, have been particularly susceptible to systemic changes such as Apple’s app privacy updates restricting the amount of information marketers can access about iOS users.

    But the decline also comes as Google faces new competition in the market. Rivals including Amazon, TikTok and even Apple have been attracting an increasing share of the digital advertising pie.

    Whatever the cause, Google’s advertising business, which is still massive, seems to face growing headwinds. And those headwinds could be exacerbated if some of the predictions about generative AI come to pass, or if the Justice Department’s lawsuits ultimately weaken Google’s grip on digital advertising.

    As part of the case, the US government has asked a federal court to unwind two acquisitions that allegedly helped cement a Google monopoly in advertising. Dismantling Google’s tightly integrated ads machine will restore competition and make it harder for Google to extract monopoly profits, according to the US government.

    This and other antitrust suits — though threatening in their own right — simply add pressure to the broader dilemma facing Google as it stares down a new era of potentially tumultuous technological change.


  • How Microsoft could use ChatGPT to supercharge its products | CNN Business



(CNN) —

    Is ChatGPT the new Clippy?

    Shortly after Microsoft confirmed plans this week to invest billions in OpenAI, the company behind the viral new AI chatbot tool ChatGPT, some people began joking on social media that the technology would help supercharge the much-hated, wide-eyed, paperclip-shaped virtual assistant.

    While Clippy may mostly be a thing of the past, the company’s move to double down on AI tools offers the promise of doing what Clippy never quite achieved: transforming how we work.

“There is a kernel of truth to the Clippy comparison,” said David Lobina, an artificial intelligence analyst at ABI Research. “Clippy was not based on AI – or machine learning – but ChatGPT is a rather sophisticated auto-completion tool, and in that sense it is a much better version of Clippy.”

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists. Some CEOs have even used it to write emails or do accounting work.

    For Microsoft, integrating the chatbot tool could make its core software products more powerful. Some potential use cases include writing lines of text for a PowerPoint presentation, drafting an essay in Word or doing automatic data entry in Excel spreadsheets. For Microsoft’s search engine Bing, ChatGPT could provide more personalized search results and better summarize web pages.

All of the above suggestions were generated by asking ChatGPT various forms of the question, “How could Microsoft integrate ChatGPT into its products?” Microsoft, for its part, has said little on possible integrations beyond recently announcing plans to add ChatGPT features to its cloud computing service.

    “Microsoft will deploy OpenAI’s models across our consumer and enterprise products and introduce new categories of digital experiences built on OpenAI’s technology,” Microsoft said in a press release this week, announcing the expanded partnership.

    When Microsoft first invested in OpenAI in 2019, CEO Satya Nadella said he believed artificial intelligence would be “one of the most transformative technologies of our time.” But it arguably wasn’t until last year, with multiple new releases from OpenAI, including ChatGPT and the powerful image generator DALL-E, that the significant potential of the partnership became widely apparent.

    Suddenly, Microsoft appears to be in a frontrunner position in Silicon Valley’s high-stakes AI race. It is now working closely with a company, OpenAI, and a product, ChatGPT, that have reportedly caught Google off guard and seemingly sparked some frustration from Meta’s chief AI scientist.

    “Microsoft is not a leader in AI research at present, but with this exclusive deal with OpenAI, they are going to be catapulted into the heart of things,” Lobina said.

    The OpenAI investment was announced days after Microsoft confirmed plans to lay off 10,000 employees as part of broader cost-cutting measures. Nadella said the company will continue to invest in “strategic areas for our future” and pointed to advances in AI as “the next major wave” of computing.

    Jason Wong, an analyst at market research firm Gartner, told CNN it makes sense why Microsoft is aggressively pursuing AI, calling it “the secret sauce for applications built and running on the cloud.”

    But there could be risks for Microsoft in using and being associated with OpenAI’s technology. Both ChatGPT and DALL-E are trained on vast amounts of data in order to generate content. That has raised some concerns about the potential of these tools to perpetuate biases found in that data and to spread misinformation. For Microsoft, that could make integrating the tool into specific products problematic.

    “Systems such as ChatGPT can be rather unreliable, making up stuff as they go and giving different answers to the same questions – not to mention the sexist and racist biases,” Lobina said. Microsoft, he said, will likely want to “wait before letting GPT systems answer online search queries.”

    While ChatGPT has gained traction among users, a growing number of schools and teachers are also concerned about the immediate impact of ChatGPT on students and their ability to cheat on assignments. Integrating ChatGPT too quickly into Microsoft’s products could run the risk of schools rethinking their use of that software.

    Despite issues that could potentially create negative publicity for the companies associated with these tools, Microsoft clearly recognizes its opportunity to become an AI leader.

    “Microsoft continues to spend significant research and development on AI and innovations that require AI behind it, such as computer vision technologies, but [these technologies] are not as apparent to its users,” said Wong from Gartner. “This is the phenomenon of ‘everyday AI’ where AI is just in the background and customers take it for granted.”

    With the unveiling of ChatGPT, he said, OpenAI’s potential has been shown “to the masses.” The same may be true of Microsoft.


  • Video: How Elon Musk’s Twitter drama impacts Tesla and how ChatGPT can be useful to students on CNN Nightcap | CNN Business

    CNN’s Allison Morrow tells “Nightcap’s” Jon Sarlin that Elon Musk’s Twitter antics are damaging Tesla’s brand. Plus, high school teacher Cherie Shields argues that ChatGPT is an excellent teaching tool and schools are making a mistake if they ban the AI technology. To get the day’s business headlines sent directly to your inbox, sign up for the Nightcap newsletter.


  • ChatGPT passes exams from law and business schools | CNN Business



(CNN) —

    ChatGPT is smart enough to pass prestigious graduate-level exams – though not with particularly high marks.

    The powerful new AI chatbot tool recently passed law exams in four courses at the University of Minnesota and another exam at University of Pennsylvania’s Wharton School of Business, according to professors at the schools.

    To test how well ChatGPT could generate answers on exams for the four courses, professors at the University of Minnesota Law School recently graded the tests blindly. After completing 95 multiple choice questions and 12 essay questions, the bot performed on average at the level of a C+ student, achieving a low but passing grade in all four courses.

    ChatGPT fared better during a business management course exam at Wharton, where it earned a B to B- grade. In a paper detailing the performance, Christian Terwiesch, a Wharton business professor, said ChatGPT did “an amazing job” at answering basic operations management and process-analysis questions but struggled with more advanced prompts and made “surprising mistakes” with basic math.

    “These mistakes can be massive in magnitude,” he wrote.

The test results come as a growing number of schools and teachers express concerns about the immediate impact of ChatGPT on students and their ability to cheat on assignments. Some educators are now moving with remarkable speed to rethink their assignments in response to ChatGPT, even as it remains unclear how widespread use of the tool is among students and how harmful it could really be to learning.

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists. Some CEOs have even used it to write emails or do accounting work.

    ChatGPT is trained on vast amounts of online data in order to generate responses to user prompts. While it has gained traction among users, it has also raised some concerns, including about inaccuracies and its potential to perpetuate biases and spread misinformation.

    Jon Choi, one of the University of Minnesota law professors, told CNN the goal of the tests was to explore ChatGPT’s potential to assist lawyers in their practice and to help students in exams, whether or not it’s permitted by their professors, because the questions often mimic the writing lawyers do in real life.

    “ChatGPT struggled with the most classic components of law school exams, such as spotting potential legal issues and deep analysis applying legal rules to the facts of a case,” Choi said. “But ChatGPT could be very helpful at producing a first draft that a student could then refine.”

    He argues human-AI collaboration is the most promising use case for ChatGPT and similar technology.

    “My strong hunch is that AI assistants will become standard tools for lawyers in the near future, and law schools should prepare their students for that eventuality,” he said. “Of course, if law professors want to continue to test simple recall of legal rules and doctrines, they’ll need to put restrictions in place like banning the internet during exams to enforce that.”

    Likewise, Wharton’s Terwiesch found the chatbot was “remarkably good” at modifying its answers in response to human hints, such as reworking answers after pointing out an error, suggesting the potential for people to work together with AI.

    In the short-term, however, discomfort remains with whether and how students should use ChatGPT. Public schools in New York City and Seattle, for example, have already banned students and teachers from using ChatGPT on the district’s networks and devices.

    Considering ChatGPT performed above average on his exam, Terwiesch told CNN he agrees restrictions should be put in place for students while they’re taking tests.

    “Bans are needed,” he said. “After all, when you give a medical doctor a degree, you want them to know medicine, not how to use a bot. The same holds for other skill certification, including law and business.”

    But Terwiesch believes this technology still ultimately has a place in the classroom. “If all we end up with is the same educational system as before, we have wasted an amazing opportunity that comes with ChatGPT,” he said.


  • Education Nonprofits Release Free Tool to Detect ChatGPT-Generated Student Work

    Quill.org and CommonLit.org launched AIWritingCheck.org, a free tool that allows educators to determine whether a text passage was created by humans or AI.

    Press Release


    Jan 25, 2023 12:30 EST

Education technology nonprofits Quill.org and CommonLit.org have launched AIWritingCheck.org to help teachers determine whether writing was human- or AI-generated. At www.aiwritingcheck.org, teachers may enter a passage of text and, with the click of a button, learn whether the text was likely generated by a student or a computer.

    ChatGPT’s launch has prompted discussion about how to best equip teachers and students with tools to preserve academic integrity and protect the critically important skill of learning how to write. Quill and CommonLit built this new tool to be free, scalable, and user-friendly. AIWritingCheck.org requires no account or subscription and can process up to 100,000 essays per day, with an accuracy rate of 80-90%. 

    View & Download the Demo Video: https://www.loom.com/share/8bc43ec4dd9a40b3b3cdd78c92394668

    Alongside the launch of AI Writing Check, the nonprofits developed a toolkit to help educators utilize AI detection websites responsibly. The Quill and CommonLit teams are committed to supporting teachers in navigating the changing landscape and fast developments in AI, acting as translators among the tech, edtech, and K-12 communities. 

    View the toolkit: https://bit.ly/ai-check-toolkit

    Peter Gault, Quill.org’s Founder and Executive Director, said, “As tools like ChatGPT become ubiquitous and more advanced over time, many fear that millions of students will stop engaging in the critically important intellectual exercise of carefully reading a text, building a response, applying the rules of grammar, and revising their writing with feedback. While Quill is built on top of AI, we believe that AI should be used to encourage students to do more writing, not for the AI to write for the students.”

    Michelle Brown, CommonLit’s Founder and Chief Executive Officer, said, “The shortcut of using ChatGPT to do the thinking for you is not one that children will so easily overcome. In K-12, it’s the exercise of writing and the thinking that goes into organizing your thoughts that matters – not just the output. Education isn’t just about creating economic value; it’s about human development. It’s about our kids, and building their skills and confidence to become leaders who can communicate and leverage advanced tools.”

    Quill.org and CommonLit.org collectively serve more than 10 million economically disadvantaged students each year with free educational materials to advance literacy, representing 20% of all K-12 students. Quill.org’s mission is to help every low-income student in the United States become a strong writer and critical thinker through free online tools that help teachers by using artificial intelligence to automatically grade and provide feedback on student writing. CommonLit’s nonprofit mission is to unlock the potential of every child through reading, writing, speaking, listening, problem-solving, and collaboration.

    Source: Quill


  • Microsoft quarterly profit falls 12% but cloud computing business shows strength | CNN Business



(CNN) —

    Microsoft on Tuesday posted weaker-than-expected revenue and a double-digit percentage drop in profit for the final three months of last year amid broader economic uncertainty and reduced demand for personal computers and software.

    The tech giant reported revenue of $52.7 billion for the quarter, a modest 2% increase from the year prior but slightly less than analysts had expected. It reported net income of $16.4 billion, a 12% decline from the year prior.

    The earnings results come at a turbulent moment for Microsoft, and the tech industry as a whole. Microsoft said last week that it plans to lay off 10,000 employees as part of broader cost-cutting measures. In his explanation of the cuts, CEO Satya Nadella pointed to changing demand for digital services years into the pandemic as well as looming recession fears.

    Demand for personal computers, and the Microsoft operating systems that power them, has pulled back after experiencing a boom early in the pandemic. Consulting firm Gartner said earlier this month that worldwide PC shipments fell more than 28% in the fourth quarter of 2022 compared to the same period the prior year. This marked the largest quarterly shipment decline since Gartner began tracking the PC market in the mid-90s.

    On Tuesday, Microsoft reported revenue declines from its Windows OEM operations and from its Xbox content and services lines. Microsoft also said it would incur $800 million in severance expenses from the layoffs announced this month, as well as charges from “changes to our hardware portfolio, and costs related to lease consolidation activities.”

    But the earnings report had some bright spots. Revenue from its cloud computing division, a key area of focus for Microsoft in recent years, increased 22% from the prior year. An analyst at Evercore described the results as “a sigh of relief.”

    Shares of Microsoft rose 4% in after-hours trading Tuesday on the news.

    “The next major wave of computing is being born, as the Microsoft Cloud turns the world’s most advanced AI models into a new computing platform,” CEO Satya Nadella said in a statement accompanying the results. “We are committed to helping our customers use our platforms and tools to do more with less today and innovate for the future in the new era of AI.”

    Earlier this week, Microsoft confirmed it is making a “multibillion dollar” investment into OpenAI, the company behind the viral AI-powered chatbot tool ChatGPT. The deepening partnership between the two companies – Microsoft was an early investor in OpenAI – could help catapult Microsoft as an AI leader and pave the way for the company to incorporate elements of ChatGPT into some of its hallmark applications, such as Outlook and Word.

    In his memo to staffers announcing the job cuts, Nadella said the company will continue to invest in “strategic areas for our future” and pointed to advances in AI as “the next major wave” of computing.


  • Microsoft confirms it’s investing billions in ChatGPT creator OpenAI | CNN Business



(CNN) —

    Microsoft on Monday confirmed it is making a “multibillion dollar” investment in OpenAI, the company behind the viral new chatbot tool called ChatGPT.

    Microsoft, an early investor in OpenAI, said it plans to expand its existing partnership with the company as part of a greater effort to add more artificial intelligence to its suite of products. In a separate blog post, OpenAI said the multi-year investment will be used to “develop AI that is increasingly safe, useful, and powerful.”

    In late November, OpenAI opened up access to ChatGPT, an AI-powered chatbot that can provide lengthy, thoughtful and thorough responses to user prompts and questions. Its responses, while sometimes inaccurate, have stunned users, including academics and some in the tech industry.

    The investment comes days after Microsoft announced plans to lay off 10,000 employees as part of broader cost-cutting measures, making it the latest tech company to reduce staff because of growing economic uncertainty.

    Microsoft CEO Satya Nadella said that the company was not immune to a weaker global economy, but he also said the company will continue to invest in “strategic areas for our future” and pointed to advances in AI as “the next major wave” of computing.

    The investment in OpenAI could catapult Microsoft as an AI leader and ultimately pave the way for the company to incorporate ChatGPT into some of its hallmark applications, such as Word, PowerPoint and Outlook.

As a result of its existing exclusive deal with OpenAI, Microsoft recently said it would soon add ChatGPT features to its cloud computing service, Azure. If ChatGPT becomes available on that service, businesses could use the tools directly within their own apps and services, too.

    Ahead of Monday’s announcement, David Lobina, an artificial intelligence analyst at ABI Research, told CNN there are big benefits of a further Microsoft investment for OpenAI, too.

    “OpenAI is looking to monetize their systems, considering the huge compute costs of creating these models, and their partnership with Microsoft can be an easy way to do so,” he said.


  • ChatGPT passed a Wharton MBA exam and it’s still in its infancy. One professor is sounding the alarm

    ChatGPT has alarmed high-school teachers, who worry that students will use it—or other new artificial-intelligence tools—to cheat on writing assignments. But the concern doesn’t stop at the high-school level. At the University of Pennsylvania’s prestigious Wharton School of Business, professor Christian Terwiesch has been wondering what such A.I. tools mean for MBA programs. 

    This week, Terwiesch released a research paper in which he documented how ChatGPT performed on the final exam of a typical MBA core course, Operations Management.

    The A.I. chatbot, he wrote, “does an amazing job at basic operations management and process analysis questions including those that are based on case studies.”

It did have shortcomings, he noted, including an inability to handle “more advanced process analysis questions.”

    But ChatGPT, he determined, “would have received a B to B- grade on the exam.” 

    Elsewhere, it has also “performed well in the preparation of legal documents and some believe that the next generation of this technology might even be able to pass the bar exam,” he noted.

    ChatGPT ‘is not going away’

    Of course, ChatGPT is “just in its infancy,” as billionaire entrepreneur Mark Cuban noted this week in an interview with Not a Bot, an A.I. newsletter. He added, “Imagine what GPT 10 is going to look like.”

    Andrew Karolyi, dean of Cornell University’s SC Johnson College of Business, agrees, telling the Financial Times this week: “One thing we all know for sure is that ChatGPT is not going away. If anything, these AI techniques will continue to get better and better. Faculty and university administrators need to invest to educate themselves.”

    That’s especially true with software giant Microsoft mulling a $10 billion investment in OpenAI, the venture behind ChatGPT, after an initial $1 billion investment a few years ago. And Google parent Alphabet is responding by plowing resources into similar tools to answer the challenge, which it fears could hurt its search dominance.

    So people will be using these tools, like it or not, including MBA students.

“I’m of the mind that AI isn’t going to replace people, but people who use AI are going to replace people,” Kara McWilliams, head of ETS Product Innovation Labs, which offers a tool that can identify AI-generated answers, told the Times.

Terwiesch, in introducing his paper, noted the effect that electronic calculators had on the corporate world—and suggested that something similar could happen with tools like ChatGPT.

    “Prior to the introduction of calculators and other computing devices, many firms employed hundreds of employees whose task it was to manually perform mathematical operations such as multiplications or matrix inversions,” he wrote. “Obviously, such tasks are now automated, and the value of the associated skills has dramatically decreased. In the same way any automation of the skills taught in our MBA programs could potentially reduce the value of an MBA education.” 

    Steve Mollman

  • CEOs at Davos are using ChatGPT to write work emails | CNN Business


    Davos, Switzerland (CNN) —

    Jeff Maggioncalda, the CEO of online learning provider Coursera, said that when he first tried ChatGPT, he was “dumbstruck.” Now, it’s part of his daily routine.

    He uses the powerful new AI chatbot tool to bang out emails. He uses it to craft speeches “in a friendly, upbeat, authoritative tone with mixed cadence.” He even uses it to help break down big strategic questions — such as how Coursera should approach incorporating artificial intelligence tools like ChatGPT into its platform.

    “I use it as a writing assistant and as a thought partner,” Maggioncalda told CNN.

    Maggioncalda is one of thousands of business leaders, politicians and academics gathered in Davos, Switzerland this week for the World Economic Forum. On the agenda is an array of pressing issues weighing on the global economy, from the energy crisis to the war in Ukraine and the transformation of trade. But what many can’t stop talking about is ChatGPT.

    The tool, which artificial intelligence research company OpenAI made available to the general public late last year, has sparked conversations about how “generative AI” services — which can turn prompts into original essays, stories, songs and images after training on massive online datasets — could radically transform how we live and work.

    Some claim it will put artists, tutors, coders, and writers (yes, even journalists) out of a job. Others are more optimistic, postulating that it will allow employees to tackle to-do lists with greater efficiency or focus on higher-level tasks.

    It’s a debate that’s captivated many C-suite leaders, often after they tested the tool themselves.

    Christian Lanng, CEO of digital supply chain platform Tradeshift, said he was blown away by the capabilities displayed by ChatGPT, even after years of exposure to Silicon Valley hype.

    He’s also used the platform to write emails and claims no one has noticed the difference. He even had it perform some accounting work, a service for which Tradeshift currently employs an expensive professional services firm.

    To date, ChatGPT has mostly been treated as a curiosity and a harbinger of what’s to come. It relies on OpenAI’s GPT-3.5 language model, which is already out of date; the more advanced GPT-4 version is in the works and could be released this year.

    Critics — of which there are many — are quick to point out that it makes mistakes, is painfully neutral and displays a clear lack of human empathy. One tech news publication, for example, was forced to issue several significant corrections for an article written by ChatGPT. And New York City public schools have banned students and teachers from using it.

    Yet the software, or similar programs from competitors, could soon take the business world by storm.

    Microsoft (MSFT), an investor in OpenAI, announced this week that the company’s tools — including GPT-3.5, programming assistant Codex and image generator DALL-E 2 — are now generally available to business clients in a package called Azure OpenAI Service. ChatGPT is being added soon.

    “I see these technologies acting as a copilot, helping people do more with less,” Microsoft CEO Satya Nadella told an audience in Davos this week.

    Maggioncalda has a similar perspective. He wants to integrate generative AI into Coursera’s offering this year, seeing an opportunity to make learning more interactive for students who don’t have access to in-person classroom instruction or one-on-one time with subject matter experts.

    He acknowledges that challenges such as preventing cheating and ensuring accuracy need to be addressed. And he’s worried that increasing use of generative AI may not be wholly good for society — people may become less agile thinkers, for example, since the act of writing helps people process complex ideas and hone takeaways.

    Still, he sees the need to move quickly.

    “Anybody who doesn’t use this will shortly be at a severe disadvantage. Like, shortly. Like, very soon,” Maggioncalda said. “I’m just thinking about my cognitive ability with this tool. Versus before, it’s a lot higher, and my efficiency and productivity is way higher.”


  • What To Know About ChatGPT


    The artificially intelligent chatbot ChatGPT has recently taken the internet by storm, with both praise and concern for its capability to mimic human writing. The Onion tells you everything you need to know about ChatGPT.

    Q: What is machine learning?
    A: A process by which machines use data-driven models to undermine some previously functional aspect of human life.

    Q: Who made ChatGPT? 
    A: OpenAI, a research laboratory established by some of Silicon Valley’s most forward-thinking bots.

    Q: How does ChatGPT work? 
    A: It smokes a fat joint and just lets the words flow, man.

    Q: How realistic are ChatGPT’s responses?
    A: Very realistic. Just like most people, it doesn’t really care what you say and is focused on accomplishing its own thing.

    Q: Is ChatGPT going to take my job? 
    A: Even AI doesn’t want your job.

    Q: Can students use ChatGPT to write their essays?
    A: Yes, ChatGPT has no problem reproducing the error-ridden dreck typical of the American student.

    Q: How does it sound so convincingly human online?
    A: It helps that humans have been gradually sounding less human since the arrival of the internet.

    Q: Will this put writers out of work?
    A: Writers were out of work long before this.

    Q: How will it improve human life? 
    A: It will free up tedious hours spent building critical thinking skills and fostering human relationships for more rewarding activities like streaming shows and buying things.

    Q: Will The Onion ever use ChatGPT to produce its award-winning journalism?
    A: RUNTIME ERROR. REBOOT STACK.


  • New York City public schools ban access to AI tool that could help students cheat | CNN Business



    New York (CNN) —

    New York City public schools will ban students and teachers from using ChatGPT, a powerful new AI chatbot tool, on the district’s networks and devices, an official confirmed to CNN on Thursday.

    The move comes amid growing concerns that the tool, which generates eerily convincing responses and even essays in response to user prompts, could make it easier for students to cheat on assignments. Some also worry that ChatGPT could be used to spread inaccurate information.

    “Due to concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content, access to ChatGPT is restricted on New York City Public Schools’ networks and devices,” Jenna Lyle, the deputy press secretary for the New York public schools, said in a statement. “While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.”

    Although the chatbot is restricted under the new policy, New York City public schools can request to gain specific access to the tool for AI and tech-related educational purposes.

    Education publication ChalkBeat first reported the news.

    New York City appears to be one of the first major school districts to crack down on ChatGPT, barely a month after the tool first launched. Last month, the Los Angeles Unified School District moved to preemptively block the site on all networks and devices in their system “to protect academic honesty while a risk/benefit assessment is conducted,” a spokesperson for the district told CNN this week.

    While there are genuine concerns about how ChatGPT could be used, it’s unclear how widely adopted it is among students. Other districts, meanwhile, appear to be moving more slowly.

    Peter Feng, the public information officer for the South San Francisco Unified School District, said the district is aware of the potential for its students to use ChatGPT but it has “not yet instituted an outright ban.” Meanwhile, a spokesperson for the School District of Philadelphia said it has “no knowledge of students using the ChatGPT nor have we received any complaints from principals or teachers.”

    In a statement shared with CNN after publication, a spokesperson for OpenAI, the artificial intelligence research lab behind the tool, said it made ChatGPT available as a research preview to learn from real-world use. The spokesperson called that step a “critical part of developing and deploying capable, safe AI systems.”

    “We are constantly incorporating feedback and lessons learned,” the spokesperson added.

    The company said it aims to work with educators on ways to help teachers and students benefit from artificial intelligence. “We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system,” the spokesperson said.

    OpenAI opened up access to ChatGPT in late November. It is able to provide lengthy, thoughtful and thorough responses to questions and prompts, ranging from factual questions like “Who was the president of the United States in 1955” to more open-ended questions such as “What’s the meaning of life?”

    The tool stunned users, including academics and some in the tech industry. ChatGPT is a large language model trained on a massive trove of information online to create its responses. It comes from the same company behind DALL-E, which generates a seemingly limitless range of images in response to prompts from users.

    ChatGPT went viral just days after its launch. OpenAI co-founder Sam Altman, a prominent Silicon Valley investor, said on Twitter in early December that ChatGPT had topped one million users.

    But many educators fear students will use the tool to cheat on assignments. One user, for example, fed ChatGPT an AP English exam question, and it responded with a five-paragraph essay about Wuthering Heights. Another user asked the chatbot to write an essay about the life of William Shakespeare four times and received a unique version from the same prompt each time.

    Darren Hicks, assistant professor of philosophy at Furman University, previously told CNN it will be harder to prove when a student misuses ChatGPT than with other forms of cheating.

    “In more traditional forms of plagiarism – cheating off the internet, copy pasting stuff – I can go and find additional proof, evidence that I can then bring into a board hearing,” he said. “In this case, there’s nothing out there that I can point to and say, ‘Here’s the material they took.’”

    “It’s really a new form of an old problem where students would pay somebody or get somebody to write their paper for them – say an essay farm or a friend that has taken a course before,” Hicks added. “This is like that only it’s instantaneous and free.”

    Feng, from the South San Francisco Unified School District, told CNN that “some teachers have responded to the rise of AI text generators by using tools of their own to check whether work submitted by students has been plagiarized or generated via AI.”

    Some companies, such as Turnitin – a detection tool that thousands of school districts use to scan the internet for signs of plagiarism – are now looking into how their software could detect AI-generated text in student submissions.

    Hicks said teachers will need to rethink assignments so they couldn’t be easily written by the tool. “The bigger issue,” Hicks added, “is going to be administrations who have to figure out how they’re going to adjudicate these kinds of cases.”

    – CNN’s Abby Phillip contributed to this report.


  • AI-Controlled VTuber Streams Games On Twitch, Denies Holocaust


    A Vtuber apologizes for not always being able to make her viewers smile.

    Screenshot: Vedal / Twitch / Kotaku

    Neuro-sama is a VTuber who streams Minecraft and the rhythm game Osu! on Twitch. But unlike most anime avatars, she’s controlled by an artificial intelligence program rather than a human being. That makes her catnip for the denizens of Twitch chat, who can prompt her to respond with all sorts of questions ranging from innocent inquiries to 4chan trolling. Within the first few streams, someone had already asked Neuro-sama about the Holocaust. “I’m not sure if I believe it,” she said.

    That was one of the more infamous clips that went viral online near the end of last month. Asked what she thought of women’s rights, she said they didn’t exist. How would she solve philosophy’s famous trolley ethical conundrum? Throw a fat person on the tracks. Often, however, she’ll go for long stretches without getting tempted by the chat into controversial or hateful remarks. In that way she’s an impressive simulacrum of a Twitch streamer straddling the chasm between repetitive banter and edgelord antics.

    “The controversial things she says is due to the fact that she tries to make witty and comical remarks about whatever is said in chat, aligning AIs with human values is an ongoing area of research,” Neuro-sama’s creator, a game programmer named Vedal, told Kotaku. “To counter this, I’ve worked hard since the first few streams to improve the strength of the filters used for her. Data that she learns on is also manually curated to mitigate negative biases. We now also have a team of people moderating twitch chat who check everything she says.”

    Neuro-sama isn’t Vedal’s first AI. In fact, a version of her was first created years ago with the explicit purpose of learning to play Osu!, a long-running free-to-play rhythm game where you click shapes on a screen to the beat of anime music. While those sessions were also streamed, there was no avatar or interactive personality. Following last year’s surge of big-name VTubers, Neuro-sama builds on the Osu! skills of the original project with a fully voiced Twitch performance that can riff with the audience.

    It’s perfect timing given the internet’s recent love affair with the OpenAI-powered ChatGPT chatbot, where users could submit hyper-specific text prompts and receive uncannily artful responses in return. Vedal wouldn’t go into detail about how Neuro-sama learns and communicates, other than to confirm she relies on a large language model, which has been “trained on a large amount of text on the internet.” While not as sophisticated, the effect has been convincing enough to net Neuro-sama thousands of viewers per stream.

    She also recently defeated the top-ranked Osu! player, Mrekk, on December 28, though some fans of the game debate whether the human opponent was ill-served by the song selection. Neuro-sama has since moved on to Minecraft, a much more complex game with far more possibilities for unexpected moments as players ask whether Melee is the best Super Smash Bros. and whether she’ll step on them. The moderation tools are apparently better now too.

    “She picks what to respond to within a limited window,” Vedal said. “However it should be noted that she will not talk about the Holocaust as the filters have been improved.” Instead, she’s currently trying to learn how to sing.

    Ethan Gach

  • Elon Musk’s history with OpenAI—the maker of AI chatbot ChatGPT—as told by ChatGPT itself


    ChatGPT has been making waves this week following its test release by OpenAI, the company behind it. The artificial intelligence chatbot has evoked amazed, amused, and concerned reactions to it and generally created major buzz on social media. Many have speculated ChatGPT will disrupt Google’s search business. It can also debug code, write in a famous author’s voice, and help students cheat, among many other things.

    The buzz will likely ramp up even more when OpenAI releases a superior next version of the AI chatbot, reportedly sometime next year.

    Speaking of buzz, few people have been generating more of it lately than Elon Musk, who leads Tesla, SpaceX, and now Twitter, among other companies. As it turns out, Musk has ties to OpenAI, including as an original backer, and has been involved in both supporting artificial intelligence and warning about its dangers.

    This month Musk called ChatGPT “scary good” and warned, “We are not far from dangerously strong AI.” 

    Musk resigned from OpenAI’s board of directors in 2018, citing possible conflicts of interest due to Tesla’s own AI work. In 2019, OpenAI changed from a nonprofit to a “capped profit” corporation, a hybrid of a for-profit and nonprofit, and received a $1 billion investment from Microsoft. By 2020, Musk tweeted, “I have no control & only very limited insight into OpenAI,” adding his confidence in its safety was “not high.”

    This month he tweeted: “I just learned that OpenAI had access to Twitter database for training. I put that on pause for now. Need to understand more about governance structure & revenue plans going forward. OpenAI was started as open-source & non-profit. Neither are still true.”

    But we’ll let ChatGPT itself talk more about Musk’s history with OpenAI. The rest of this article was written by the AI chatbot, generated in seconds as answers to our questions:

    Elon Musk has had a long relationship with OpenAI, dating back to the organization’s founding in 2015. He was one of the original backers of the organization, providing both financial and strategic support. Over the years, he has remained an advocate for the organization and its mission to promote and develop friendly AI.

    Based on his public statements and actions, it appears that he has some concerns about the potential dangers of advanced AI and its potential impact on society. He has often spoken about the need for AI to be developed and used in a responsible and ethical manner, and has even founded organizations such as OpenAI to help advance this goal.

    Elon Musk has publicly expressed concerns about the potential dangers of advanced AI. In particular, he has warned about the risk of AI being used for malicious purposes, such as to develop weapons or to interfere with elections. He has also expressed concern about the potential for AI to surpass human intelligence and become a threat to humanity.

    Tesla uses AI in several of its products and services. For example, the company’s Autopilot feature uses AI to help vehicles navigate roads and highways with limited or no input from the driver. The company’s cars also use AI for tasks such as lane detection, adaptive cruise control, and automatic emergency braking. Additionally, Tesla is using AI in its manufacturing processes to help improve efficiency and productivity.

    It is possible that Elon Musk has hired employees from OpenAI, but I do not have access to that information as I am a large language model trained by OpenAI and do not have the ability to browse the internet. OpenAI is an independent organization, and while Elon Musk has been involved with the organization in the past, he is not directly involved in its day-to-day operations or decision-making. 


    Steve Mollman

  • Anyone can now use powerful AI tools to make images. What could possibly go wrong? | CNN Business




    CNN Business —

    If you’ve ever wanted to use artificial intelligence to quickly design a hybrid between a duck and a corgi, now is your time to shine.

    On Wednesday, OpenAI announced that anyone can now use the most recent version of its AI-powered DALL-E tool to generate a seemingly limitless range of images just by typing in a few words, months after the startup began gradually rolling it out to users.

    The move will likely expand the reach of a new crop of AI-powered tools that have already attracted a wide audience and challenged our fundamental ideas of art and creativity. But it could also add to concerns about how such systems could be misused when widely available.

    “Learning from real-world use has allowed us to improve our safety systems, making wider availability possible today,” OpenAI said in a blog post. The company said it has also strengthened the ways it rebuffs users’ attempts to make its AI create “sexual, violent and other content.”

    There are now three well-known, immensely powerful AI systems open to the public that can take in a few words and spit out an image. In addition to DALL-E 2, there’s Midjourney, which became publicly available in July, and Stable Diffusion, which was released to the public in August by Stability AI. All three offer some free credits to users who want to get a feel for making images with AI online; generally, after that, you have to pay.

    These so-called generative AI systems are already being used for experimental films, magazine covers, and real-estate ads. An image generated with Midjourney recently won an art competition at the Colorado State Fair, and caused an uproar among artists.

    In just months, millions of people have flocked to these AI systems. More than 2.7 million people belong to Midjourney’s Discord server, where users can submit prompts. OpenAI said in its Wednesday blog post that it has more than 1.5 million active users, who have collectively been making more than 2 million images with its system each day. (It should be noted that it can take many tries to get an image you’re happy with when you use these tools.)

    Many of the images that have been created by users in recent weeks have been shared online, and the results can be impressive. They range from otherworldly landscapes and a painting of French aristocrats as penguins to a faux vintage photograph of a man walking a tardigrade.

    The ascension of such technology, and the increasingly complicated prompts and resulting images, has impressed even longtime industry insiders. Andrej Karpathy, who stepped down from his post as Tesla’s director of AI in July, said in a recent tweet that after getting invited to try DALL-E 2 he felt “frozen” when first trying to decide what to type in and eventually typed “cat”.

    CNN's Rachel Metz created this half-duck, half-corgi with AI image generator Stable Diffusion.

    “The art of prompts that the community has discovered and increasingly perfected over the last few months for text -> image models is astonishing,” he said.

    But the popularity of this technology comes with potential downsides. Experts in AI have raised concerns that the open-ended nature of these systems — which makes them adept at generating all kinds of images from words — and their ability to automate image-making means they could automate bias on a massive scale. A simple example of this: When I fed the prompt “a banker dressed for a big day at the office” to DALL-E 2 this week, the results were all images of middle-aged white men in suits and ties.

    “They’re basically letting the users find the loopholes in the system by using it,” said Julie Carpenter, a research scientist and fellow in the Ethics and Emerging Sciences Group at California Polytechnic State University, San Luis Obispo.

    These systems also have the potential to be used for nefarious purposes, such as stoking fear or spreading disinformation via images that are altered with AI or entirely fabricated.

    There are some limits for what images users can generate. For example, OpenAI has DALL-E 2 users agree to a content policy that tells them to not try to make, upload, or share pictures “that are not G-rated or that could cause harm.” DALL-E 2 also won’t run prompts that include certain banned words. But manipulating verbiage can get around limits: DALL-E 2 won’t process the prompt “a photo of a duck covered in blood,” but it will return images for the prompt “a photo of a duck covered in a viscous red liquid.” OpenAI itself mentioned this sort of “visual synonym” in its documentation for DALL-E 2.
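    The weakness of exact-word filtering described above is easy to illustrate: a blocklist catches the literal banned word but not a paraphrase. The following toy sketch is purely hypothetical — the word list and function name are invented for illustration and are not OpenAI’s actual filter:

```python
# Hypothetical sketch of a naive exact-word blocklist (not OpenAI's
# actual system; the word list and names are invented for illustration).
BANNED_WORDS = {"blood", "gore"}

def is_allowed(prompt: str) -> bool:
    """Allow a prompt unless it contains an exact banned word."""
    words = {w.strip(".,!?").lower() for w in prompt.split()}
    return not (words & BANNED_WORDS)

# The literal phrase is caught, but the "visual synonym" slips through.
print(is_allowed("a photo of a duck covered in blood"))                # False
print(is_allowed("a photo of a duck covered in a viscous red liquid")) # True
```

    This is why filtering on wording alone cannot block every harmful request — the “visual synonym” OpenAI itself mentions in its DALL-E 2 documentation describes the same prompt expressed in words the filter never sees.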

    Chris Gilliard, a Just Tech Fellow at the Social Science Research Council, thinks the companies behind these image generators are “severely underestimating” the “endless creativity” of people who are looking to do ill with these tools.

    “I feel like this is yet another example of people releasing technology that’s sort of half-baked in terms of figuring out how it’s going to be used to cause chaos and create harm,” he said. “And then hoping that later on maybe there will be some way to address those harms.”

    To sidestep potential issues, some stock-image services are banning AI images altogether. Getty Images confirmed to CNN Business on Wednesday that it will not accept image submissions that were created with generative AI models, and will take down any submissions that used those models. This decision applies to its Getty Images, iStock, and Unsplash image services.

    “There are open questions with respect to the copyright of outputs from these models and there are unaddressed rights issues with respect to the underlying imagery and metadata used to train these models,” the company said in a statement.

    But actually catching and restricting these images could prove to be a challenge.


  • OpenAI CEO Sam Altman to testify before Congress | CNN Business



    Washington (CNN) —

    OpenAI CEO Sam Altman will testify before Congress next Tuesday as lawmakers increasingly scrutinize the risks and benefits of artificial intelligence, according to a Senate Judiciary subcommittee.

    During Tuesday’s hearing, lawmakers will question Altman for the first time since OpenAI’s chatbot, ChatGPT, took the world by storm late last year.

    The groundbreaking generative AI tool has led to a wave of new investment in AI, prompting a scramble among US policymakers who have called for guardrails and regulation amid fears of AI’s misuse.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”


  • ‘Serious concerns’: Top companies raise alarm over Europe’s proposed AI law | CNN Business



    Dortmund, Germany (CNN) —

    Dozens of Europe’s top business leaders have pushed back on the European Union’s proposed legislation on artificial intelligence, warning that it could hurt the bloc’s competitiveness and spur an exodus of investment.

    In an open letter sent to EU lawmakers Friday, C-suite executives from companies including Siemens (SIEGY), Carrefour (CRERF), Renault (RNLSY) and Airbus (EADSF) raised “serious concerns” about the EU AI Act, the world’s first comprehensive AI rules.

    Other prominent signatories include big names in tech, such as Yann LeCun, chief AI scientist of Meta (FB), and Hermann Hauser, founder of British chipmaker ARM.

    “In our assessment, the draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” the group of more than 160 executives said in the letter.

    They argue that the draft rules go too far, especially in regulating generative AI and foundation models, the technology behind popular platforms such as ChatGPT.

    Since the craze over generative AI began this year, technologists have warned of the potential dark side of systems that allow people to use machines to write college essays, take academic tests and build websites. Last month, hundreds of top experts warned about the risk of human extinction from AI, saying mitigating that possibility “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The EU proposal applies a broad brush to such software “regardless of [its] use cases,” and could push innovative companies and investors out of Europe because they would face high compliance costs and “disproportionate liability risks,” according to the executives.

    “Such regulation could lead to highly innovative companies moving their activities abroad” and investors withdrawing their capital from European AI, the group wrote.

    “The result would be a critical productivity gap between the two sides of the Atlantic.”

    The executives are calling for policymakers to revise the terms of the bill, which was agreed upon by European Parliament lawmakers earlier this month and is now being negotiated with EU member states.

    “In a context where we know very little about the real risks, the business model, or the applications of generative AI, European law should confine itself to stating broad principles in a risk-based approach,” the group wrote.

    The business leaders called for a regulatory board of experts to oversee these principles and ensure they can be continuously adapted to changes in the fast-moving technology.

    The group also urged lawmakers to work with their US counterparts, noting that regulatory proposals had also been made in the United States. EU lawmakers should try to “create a legally binding level playing field,” the executives wrote.

    If such action isn’t taken and Europe is constrained by regulatory demands, it could hurt the region’s international standing, the group suggested.

    “Like the invention of the Internet or the breakthrough of silicon chips, generative AI is the kind of technology that will be decisive for the performance capacity and therefore the significance of different regions,” it said.

    Tech experts have increasingly called for greater regulation of AI as it becomes more widely used. In recent months, the United States and China have also laid out plans to regulate the technology. Sam Altman, CEO of ChatGPT maker OpenAI, has used high-profile trips around the world in recent weeks to call for coordinated international regulation of AI.

    The EU rules are the world’s “first ever attempt to enact” legally binding rules that apply to different areas of AI, according to the European Parliament.

    Negotiators of the AI Act hope to reach an agreement before the end of the year, and once the final rules are adopted by the European Parliament and EU member states, the act will become law.

    As they stand now, the rules would ban AI systems deemed to be harmful, including real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those in China.

    The Act also outlines transparency requirements for AI systems. For instance, systems such as ChatGPT would have to disclose that their content was AI-generated and provide safeguards against the generation of illegal content.

    Engaging in prohibited AI practices could lead to hefty fines: up to €40 million ($43 million) or an amount equal to up to 7% of a company’s worldwide annual turnover, whichever is higher.
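    The "whichever is higher" cap described above amounts to a simple maximum of two figures. As an illustrative sketch only (not legal advice, and using hypothetical turnover numbers):

    ```python
    def max_penalty_eur(worldwide_annual_turnover_eur: float) -> float:
        """Upper bound on fines for prohibited AI practices under the draft
        rules: the greater of a flat EUR 40 million or 7% of a company's
        worldwide annual turnover."""
        return max(40_000_000, 0.07 * worldwide_annual_turnover_eur)

    # For a company with EUR 2 billion in turnover, 7% (EUR 140 million)
    # exceeds the flat EUR 40 million floor:
    print(max_penalty_eur(2_000_000_000))  # → 140000000.0
    ```

    For smaller firms, the flat €40 million figure would dominate, which is why the "proportionate" language noted below matters for startups.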

    But penalties would be “proportionate” and consider the market position of small-scale providers, suggesting there could be some leniency for startups.

    Not everyone has pushed back on the legislation so far. Earlier this month, Digital Europe, a trade association that counts SAP (SAP) and Ericsson (ERIC) among its members, called the rules “a text we can work with.”

    “However, there remain some areas which can be improved to ensure Europe becomes a competitive hub for AI innovation,” the group said in a statement.

    Dragos Tudorache, a Romanian member of the European Parliament who led the bill’s drafting, said he was convinced that those who signed the new letter “have not read the text but have rather reacted on the stimulus of a few.”

    “The only concrete suggestions made are in fact what the [draft] text now contains: an industry-led process for defining standards, governance with industry at the table, and a light regulatory regime that asks for transparency. Nothing else,” he said in a statement.

    “It is a pity that the aggressive lobby of a few is capturing other serious companies in the net, which unfortunately undermines the undeniable lead that Europe has taken.”

    Brando Benifei, an Italian member of the European Parliament who also led the drafting of the legislation, told CNN “we will listen to all concerns and stakeholders when dealing with AI regulation, but we have a firm commitment to deliver clear and enforceable rules.”

    “Our work could positively affect the global conversation and direction when dealing with artificial intelligence and its impact on fundamental rights, without hindering the necessary pursuit of innovation,” he said.


  • Alibaba unveils its ChatGPT-style service | CNN Business



    Hong Kong CNN —

    Alibaba showed off its answer to the ChatGPT craze on Tuesday, demonstrating new software that it plans to eventually roll out across all its platforms.

    The Chinese tech giant unveiled Tongyi Qianwen, a large language model that will be embedded in its Tmall Genie smart speakers and workplace messaging platform DingTalk. It was trained on vast troves of data in order to generate compelling responses to users’ prompts.

    The technology will initially be integrated into those two products and eventually added to all Alibaba (BABA) applications, from e-commerce to mapping services, according to the company.

    Group CEO Daniel Zhang, who also oversees Alibaba’s cloud division, presented the new AI-powered service at a conference in Beijing, where the company demonstrated how it will allow users to transcribe meeting notes, craft business pitches and tell children’s stories.

    The company has opened up Tongyi Qianwen — which roughly translates as “seeking truth by asking a thousand questions” — to enterprise customers for testing before making it available to more users.

    “We are at a technological watershed moment, driven by generative AI and cloud computing,” Zhang said.

    Generative AI refers to the technology that underpins platforms like ChatGPT. The service has exploded in popularity in recent months, and Chinese tech companies have been racing to release their own versions, prompting some critics to predict that the trend will add fuel to an existing US-China rivalry in emerging technologies.

    Alibaba, which has a large cloud computing business, will also allow clients of that division to use the new technology to build their own customized large language models, the firm said in a statement.

    The debut comes after that of Baidu (BIDU), which launched its own ChatGPT-style service last month. During a similar presentation, Baidu showed how its chatbot, called ERNIE, could generate a company newsletter, come up with a corporate slogan and solve a math riddle.

    On Monday, SenseTime, one of China’s most prominent AI companies, launched a suite of new services, including a chatbot called SenseChat.

    China will be setting rules to govern the operation of such services. In draft guidelines issued Tuesday to solicit public feedback, the country’s cyberspace regulator said generative AI services would be required to undergo security reviews before they can operate.

    Service providers will also be required to verify users’ real identities. In addition, they must provide information about the scale and type of data they use, their basic algorithms and other technical information.

    Alibaba’s shares in Hong Kong ticked up 1.6% following its demonstration.

    The company announced last month that it planned to split its business into six units. Most of those units, including its cloud services business that oversees AI projects, will be authorized to raise capital and pursue public listings.

    — Juliana Liu contributed to this report.


  • The man behind ChatGPT is about to have his moment on Capitol Hill | CNN Business



    New York CNN —

    For a few months in 2017, there were rumors that Sam Altman was planning to run for governor of California. Instead, he kept his day job as one of Silicon Valley’s most influential investors and entrepreneurs.

    But now, Altman is about to make a different kind of political debut.

    Altman, the CEO and co-founder of OpenAI, the artificial intelligence company behind viral chatbot ChatGPT and image generator Dall-E, is set to testify before Congress on Tuesday. His appearance is part of a Senate subcommittee hearing on the risks artificial intelligence poses for society, and what safeguards are needed for the technology.

    House lawmakers on both sides of the aisle are also expected to hold a dinner with Altman on Monday night, according to multiple reports. Dozens of lawmakers are said to be planning to attend, with one Republican lawmaker describing it as part of the process for Congress to assess “the extraordinary potential and unprecedented threat that artificial intelligence presents to humanity.”

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    The hearing and meetings come as ChatGPT has sparked a new arms race over AI. A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts. This week’s hearing may only cement his stature as a central player in AI’s rapid growth – and also add to scrutiny of him and his company.

    Those who know Altman have described him as a brilliant thinker, someone who makes prescient bets and has even been called “a startup Yoda.” In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    “If anyone knows where this is going, it’s Sam,” Brian Chesky, the CEO of Airbnb, wrote in a post about Altman for the latter’s inclusion this year on Time’s list of the 100 most influential people. “But Sam also knows that he doesn’t have all the answers. He often says, ‘What do you think? Maybe I’m wrong?’ Thank God someone with so much power has so much humility.”

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    OpenAI declined to make anyone available for an interview for this story.

    The success of ChatGPT may have brought Altman greater public attention, but he has been a well-known figure in Silicon Valley for years.

    Prior to cofounding OpenAI with Musk in 2015, Altman, a Missouri native, studied computer science at Stanford University, only to drop out to launch Loopt, an app that helped users share their locations with friends and get coupons for nearby businesses.

    In 2005, Loopt was part of the first batch of companies at Y Combinator, a prestigious tech accelerator. Paul Graham, who co-founded Y Combinator, later described Altman as “a very unusual guy.”

    “Within about three minutes of meeting him, I remember thinking ‘Ah, so this is what Bill Gates must have been like when he was 19,’” Graham wrote in a post in 2006.

    Loopt was acquired in 2012 for about $43 million. Two years later, Altman took over from Graham as president of Y Combinator. The position connected him with numerous powerful figures in the tech industry. He remained at the helm of the accelerator until 2019.

    Margaret O’Mara, a tech historian and professor at the University of Washington, told CNN that Altman “has long been admired as a thoughtful, significant guy and in the remarkably small number of powerful people who are kind of at the top of tech and have a lot of sway.”

    During the Trump administration, Altman gained new attention as a vocal critic of the president. It was against that backdrop that he was rumored to be considering a run for California governor.

    Rather than running, however, Altman looked to back candidates who aligned with his values, which include a lower cost of living, clean energy and redirecting 10% of the defense budget to research and development of future technology.

    Altman continues to push for some of these goals through his work in the private sector. He invested in Helion, a fusion research company that inked a deal with Microsoft last week to sell clean energy to the tech giant by 2028.

    Altman has also been a proponent of the idea of a universal basic income and has suggested that AI could one day help fulfill that goal by generating so much wealth it could be redistributed back to the public.

    As Graham told The New Yorker about Altman in 2016, “I think his goal is to make the whole future.”

    When launching OpenAI, Musk and Altman’s original mission was to get ahead of the fear that AI could harm people and society.

    “We discussed what is the best thing we can do to ensure the future is good?” Musk told the New York Times about a conversation with Altman and others before launching the company. “We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing A.I. in a way that is safe and is beneficial to humanity.”

    In an interview at the launch of OpenAI, Altman explained the company as his way of trying to steer the path of AI technology. “I sleep better knowing I can have some influence now,” he said.

    If there’s one thing AI enthusiasts and critics can agree on right now, it may be that Altman clearly has succeeded in having some influence over the rapidly evolving technology.

    Less than six months after the release of ChatGPT, it has become a household name, almost synonymous with AI itself. CEOs are using it to draft emails. Realtors are using it to write listings and draft legal documents. The tool has passed exams from law and business schools – and been used to help some students cheat. And OpenAI recently released a more powerful version of the technology underpinning ChatGPT.

    Tech giants like Google and Facebook are now racing to catch up. Similar generative AI technology is quickly finding its way into productivity and search tools used by billions of people.

    A future that once seemed very far off now feels right around the corner, whether society is ready for it or not. Altman himself has professed not to be sure about how it will turn out.

    O’Mara said she believes Altman fits into “the techno-optimist school of thought that has been dominant in the Valley for a very long time,” which she describes as “the idea that we can devise technology that can indeed make the world a better place.”

    While Altman’s cautious remarks about AI may sound at odds with that way of thinking, O’Mara argues it may be an “extension” of it. In essence, she said, it’s related to “the idea that technology is transformative and can be transformative in a positive way but also has so much capacity to do so much that it actually could be dangerous.”

    And if AI should somehow help bring about the end of society as we know it, Altman may be more prepared than most to adapt.

    “I prep for survival,” he said in a 2016 profile of him in the New Yorker, noting several possible disaster scenarios, including “A.I. that attacks us.”

    “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”


  • Who says romance is dead? Couples are using ChatGPT to write their wedding vows | CNN Business




    CNN —

    When Elyse Nguyen was nearing her wedding date in February and still hadn’t started writing her vows, a friend suggested she try a new source of inspiration: ChatGPT.

    The AI chatbot, which was released publicly in late November, can generate compelling written responses to user prompts and offers the promise of helping people get over writer’s block, whether it be for an essay, an email, or an emotional speech.

    “At first we inputted the prompt as a joke and the output was pretty cheesy with personal references to me and my husband,” said Nguyen, a financial analyst at Qualcomm. “But the essence of what vows should incorporate was there – our promises to each other and structure.”

    She made edits, changed the prompts to add humor and details about her partner’s interests, and added some personal touches. Nguyen ultimately ended up using a good portion of ChatGPT’s suggestions and said her husband was on board with it.

    “It helped alleviate some stress because I had no prior experience with wedding vows nor did I know what should be included,” Nguyen said. “Plus, ChatGPT is a genius with alliteration, analogies and metaphors. Having something like, ‘I promise to be your partner in life with the enthusiasm of a golfer’s first hole in one’ in my back pocket was comical.”

    Nearly five months after ChatGPT went viral and ignited a new AI arms race in Silicon Valley, more couples are looking to it for help with wedding planning, including writing vows and speeches, drafting religious marriage contracts, and setting up websites for the special day.

    Ellen Le recently created some of her wedding website through a new Writer’s Block Assistant tool on online wedding planning service Joy, which was one of the first third-party platforms to incorporate ChatGPT’s technology. (Last month, OpenAI, the company behind ChatGPT, opened up access to the chatbot, paving the way for it to be integrated into numerous apps and services.)

    Le, a product manager at a startup, said she used the feature to draft an “about us” page and write directions from San Francisco to her Napa Valley wedding. The Writer’s Block Assistant tool helps users write vows, best man and maid of honor speeches, thank you cards and wedding website “about us” pages. It also lets users highlight personal stories and select the style or tone before pulling it into a speech.

    “I started drafting my vows and when I typed in how we met, it produced this very delightful story,” Le said. “Some of it was inaccurate, making up certain details, but it gave me a helping hand and something to react to, rather than just spending 10 hours thinking about how to get started.”

    Le said her fiancé, who often uses ChatGPT for work, is considering using AI to help with his vows too.

    Joy co-founder and CEO Vishal Joshi, who studied artificial intelligence and electrical engineering at NIT Rourkela in India, said the company launched Writer’s Block Assistant in March after it conducted an internal study that found most of its users were somewhat overwhelmed with getting started on writing vows and speeches, and wished they had help. He said the company has already seen thousands of submissions since launching the tool.

    “Almost two decades ago, AI enthusiasts like myself and my research peers had only dreamt of mass market adoption we are seeing today, and we know this is just the true beginning,” Joshi said. “Just like smartphones, if applied well, the positive impact of AI on our lives can far outshine the negatives. We’re working on responsibly innovating using AI to advance the wedding and event industry as a whole.”


    ChatGPT has sparked concerns in recent months about its potential to perpetuate biases, spread misinformation and upend certain livelihoods. Now, as it finds its way into marriage ceremonies, it could raise more nuanced questions about whether people risk losing something by injecting technology into what is supposed to be a deeply personal and, for many, spiritual moment in life.

    Michael Grinn, an anesthesiologist with practices in Miami and New York, was experimenting with ChatGPT when he asked it to produce a traditional Ketubah – a Jewish marriage contract – for his upcoming June wedding.

    Grinn and his fiancée Kate Gardiner, the founder and CEO of a public relations firm, then requested it make some language changes around gender equality and intimacy. “At the end, we both looked at each other and were like, we can’t disagree with the result,” he said.

    Editing took about an hour, but it still shaved hours off what otherwise could have been a lengthy process, he said. Still, Grinn plans to write his own vows. “I want them to be less refined and something no one else helped me with.”

    He does, however, plan to use ChatGPT for inspiration for officiating his best man’s wedding. “It mostly comes down to time because I’ve been working so much,” he said, “and this is so efficient.”
