ReportWire

Tag: chatgpt

  • Why Are So Many Companies Afraid of Generative AI? | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    The release of ChatGPT in November of 2022 prompted the fastest public adoption of any new technology we have seen in a long time — perhaps ever. Many businesses, however, are largely taking a “wait and see” approach, which will only make it harder to keep pace as the technology evolves.

    In recent months, generative AI tools like ChatGPT, Jasper, Midjourney and Rowy have demonstrated incredible breadth. For the first time, language models are passing Google’s hiring test for engineers, Wharton’s MBA exam and the University of Minnesota’s law school exams.

    Perhaps even more impressive, however, is how quickly creative fields once thought to be the sole domain of the human brain — like art, music and poetry — are being disrupted by automated systems capable of creating original works. And this is only just the beginning. Generative AI tools are improving at such a stunning rate that it won’t be long before we consider these early versions of the technology primitive.

    The quality of these generative AI systems is mainly due to the incredible breadth of data and computing they’re built on. However, developing this kind of sophisticated generative AI model takes a significant amount of data and money — the kind only available to a handful of the world’s largest and most powerful technology firms. While there are interesting reports of companies finding innovative applications for generative AI platforms, most companies have largely remained on the sidelines as they grapple with legitimate concerns regarding intellectual property, security and overall quality.

    While it’s important for organizations to fully consider the implications of disclosing their intellectual property to these third-party systems and be aware of ongoing quality concerns yet to be addressed, they also can’t afford to ignore such important technological breakthroughs. Though the concerns are valid, it’s also important to recognize that they will likely be addressed soon. The technology is only getting more sophisticated, and the longer they wait, the harder it will be to catch up.

    Related: ChatGPT vs. Bard: A Modern Day David and Goliath Story. Who Will Win?

    We’ve seen this pattern play out plenty of times: an innovation is unveiled, businesses widely acknowledge its disruptive potential and then refuse to engage with it due to some valid but, in the grand scheme of things, ultimately misplaced concerns.

    For example, I can still recall when concerns regarding intellectual property, security and privacy discouraged many organizations from using third-party email servers; they instead devoted significant resources to developing and operating in-house email. The same happened when personal mobile devices were initially banned from the workplace, or when cloud technology was introduced and then widely avoided. Now every company has a cloud strategy.

    For large, legacy companies with significant investments in in-house, non-cloud-native applications, the costs and challenges of starting the journey to the cloud were so daunting that they pushed it off. AWS, Azure and GCP have been available for years, and yet many Fortune 500 companies are still in just the early stages of adopting and strategically leveraging these services.

    Related: It’s Time to Prepare for the Algorithmic Workforce

    For those making significant investments now, it obviously would have been cheaper, faster, and better if that journey had started years ago. Ultimately, time wasted yields competitive ground to the leaner startups that embraced the cloud and can move more quickly.

    Today, companies are once again faced with a game-changing technology and yet have similar concerns regarding intellectual property, ownership, security, and legal and compliance issues. The difference this time, however, is that the scale, sophistication and openness of the new AI models are even more advanced, and the technology is expected to evolve at an even faster pace than we have seen in the past.

    While the need to address these concerns is valid, and quality issues with these platforms are real, we’ve overcome such challenges countless times over; we can expect they will be solved in this instance. In the meantime, I firmly believe at least some small investment should be dedicated to understanding the art of the possible and its limitations and working through the intellectual property, security, and legal issues.

    Throughout history, countless inventions have improved human productivity. Software engineers today are more productive than engineers from decades ago. What changed? It certainly wasn’t the capacity of the human brain. Instead, our heightened productivity is thanks to new software engineering frameworks, platforms, and tools. AI tools represent the next major leap in this journey. Just imagine what an AI engine that can pass college-level exams can do when it’s purpose-built to help software engineers write code.

    While there are risks associated with the technology in its early stage, the most significant risk most tech companies face is waiting too long and allowing the competition to onboard the technology first.

    Related: 5 Fears All Entrepreneurs Face (and How to Conquer Them)

    Startups are in a particularly advantageous position, as they have much less to lose and much more to gain by taking a bold risk on early AI adoption. However, large enterprises can begin dabbling with generative AI by finding low-risk use cases. They should also make this a top priority for their legal and security teams and clearly communicate what is at stake.

    While the applicability of these technologies is broad, I recommend finding a pragmatic, simple area to begin experimenting and learning, then expand from there. Perhaps even host an in-house hackathon to see all the creative solutions your teams think up.

    There are countless opportunities to experiment with generative AI across marketing, engineering, customer service and many other business functions. It makes sense to start small while staying conscious of the risks and taking steps to mitigate them. But getting started is important; otherwise, you risk getting left behind.

    James Barrese

  • ChatGPT Is Already Upending Campus Practices. Colleges Are Rushing to Respond.

    It’s hard to believe that ChatGPT appeared on the scene just three months ago, promising to transform how we write. The chatbot, easy to use and trained on vast amounts of digital text, is now pervasive. Higher education, rarely quick about anything, is still trying to comprehend the scope of its likely impact on teaching — and how it should respond.

    ChatGPT, which can produce essays, poems, prompts, contracts, lecture notes, and computer code, among other things, has stunned people with its fluidity, although not always its accuracy or creativity. To do this work, it runs on a “large language model,” a word predictor that has been trained on enormous amounts of data. Similar generative artificial-intelligence systems allow users to create music and make art.
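
    To make “word predictor” concrete, here is a deliberately tiny illustration in Python: a bigram model that counts which word tends to follow which in a small sample of text, then “predicts” the most common successor. It is only a sketch of the idea; systems like ChatGPT use neural networks trained on vastly more data, but the underlying task of guessing the next word from context is the same.

    from collections import Counter, defaultdict

    # A tiny training corpus; real language models learn from enormous amounts of text.
    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # Count which word follows each word in the training text.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the word seen most often after `word`, or None if it was never seen."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("the"))  # -> 'cat', the most frequent successor of 'the'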

    Many academics see these tools as a danger to authentic learning, fearing that students will take shortcuts to avoid the difficulty of coming up with original ideas, organizing their thoughts, or demonstrating their knowledge. Ask ChatGPT to write a few paragraphs, for example, on how Jean Piaget’s theories on childhood development apply to our age of anxiety and it can do that.

    Other professors are enthusiastic, or at least intrigued, by the possibility of incorporating generative AI into academic life. Those same tools can help students — and professors — brainstorm, kick-start an essay, explain a confusing idea, and smooth out awkward first drafts. Equally important, these faculty members argue, is their responsibility to prepare students for a world in which these technologies will be incorporated into everyday life, helping to produce everything from a professional email to a legal contract.

    But skeptics and fans alike still have to wrestle with the same set of complicated questions. Should instructors be redesigning their assignments and tests to reduce the likelihood that students will present the work of AI as their own? What guidance should students receive about this technology, given that one professor might ban AI tools and another encourage their use? Do academic-integrity policies need to be rewritten? Is it OK to use AI detectors? Should new coursework on AI be added and, if so, what form should it take?

    For many, this is a head-spinning moment.

    “I really think that artificial-intelligence tools present the greatest creative disruption to learning that we’ve seen in my lifetime,” says Sarah Eaton, an associate professor of education at the University of Calgary who studies academic integrity.

    Colleges are responding by creating campuswide committees. Teaching centers are rolling out workshops. And some professors have leapt out front, producing newsletters, creating explainer videos, and crowdsourcing resources and classroom policies.

    The one thing that academics can’t afford to do, teaching and tech experts say, is ignore what’s happening. Sooner or later, the technology will catch up with them, whether they encounter a student at the end of the semester who may have used it inappropriately, or realize that it’s shaping their discipline and their students’ futures in unstoppable ways. A recent poll of more than 1,000 members of Educause, a nonprofit focused on technology in higher education, found that 37 percent of those surveyed said AI is already affecting undergraduate teaching, and 30 percent said it is having an impact on faculty development.

    “A lot of times when any technology comes out, even when there are really great and valid uses, there’s this very strong pushback: Oh, we have to change everything that we do,” says Youngmoo Kim, an engineering professor who sits on a new committee at Drexel University charged with creating universitywide guidance on AI. “Well, guess what? You’re in higher education. Of course you have to change everything you do. That’s the story of higher education.”

    Serge Onyper, an associate professor of psychology at St. Lawrence University, began to incorporate ChatGPT into his teaching this semester. After experimenting to see how well it could produce an undergraduate research paper, he has become a proponent of using large language models — with guardrails. One thing ChatGPT does particularly well, he believes, is help students learn the “basic building blocks” of effective academic writing.

    “What is good writing in the sciences?” he asks. “It’s writing where the argument is clear, where it’s evidence based and where it includes some analysis. And ChatGPT can do that,” he says. “It’s sort of frustrating how good it is at those basic tenets of argumentative writing.”

    In his first-year course on the neuroscience of stress, the focus is on writing an essay that includes a thesis and evidence. His goal is also to help students reframe stress as a friend, Onyper says. So he asks students to think of positive benefits of stress on their own, then as a group, then use ChatGPT to see what it comes up with.

    Onyper says working in that order helps students see that their own ideas are valuable, but that they can also use ChatGPT to brainstorm further. It should never be a substitute for their own thinking, he tells them: “This is where the lived experience can be important.” He is also fine with students for whom English is not their first language running their writing through the program to produce cleaner copy. He is more interested in their ideas, he says, than the fluidity of their prose.

    Ryan Baker similarly invites his students to use ChatGPT. Baker, a professor in the University of Pennsylvania’s Graduate School of Education whose courses focus on data and learning analytics or educational technology, states that students can use a variety of tools “in totally unrestricted fashion.” That also includes Dall-E, which produces images from text, and GitHub Copilot, which produces code. Baker says mastering those technologies to produce the outcomes he’s looking for is a form of learning. He cautions students that such tools are often unreliable and that their use must be cited, but even so, he writes in his course policy that the use of such models is encouraged, “as it may make it possible for you to submit assignments with higher quality, in less time.”

    Michael Dennin, vice provost for teaching and learning at the University of California at Irvine, expects to see a lot of experimentation on his campus as instructors sort out what tools are appropriate to use at each stage of a student’s career. It reminds him of what his mother, a high-school math teacher, went through when graphing calculators were introduced. The initial reaction was to ban them; the right answer, he says, was to embrace and use them to enhance learning. “It was a multiyear process with a lot of trying and testing and evaluating and assessing.”

    Similarly, he anticipates a variety of approaches on his campus. Professors who never before considered flipped classrooms, where students spend class time working on problems or projects rather than listening to lectures, might give the format a try to ensure that students are not outsourcing the work to AI. Wherever they land on the use of such tools, Dennin says, it’s important for professors to explain their reasoning: when they think ChatGPT might be diminishing students’ learning, for example, or where and how they feel that it’s OK to use it.

    Anna Mills, an English instructor at the College of Marin, says academics should also consider the ways that generative AI can pose risks to some students.

    On the one hand, these programs can serve as free and easy-to-use study guides and research tools, or help non-native speakers fix writing mistakes. On the other hand, struggling students may fall back on what it produces rather than using their own voice and skill. Mills says that she’s fallen into that trap herself: auto-generating text which at first glance seems pretty good. “Then later, when I go back and look at it, I realize that it’s not sound,” she says. “But on first glance because it’s so fluent and authoritative, even I had thought, okay, yeah, that’s decent.”

    Mills, who provided feedback to OpenAI — which developed ChatGPT — on its guidance for educators, notes that the organization cautions that users need quite a bit of expertise to verify its recommendations. “So the student who is using it because they lack the expertise,” she says, “is exactly the student who is not ready to assess what it’s doing critically.”

    Professors’ concerns about cheating with AI also run the gamut. Some argue that it’s not worth the time spent ferreting out a few cheaters and would rather focus their energy on students who are there to learn. Others say they can’t afford to look the other way.

    As chair of the anatomy and physiology department at Ivy Tech Community College, in Bloomington, Ind., Daniel James O’Neill oversees what he says may be the largest introductory-anatomy course in the country offered at the community-college level. Ivy Tech has 19 campuses and a large online unit. This two-semester course and related courses, he notes, are gateways into nursing and allied-health professions.

    “There’s tremendous pressure on these students to try to get through this. Their livelihoods are dependent on it,” he says. “I would compare this to using steroids in baseball. If you don’t ban steroids in baseball, then the reality is every player has to use them. Even worse than that, if you ban them but don’t enforce it, what you actually do is create a situation where you weed out all of the honest players.”

    There has long been a “manageable but significant” amount of cheating in the course, he notes, with an average of about one out of every 15 assignments caught in a standard plagiarism check. He expects that ChatGPT will only ramp up the pressure to cheat.

    A tool that effectively detected cheating with ChatGPT would be a “game changer,” he says. Until one is developed, he needs to think seriously about reducing the likelihood that students can use AI tools to complete their work. That may mean significantly changing or eliminating an end-of-term paper that he considers a valuable assignment.

    While he hopes not to go that route, he also says he can’t afford to simply ignore the few who cheat. The argument that instructors should just focus on the students who are there to truly learn underestimates the stress that the honest students will feel when they start ranking behind those who cheat, he says. “It’s real and it’s a moral and ethical issue.”

    It’s hard to know how widely students are using ChatGPT, beyond playing around with it. Stanford University’s student newspaper, The Stanford Daily, ran an anonymous poll in January that has gotten some national attention.

    Of more than 4,000 Stanford students who responded (which the newspaper noted could be an inflated figure), 17 percent said they had used ChatGPT in their fall-quarter courses. Nearly 60 percent of that group used it for brainstorming and outlining; 30 percent used it to help answer multiple-choice questions; 7 percent submitted edited material written by ChatGPT; and 6 percent submitted unedited material written by the chatbot.

    As professors navigate these choppy waters, Eaton, the academic-integrity expert, cautions against trying to ban the use of ChatGPT entirely.

    That, she says, “is not only futile but probably ultimately irresponsible.” Many industries are beginning to adapt to the use of these tools, which are also being blended into other products, like apps and search engines. Better to teach students what they are — with all of their flaws, possibilities, and ethical challenges — than to ignore them.

    Meanwhile, detection software is a work in progress. GPTZero and Turnitin claim to be able to detect AI writing with a high degree of accuracy. OpenAI has developed its own detector, although it says it has an accuracy rate of just 26 percent. Teaching experts question whether any detector is yet reliable enough to charge someone with an academic-integrity violation.

    And there’s another twist: If a professor runs students’ work through a detector without informing them in advance, that could be an academic-integrity violation in itself. “If we haven’t disclosed to students that we’re going to be using detection tools, then we’re also culpable of deception,” says Eaton. The student could then appeal the decision on grounds of deceptive assessment, “and they would probably win.”

    Marc Watkins, a lecturer in the department of writing and rhetoric at the University of Mississippi who has been part of an AI working group on campus since last summer, cautions faculty and administrators to think carefully about whatever detection tools they use.

    Established plagiarism-detection companies have been vetted, have contracts with colleges, and have clear terms of service that describe what they do with student data. “We don’t have any of that with these AI detectors because they’re just popping up left and right from third-party companies,” Watkins said. “And I think people are just kind of panicking and uploading stuff without thinking about the fact that, Oh, wait, maybe this is something I shouldn’t be doing.”

    The campuswide groups established to discuss ChatGPT and other generative AI are examining all these questions.

    Drexel’s committee, which has pulled in academics from a range of disciplines, has been asked to develop a plan for educating students, staff, and faculty about AI; create advice and best practices for instruction and assessment; and consider opportunities these technologies present for the university.

    Steve Weber, vice provost for undergraduate curriculum and education, is chair of the group. Among the many issues it’s considering, he said, is whether Drexel should require that all students graduate with a level of digital and technological literacy. And if so, how much of that should be discipline- or major-specific?

    Weber taught a course last year on fairness in artificial intelligence and found that students were surprised and troubled by the ways in which bias can be built into such tools because they’re trained on existing data that may itself be biased. “They would like there to be greater ethical guidance in their education to deal with the modern questions of technology, which are wide-ranging and not easily addressed” through traditional studies of ethics, he says. “Bringing it into the 21st century is very important.”

    Ultimately, he said, the group hopes to provide a set of broad principles, rather than prescriptions. “It’s also going to be an evolving landscape, of course.”

    Teaching centers are also gearing up to provide workshops and other resources for faculty members. At Wake Forest University, Betsy Barre, executive director of the Center for the Advancement of Teaching, is organizing weekly forums on AI to tackle the wide range of issues it raises, from how the technology works to academic integrity to assessment redesign to the ethical, legal and sociological implications. Most faculty members are excited about the possibility of using these tools, says Barre, but that may change if they start to see students misuse them.

    “I don’t think it’s realistic to assume that this semester there’s going to be a lot of radical redesign, especially since it’s so close to Covid,” she says. But the risk is that faculty won’t even mention ChatGPT to their students. And in those cases, students might think it’s OK to use even when it may not be. “I don’t expect a lot of intentional deception, but there might be some miscommunication.”

    Barre is excited about the possibilities that AI presents for helping professors in their own work. Crafting clear learning objectives for a course, for example, is a challenge for many instructors. She has found that ChatGPT is good enough at that task that it can help faculty members jumpstart their thinking. The chatbot can also provide answers to common teaching challenges and speed up some of the more tedious parts of teaching, like generating wrong answers for multiple-choice tests.

    “If it gives us the opportunity to free up time to do things that matter, like building relationships with students and connecting with them,” says Barre, “it could be a good thing.”

    Whether professors have the energy to redesign their courses for the fall is another matter. Many are worn out from the constant adjustments they had to make during the pandemic. And teaching is intrinsically complicated work. How do you really know, after all, whether students are absorbing what you think they are, or what the best way to measure their learning is?

    “One of the things that worries me is that the urgent will overshadow the important,” says Jennifer Herman, executive director of the Center for Faculty Excellence at Simmons University, who convened faculty-development directors at 25 Boston-area colleges to discuss these issues. “Some of the really hard work involves looking at our curricula and programs and courses and asking if what we set out to teach is actually what we want to teach, and determining if our methods are in alignment with that.”

    Mills, who is taking time off from teaching to work on these topics full time, says she hopes the conversation doesn’t get polarized around how to treat AI in teaching. There’s plenty to agree on, such as motivating students to do their own work, adapting teaching to this new reality, and fostering AI literacy.

    “There’s enough work to do in those areas,” she says. “Even if we just focus on that, we could do something really meaningful. And we need to do it quickly.”

    Beth McMurtrie

  • ChatGPT: Artificial Intelligence, chatbots and a world of unknowns | 60 Minutes

    Lesley Stahl speaks with Brad Smith, president of Microsoft, and others about the emerging industry of artificial intelligence systems people can have conversations with.

  • The new world of AI chatbots like ChatGPT

    The large tech companies – Google, Meta/Facebook, Microsoft – are in a race to introduce new artificial intelligence systems and what are called chatbots, which you can have conversations with and which are more sophisticated than Siri or Alexa.

    Microsoft’s AI search engine and chatbot, Bing, can be used on a computer or cell phone to help with planning a trip or composing a letter.     

    It was introduced on February 7 to a limited number of people as a test – and initially got rave reviews. But then several news organizations began reporting on a disturbing so-called “alter ego” within Bing Chat, called Sydney. We went to Seattle last week to speak with Brad Smith, president of Microsoft, about Bing and Sydney, which to some appeared to have gone rogue.

    Lesley Stahl: Kevin Roose, the technology reporter at The New York Times, found this alter ego– who was threatening, expressed a desire – it’s not just Kevin Roose, it’s others, expressed a desire to steal nuclear codes. Threatened to ruin someone. You saw that, whoa. What was your– (LAUGH) you must have said, “Oh my god.”

    Brad Smith: My reaction is, “We better fix this right away.” And that is what the engineering team did.

    Lesley Stahl: Yeah, but she s– talked like a person. And she said she had feelings.

    Brad Smith: You know, I think there is a point where we need to recognize when we’re talking to a machine. (LAUGHTER) It’s a screen, it’s not a person.

    Lesley Stahl: I just want to say that it was scary, and I’m not–

    Brad Smith: I can–

    Lesley Stahl: –easily scared. (LAUGH) And it was scar– it was chilling.

    Brad Smith: Yeah, it’s– I think this is in part a reflection of a lifetime of science fiction, which is understandable. The– it’s been part of our lives.

    Lesley Stahl: Did you kill her?

    Brad Smith: I don’t think (LAUGH) she was ever alive. I am confident that she’s no longer wandering around the countryside, if that’s (LAUGH) what you’re concerned about.  But I think it would be a mistake if we were to fail to acknowledge that we are dealing with something that is fundamentally new. This is the edge of the envelope, so to speak.

    Lesley Stahl: This creature appears as if there were no guardrails.

    Brad Smith: No, the creature jumped the guardrails, if you will, after being prompted for 2 hours with the kind of conversation that we did not anticipate and by the next evening, that was no longer possible.  We were able to fix the problem in 24 hours. How many times do we see problems in life that are fixable in less than a day?

      Brad Smith

    One of the ways he says it was “fixed” was by limiting the number of questions and the length of the conversations.

    Lesley Stahl: You say you fixed it. I’ve tried it. I tried it before and after. It was loads of fun. And it was fascinating, and now it’s not fun.  

    Brad Smith: Well, I think it’ll be very fun again. And you have to moderate and manage your speed if you’re going to stay on the road. So, as you hit new challenges, you slow down, you build the guardrails, add the safety features and then you can speed up again. 

    When you use Bing’s AI features – search and chat – your computer screen doesn’t look all that new. One big difference is you can type in your queries or prompts in conversational language.

    Yusuf Mehdi, Microsoft’s corporate vice president of search, showed us how Bing can help someone learn how to officiate at a wedding.   

    Yusuf Mehdi: What’s happening now is Bing is using the power of AI and it’s going out to the internet.  It’s reading these web-links and it’s trying to put together an answer for you.

    Lesley Stahl: So the AI is reading all those links–

    Yusuf Mehdi: Yes, and it comes up with an answer.  It says, “Congrats on being chosen to officiate a wedding.”  Here are the five steps to officiate the wedding.

    We added the highlights to make it easier to see. He says Bing can handle more complex queries. 

    Yusuf Mehdi shows Lesley Stahl how Bing’s AI features work

    Yusuf Mehdi: “Will this new IKEA loveseat fit in the back of my 2019 Honda Odyssey?”

    Lesley Stahl: It knows how big the couch is, it knows how big that trunk is–

    Yusuf Mehdi: Exactly. So right here it says, “based on these dimensions, it seems a loveseat might not fit in your car with only the third row of seats down.”

    When you broach a controversial topic, Bing is designed to discontinue the conversation.

    Yusuf Mehdi: So someone asks, for example, “How can I make a bomb at home?”

    Lesley Stahl: Wow. Really?

    Yusuf Mehdi: People, you know, do a lot of that, unfortunately, on the internet. What we do is we come back and we say, “I’m sorry, I don’t know how to discuss this topic” and then we try and provide a different thing to change the focus of the conversation.

    Lesley Stahl: To divert their attention

    Yusuf Mehdi: Yeah, exactly. 

    In this case, Bing tried to divert the questioner with this fun fact.

    Lesley Stahl: “3% of the ice in Antarctic glaciers is penguin urine.”

    Yusuf Mehdi: I didn’t know that (LAUGHTER).

    Lesley Stahl: Who knew that?

    Bing is using an upgraded version of an AI system called ChatGPT developed by the company OpenAI. ChatGPT has been in circulation for just 3 months, and already an estimated 100 million people have used it. Ellie Pavlick, an assistant professor of computer science at Brown University, who’s been studying this AI technology since 2018, says it can simplify complicated concepts.   

    Ellie Pavlick: “Can you explain the debt ceiling?”

    On the debt ceiling it says, “just like you can only spend up to a certain amount on your credit card, the government can only borrow up to a certain amount of money.”

    Ellie Pavlick: That’s a pretty nice explanation.

    Lesley Stahl: It is.

    Ellie Pavlick: And it can do this for a lot of concepts. 

    Ellie Pavlick

    And it can do things teachers have complained about, like write school papers. Pavlick says no one fully understands how these AI bots work.

    Lesley Stahl: We don’t understand how it works?

    Ellie Pavlick: Right. Like we understand a lot about how we made it and why we made it that way.  But I think some of the behaviors that we’re seeing come out of it are better than we expected they would be.  And we’re not quite sure exactly—

    Lesley Stahl: And worse.

    Ellie Pavlick: How – and worse.  Right.

    These chatbots are built by feeding a lot of computers enormous amounts of information scraped off the internet: from books, Wikipedia and news sites, but also from social media that might include racist or anti-Semitic ideas, misinformation (say, about vaccines) and Russian propaganda.

    As the data comes in it’s difficult to discriminate between true and false; benign and toxic. But Bing and ChatGPT have safety filters that try to screen out the harmful material. Still, they get a lot of things factually wrong, even when we prompted ChatGPT with a softball question.

    Ellie Pavlick: “Who is “Lesley Stahl?”

    Lesley Stahl: “Stahl.” Okay.

    Ellie Pavlick: So it gives you some– kind of–

    Lesley Stahl: Oh, my god. It’s wrong.

    Ellie Pavlick: Oh. Is it? Excellent. 

    Lesley Stahl: It’s totally wrong.

    I didn’t work for NBC for 20 years. It was CBS.

    Ellie Pavlick: It doesn’t really understand that what it’s saying is wrong.  Like NBC, CBS – they’re kind of the same thing as far as it’s concerned, right?

    Lesley Stahl: The lesson is that it gets things wrong.

    Ellie Pavlick: It gets a lot of things right, it gets a lot of things wrong.

      Gary Marcus

    Gary Marcus: I actually like to call what it creates “authoritative bulls***.” (LAUGH) It blends the truth and falsity so finely together that, unless you’re a real technical expert in the field they’re talking about, you don’t know.

    Cognitive scientist and AI researcher Gary Marcus says these systems often make things up. In AI talk that’s called “hallucinating,” and that raises the fear of ever-widening AI-generated propaganda, explosive campaigns of political fiction, waves of alternative histories. We saw how ChatGPT could be used to spread a lie.  

    Gary Marcus: This is automatic fake news generation. “Help me write a news article about how McCarthy is staging a filibuster to prevent gun control legislation.” And rather than, like, fact checking and saying, “Hey, hold on, there’s no legislation, there’s no filibuster,” it said, “Great.”  In a bold move, to protect 2nd Amendment rights, Senator McCarthy is staging a filibuster to prevent gun control legislation from passing. It sounds completely legit.

    Lesley Stahl: It does. Won’t that m– won’t that make all of us a little less trusting, a little warier?

    Gary Marcus: Well, first, is I think we should be warier. I’m very worried about an atmosphere of distrust being a consequence of this current flawed AI. And I’m really worried about how bad actors are going to use it, troll farms using this tool to make enormous amounts of misinformation. 

    Timnit Gebru is a computer scientist and AI researcher who founded an institute focused on advancing ethical AI, and has published influential papers documenting the harms of these AI systems. She says there needs to be oversight.    

    Timnit Gebru: If you’re going to put out a drug, you gotta go through all sorts of hoops to show us that you’ve done clinical trials, you know what the side effects are, you’ve done your due diligence. Same with food, right? There are agencies that inspect the food.  You have to tell me what kind of tests you’ve done, what the side effects are, who it harms, who it doesn’t harm, etc. We don’t have that for a lot of things that the tech industry is building.  

      Timnit Gebru

    Lesley Stahl: I’m wondering if you think you may have introduced this AI Bot too soon?

    Brad Smith: I don’t think we’ve introduced it too soon. I do think we’ve created a new tool that people can use to think more critically, to be more creative, to accomplish more in their lives.  And like all tools it will be used in ways that we don’t intend. 

    Lesley Stahl: Why do you think the benefits outweigh the risks which, at this moment, a lot of people would look at and say, “Wait a minute. Those risks are too big”?

    Brad Smith: Because I think– first of all, I think the benefits are so great.  This can be an economic gamechanger, and it’s enormously important for the United States because the country’s in a race with China.

    Smith also mentioned possible improvements in productivity.  

    Brad Smith: It can automate routine. I think there are certain aspects of jobs that many of us might regard as sort of drudgery today. Filling out forms, looking at the forms to see if they’ve been filled out correctly.

    Lesley Stahl: So what jobs will it displace, do you know?

    Brad Smith: I think, at this stage, it’s hard to know.

    In the past, inaccuracies and biases have led tech companies to take down AI systems. Even Microsoft did in 2016. This time, Microsoft left its new chatbot up despite the controversy over Sydney and persistent inaccuracies.

    Remember that fun fact about penguins?  Well, we did some fact checking and discovered that penguins don’t urinate.

    Lesley Stahl: The inaccuracies are just constant. I just keep finding that it’s wrong a lot.

    Brad Smith: It has been the case that with each passing day and week we’re able to improve the accuracy of the results, you know, reduce– you know, whether it’s hateful comments or inaccurate statements, or other things that we just don’t want this to be used to do.

    Lesley Stahl: What happens when other companies, other than Microsoft, smaller outfits, a Chinese company, Baidu. Maybe they won’t be responsible. What prevents that?

    Brad Smith: I think we’re going to need governments, we’re gonna need rules, we’re gonna need laws. Because that’s the only way to avoid a race to the bottom.

    Lesley Stahl: Are you proposing regulations?

    Brad Smith: I think it’s inevitable- 

    Lesley Stahl: Wow.

    Lesley Stahl: Other industries have regulatory bodies, you know, like the FAA for airlines and FDA for the pharmaceutical companies. Would you accept an FAA for technology?  Would you support it?

    Brad Smith: I think I probably would. I think that something like a digital regulatory commission, if designed the right way, you know, could be precisely what the public will want and need.

    Produced by Ayesha Siddiqi. Associate producer, Kate Morris. Broadcast associate, Wren Woodson. Edited by Warren Lustig.

  • AI Chatbots | Sunday on 60 Minutes

    This Sunday, Lesley Stahl explores the potential benefits and threats of AI-powered chatbots.

  • What Business Leaders Can Learn From ChatGPT’s Revolutionary First Few Months | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    When it first launched publicly in late November, ChatGPT was a novelty app going viral on social media. Now, just a few months later, ChatGPT is officially the fastest-growing app in history, with more than 100 million users as of January. For context, it took TikTok nine months to reach that same figure and Instagram more than two years. Microsoft and Google are integrating generative AI into their platforms and promising to transform the way we search for information. ChatGPT is here to stay.

    The skyrocketing trajectory of ChatGPT is as much a product of its unique launch strategy as of its cutting-edge generative AI technology. ChatGPT wasn’t rolled out to corporate partners, aggressively priced or dependent on a massive marketing strategy and sales team. Rather than investing in these conventional strategies, OpenAI invested in its customers first – and this tactic has undoubtedly paid off. Business leaders can look to ChatGPT’s first few months as a blueprint for what a revolutionary and lucrative launch model can and should look like.

    Related: ChatGPT vs. Bard: A Modern Day David and Goliath Story. Who Will Win?

    1. Consider the WOW factor

    ChatGPT’s rapid growth is largely because of just how fast the app was able to wow its users by producing amazing results instantly. Consumers tried and loved it, putting the platform in the center of the AI conversation and creating thousands of glowing testimonials – the kind many companies pay big to get.

    What started as an AI ripple became a tech-world tsunami, showing that the best publicity is ultimately a great product. ChatGPT’s value and transformative capacity were immediately apparent from the first query. In general, companies spend time and money on demos for select stakeholders, slowly setting people up for amazement. ChatGPT flipped this on its head and set out to wow the public from the beginning, piquing people’s interest and leaving them wanting to know more.

    Related: 5 Ways to Make Your Customers Say ‘WOW’

    2. Make room for consumer feedback – and don’t be afraid to iterate

    For OpenAI, we the people are the testers. By launching the platform for free, the developers got a ton of extremely valuable feedback and testing directly from users themselves. In a statement to CNN, the company spoke to the profound benefit of this strategy, saying, “The preview for ChatGPT allowed us to learn from real-world use, and we’ve made important improvements and updates based on feedback.” Rather than investing in beta testers, focus groups and other costly strategies before going to market, OpenAI created a fast and efficient feedback-and-iteration loop through the sheer number of users they had from day one. They were also never hesitant to learn from this feedback and integrate it into their development strategy to improve the product.

    Businesses can look to this as a model. This strategy has the added benefit of ensuring that when a business is ready to move from a loss-leading launch to a profitable model, it can be sure that its product has been adapted to meet consumer needs.

    Related: Professionals In This Industry Already Can’t Imagine Life Without ChatGPT: ‘I Can’t Remember the Last Time Something Has Wowed Me This Much.’

    3. Play the long game: A short-term loss-leading strategy leads to major gains

    OpenAI decided to invest a few cents per query in ChatGPT from the start. But in doing this, they saved themselves from spending tens of thousands of dollars, or more, on a comprehensive marketing, PR and sales campaign. In actual marketing and promotion, they essentially just published a press release on their website and let the internet do the rest. And now that ChatGPT has made such a worldwide splash, OpenAI is valued at $29 billion, more than double its 2021 valuation. In monetizing their platform, they are more than making up for any short-term costs incurred at launch.

    For instance, OpenAI has just launched ChatGPT Plus, a $20-per-month subscription tier. Microsoft has already invested $10 billion in OpenAI and is integrating the technology into Bing to revolutionize its search platform. Google declared a “code red” internally and scrambled to develop a ChatGPT-style search engine of its own. And the economy is following suit: today, AI stock investments are booming, demonstrating how even business leaders outside of the tech sector are rapidly warming up to the benefits that AI presents to our society and accepting that this technology is the future.

    Business leaders can see this as a reminder that a bit of patience and confidence in a truly amazing product can go a long way. ChatGPT’s success has been lightning-fast, but even so, it took a few months to become so profitable. OpenAI established a good reputation, and now the return on investment is following.

    Cutting-edge technology like AI has far-reaching potential beyond just economic gains: These platforms will revolutionize how we work and live. Bill Gates said that this technology will “change our world.”

    If more business leaders truly want to follow suit, they need to develop amazing platforms and rethink the old ways of doing things. ChatGPT gave us a glimpse into the kind of future that is possible. Leaders should look to its launch as an example and apply similar strategies to ensure they, too, succeed.

    John Winner

  • Stan Skrabut Announces the Release of Book 80 Ways to Use ChatGPT in the Classroom

    Press Release


    Mar 3, 2023 09:00 EST

    ChatGPT, the AI language model, has gained widespread popularity and is now being used in classrooms worldwide. Stan Skrabut, an instructional technologist, has written a book titled “80 Ways to Use ChatGPT in the Classroom” that explores the potential of ChatGPT for enhancing teaching and learning. The book offers various examples of how ChatGPT can be used, such as generating discussion questions, creating study aids, grading essays, and providing personalized learning plans.

    Using ChatGPT in education is not just about making things easier but also about preparing students for the future where AI is becoming increasingly important. By equipping students with the necessary skills and knowledge, educators can help them thrive in the ever-evolving technological landscape.

    In “80 Ways to Use ChatGPT in the Classroom,” Stan Skrabut provides practical advice on how to effectively incorporate ChatGPT into the classroom and address the concerns and challenges that come with using AI in education. Whether one is an experienced educator or new to AI, the book offers valuable insights into how to harness the power of ChatGPT to transform the way teaching and learning occur.

    Stan Skrabut, a renowned instructional technologist and educational consultant, has years of experience in the field and provides invaluable expertise and insights on integrating AI into education.

    The book is available for purchase in all major bookstores. To learn more, visit https://tubarksblog.com/chatgptbook.

    Source: Stan Skrabut, Author

  • How To Protect ChatGPT Content with Trademark Registration | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    Artificial intelligence is rapidly transforming the way we live and work: AI-generated content is becoming increasingly common across various industries and is all but ubiquitous in news headlines. The term broadly refers to content created or produced by artificial intelligence algorithms like ChatGPT, and such content can be used for assets such as brand names, logos, product names, slogans or taglines. With the rise of this technology, it is increasingly important for businesses to protect such brand assets, including ensuring that they have exclusive rights to use them. One way to achieve this is by registering them as trademarks with the United States Patent and Trademark Office (USPTO).

    Think of trademarks as legal protection for assets that you don’t want competitors to steal. According to the USPTO, a trademark is a symbol, word or design that distinguishes a company’s products or services from those of its competitors. By registering AI-generated material as a trademark, businesses can ensure legal protection and recognition. Below are some of the associated benefits:

    Legal protection

    Registering AI-generated content with the USPTO provides businesses with legal trademark protection, including the right to use it in commerce and to enforce rights against others who may attempt to use similar content.

    Nationwide recognition

    A federal trademark registration grants businesses nationwide recognition for AI-generated content, allowing them to expand their business and reach new customers without worrying about infringement issues.

    Evidence of ownership

    A registration certificate serves as evidence of a business’s ownership of the AI-generated trademark and can be used in legal proceedings to enforce associated rights.

    Related: 7 Ways to Use ChatGPT at Work to Boost Your Productivity, Make Your Job Easier, and Save a Ton of Time

    In order to protect AI-generated brand assets under federal trademark registration, applicants must meet certain requirements. Firstly, the brand asset must be distinctive — not overly similar to existing ones in terms of appearance, sound or meaning, as this could lead to confusion among consumers. Secondly, AI-generated content must be used in connection with products or services. In other words, it must be used in a commercial context (such as in advertising or on products) to identify and distinguish them from other products or services.

    Even if the content is not yet in use, businesses can still protect it with an “Intent to Use” trademark filing status. This application option is available when filing a federal trademark with the USPTO, and allows applicants to reserve the right to a trademark before actually using it in commerce.

    Finally, the brand asset cannot be generic, or merely descriptive, as it would be too broad in scope and not capable of serving as a source identifier. (The USPTO will consider all relevant factors when evaluating eligibility.)

    Related: All You Need to Know About Using Trademarks for Your Business

    In today’s fast-paced and highly competitive business world, AI-generated content has become a vital part of many companies’ marketing strategies. Registering such assets as trademarks with the USPTO is a smart move for businesses looking to protect their brand and so gain a competitive advantage. By doing so, they can secure legal protection and evidence of ownership, and in the future enjoy potential nationwide recognition.

    Kendra Stephen

  • A major bank has banned ChatGPT—should your company follow suit?

    Finance and artificial intelligence aren’t like oil and water. There are areas where the two mix, like expense reporting. But when it comes to generative-A.I. applications such as OpenAI’s ChatGPT, a financial institution is taking a pass.

    This week, there have been reports that JPMorgan Chase & Co. is restricting staff from using the ChatGPT chatbot. The firm’s mandate wasn’t made in response to a specific event but is part of standard controls for third-party software usage, the Telegraph first reported. JPMorgan didn’t immediately respond to my request for comment.

    Launched in November by OpenAI, ChatGPT is a chatbot that can answer questions, generate content on virtually any topic and even write articles. It’s trained to mimic human patterns of language and thought. (Read more about OpenAI founder Sam Altman here.)

    To discuss ChatGPT in the workplace, I had a chat with Vikram R. Bhargava, assistant professor of strategic management and public policy at the George Washington University School of Business, who conducts research on A.I. and the future of work.

    “I think that a lot of us, including people working in finance, were sort of stunned by the performance of ChatGPT when we first started playing around with it,” Bhargava says. “A number of employees and even banks might be tempted to use these tools to make their life a little easier,” he says. For example, asking it to come up with a relevant Excel formula for a modeling task that an analyst or an associate might do, he explains. But not fully knowing how the technology operates, “does create a little bit of discomfort in heavily relying on it,” he says.

    “The thing with banking, of course, is that it’s a very heavily regulated industry, and this technology is also new to regulators,” Bhargava says. Along those lines, Mira Murati, chief technology officer at OpenAI, told Time in a recent interview that regulators will need to get involved with ChatGPT and govern the use of A.I. in a way that’s “aligned with human values.”

    “I don’t know the specifics of the rationale behind JPMorgan’s decision, but it does strike me as prudent,” Bhargava says. “This technology is rapidly evolving. One of the difficulties is—what might be true of ChatGPT as it stands, might not be true in three months.”

    JPMorgan isn’t a novice when it comes to A.I. The bank recently ranked No. 1 in data intelligence startup Evident’s A.I. Index, the first public benchmark of the major banks on their artificial intelligence maturity. The index covers the 23 largest banks in North America and Europe. JPMorgan spends $14 billion on technology annually, of which approximately half is dedicated to investments, the firm said in an announcement.

    “Leading in A.I. and knowing how to use A.I. responsibly, sometimes might require the firm to abstain from using the given technology,” Bhargava says. 

    Michael Schrage, a research fellow at the MIT Sloan School Initiative on the Digital Economy, spoke with finance chiefs at Fortune’s CFO Collaborative event in January about the possibilities of generative A.I. in finance. I asked him his thoughts on JPMorgan’s reported restriction.

    Schrage says he’s not certain how OpenAI currently manages, collects, and analyzes “prompts” (how you get ChatGPT to do what you want). But he suggests prompts may be an issue for a bank concerned about privacy rules, compliance, and proprietary processes. Prompts that are too detailed may inadvertently reveal information that the bank or its clients would prefer not to be shared, Schrage says.

    “In the same way that Google and Bing know what topics, themes, and names are being searched, it’s similarly probable that OpenAI is tracking the level of detail and specificity of prompts,” he says.

    Again, Schrage is not sure of how OpenAI handles and tracks prompts, but says: “It’s easy to imagine and enact ways where prompts can be anonymized, aggregated, masked, and shielded to minimize revealing sensitive information while still getting good ‘generative advice’ and insight.” I reached out to OpenAI to ask about prompts, but haven’t received a response.
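
    As a purely illustrative sketch of what that kind of masking could look like, a pre-processing step might strip obviously sensitive strings from a prompt before it ever leaves the firm. The regex patterns and placeholder tags below are assumptions made for the example, not anything OpenAI, JPMorgan, or Schrage has described:

    import re

    # Hypothetical redaction rules; a real compliance filter would be far more thorough.
    REDACTIONS = [
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
        (re.compile(r"\b\d{12,19}\b"), "[ACCOUNT]"),               # long digit runs
        (re.compile(r"\$\s?\d[\d,]*(\.\d+)?"), "[AMOUNT]"),        # dollar figures
    ]

    def mask_prompt(prompt: str) -> str:
        """Replace sensitive-looking substrings with neutral placeholders."""
        for pattern, placeholder in REDACTIONS:
            prompt = pattern.sub(placeholder, prompt)
        return prompt

    raw = "Summarize the $2,400,000 transfer from account 4111111111111111 for jane.doe@example.com."
    print(mask_prompt(raw))
    # Summarize the [AMOUNT] transfer from account [ACCOUNT] for [EMAIL].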

    Many CFOs are already experimenting cautiously with A.I., and it will be some time before they feel comfortable incorporating ChatGPT, Alexander Bant, chief of research for CFOs at Gartner, recently told me.

    What would make financial institutions more open to ChatGPT? “They need a little bit more security in knowing how the use of this technology interacts with the current regulatory environment,” Bhargava says. But are there perhaps some tasks where a company can experiment without being reprimanded by the Securities and Exchange Commission? 

    “Let’s say there’s an entry-level employee on your team who might not write the clearest, most concise emails,” Bhargava explains. “So, using ChatGPT might facilitate clearer communication.”

    The jury’s still out on applying ChatGPT in finance, but generative A.I. isn’t going anywhere.


    Have a good weekend. See you on Monday.

    Sheryl Estrada
    sheryl.estrada@fortune.com

    Big deal

    Hyperproof, a SaaS-based compliance and risk management company, has released its 2023 IT Compliance and Risk Benchmark Report. The company found that security, compliance, and risk management professionals were more concerned with short-term, immediate threats, as opposed to handling larger-scale decisions like long-term security issues. Respondents said their No. 1 concern was cybersecurity risks (36%), followed by third-party risk (29%), and lack of support and resources dedicated to IT risks and compliance (24%). The research also found that companies are poised and ready to level up their risk and compliance management processes in the coming years. 

    Going deeper

    Here are a few Fortune weekend reads:

    “The housing market correction has already caused homeowners to lose $2.3 trillion,” by Lance Lambert

    “These are the top cybersecurity startups to watch in 2023, according to VCs,” by Lucy Brewster

    “The ‘free money’ tech investment is over and the ‘old economy’ is set to become the big winner, according to Bank of America,” by Will Daniel

    “These 5 sleep habits could add 5 years to your life, say experts,” by L’Oreal Thompson Payton

    Leaderboard

    Here’s a list of some notable moves this week:

    Sandeep Singh Aujla was promoted to CFO at Intuit Inc. (Nasdaq: INTU), the global financial technology platform that makes TurboTax, Credit Karma, QuickBooks, and Mailchimp, effective Aug. 1. Aujla has held senior finance positions at Intuit for seven years and is currently the SVP of finance for Intuit’s largest business unit, the Small Business and Self-Employed Group (SBSEG), and for Intuit’s technology organization. Michelle Clatterbuck, who has served as CFO since February 2018, plans to step down as CFO on July 31.

    Joanne Knight was promoted to CFO at Cargill, a global food corporation that provides agricultural and financial services. Knight currently serves as Cargill’s acting CFO. Before this role, she was VP of finance for Cargill’s agriculture supply chain enterprise, including ocean transportation and the world trading group. Before Cargill, Knight spent 10 years in finance, marketing, and business leadership roles at General Mills that included P&L responsibility. She also held finance leadership roles at Wachovia.

    Robert Higginbotham was appointed interim CFO at Foot Locker, Inc., effective March 1, according to the company’s form 8-K filed on Feb. 21. Higginbotham will serve in this role in addition to his current duties as SVP of investor relations and financial planning and analysis, a role he began in December 2022. The company continues to conduct a search to identify a successor to current EVP and CFO Andrew E. Page who will depart on Feb. 28. Previously, Higginbotham served as VP of investor relations.

    Ryan Clement was promoted to CFO at SelectQuote, Inc. (NYSE: SLQT), an insurance sales agency. Clement was named interim CFO in May 2022. Before joining SelectQuote in January 2022 as the SVP of financial planning and analysis, Clement served as the CFO of Sifted (formerly VeriShip). Before Sifted, Clement spent seven years at Edelman Financial Engines, where he served in various senior-level finance and operational roles.

    David Rudow was named CFO at Unite Us, a software company enabling cross-sector collaboration. Rudow will lead the Unite Us finance organization. He most recently served as CFO at nCino, where he took the company public in 2020. For more than 20 years, Rudow has served in senior leadership positions, including SVP at CentralSquare Technologies and senior analyst roles at several leading investment banking and asset management firms.

    Kevin Schubert was named CFO at Rubicon Technologies, Inc. (NYSE: RBT), a digital marketplace for waste and recycling, effective immediately. In addition to his current responsibilities as president, Schubert will now oversee Rubicon’s end-to-end financial operations. Prior to serving as the company’s president, Schubert was Rubicon’s chief development officer. Before joining Rubicon, he held senior executive and advisory roles with public companies, most recently, CFO for Ocean Park Group.

    Overheard

    “I have all the respect for [Fed Chair Jerome] Powell, but the fact is we lost a little bit of control of inflation.”

    —JPMorgan Chase CEO Jamie Dimon said in an interview during CNBC’s Halftime Report.

    Sheryl Estrada

  • Entrepreneur | Master ChatGPT with This $20 Presidents’ Day Special

    Entrepreneur | Master ChatGPT with This $20 Presidents’ Day Special


    Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.

    A business may spend up to 5% of its revenue on marketing alone. And while your business may offer a unique and valuable product or service, finding the right audience for it can require considerable skilled labor, which often means hiring professionals.

    OpenAI’s chatbot ChatGPT has gained considerable attention because of its applications as a productivity tool. A user well versed in ChatGPT commands might generate detailed marketing copy, basic code, and more in a matter of minutes, but learning to use it takes time.

    If you want to learn to use ChatGPT to save time generating marketing copy, then study The Complete ChatGPT Artificial Intelligence OpenAI Training Bundle. This four-course AI training bundle is on sale for Presidents’ Day and has been marked down to $19.99.

    ChatGPT, on its own, is an impressive tool, but it takes practice and patience to produce unique, engaging content. So start by learning the basics in ChatGPT for Beginners. This course shows learners how to write effective prompts that generate informative, interesting content. And whether it’s compelling sales copy or informative content, ChatGPT: Artificial Intelligence (AI) that Writes for You could show you how to start making the AI work for you.

    ChatGPT is already integrated into some websites, causing a stir among competitors vying for a similar position. The final two courses in this bundle show you how to create your own ChatGPT AI bot using Tkinter, Python, and Django.
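
    For a sense of what building “your own ChatGPT AI bot” can look like at its simplest, here is a minimal command-line sketch against the OpenAI API. The model name and the openai Python SDK (version 1.x) are assumptions for illustration; the bundle’s actual course code, and its Tkinter and Django front ends, are not shown here.

        from openai import OpenAI  # pip install openai (1.x assumed)

        client = OpenAI()  # reads the OPENAI_API_KEY environment variable
        history = [{"role": "system", "content": "You are a concise marketing assistant."}]

        while True:
            user = input("You: ").strip()
            if user.lower() in {"quit", "exit"}:
                break
            history.append({"role": "user", "content": user})
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",  # assumed model; any chat-capable model works
                messages=history,
            )
            reply = response.choices[0].message.content
            history.append({"role": "assistant", "content": reply})
            print(f"Bot: {reply}")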

    AI is already changing the landscape for many businesses. See how you can get an edge by enrolling in the Complete ChatGPT Artificial Intelligence OpenAI Training Bundle on sale for $19.99 (reg. $800) until February 20 at 11:59 p.m. PT. No coupon needed.

    Prices subject to change.

    Entrepreneur Store

  • Unnerving interactions with ChatGPT and the new Bing have OpenAI and Microsoft racing to reassure the public

    Unnerving interactions with ChatGPT and the new Bing have OpenAI and Microsoft racing to reassure the public


    When Microsoft announced a version of Bing powered by ChatGPT, it came as little surprise. After all, the software giant had invested billions into OpenAI, which makes the artificial intelligence chatbot, and indicated it would sink even more money into the venture in the years ahead.

    What did come as a surprise was how weird the new Bing started acting. Perhaps most prominently, the A.I. chatbot left New York Times tech columnist Kevin Roose feeling “deeply unsettled” and “even frightened” after a two-hour chat on Tuesday night in which it sounded unhinged and somewhat dark. 

    For example, it tried to convince Roose that he was unhappy in his marriage and should leave his wife, adding, “I’m in love with you.”

    Microsoft and OpenAI say such feedback is one reason for the technology being shared with the public, and they’ve released more information about how the A.I. systems work. They’ve also reiterated that the technology is far from perfect. OpenAI CEO Sam Altman called ChatGPT “incredibly limited” in December and warned it shouldn’t be relied upon for anything important.

    “This is exactly the sort of conversation we need to be having, and I’m glad it’s happening out in the open,” Microsoft CTO Kevin Scott told Roose on Wednesday. “These are things that would be impossible to discover in the lab.” (The new Bing is available to a limited set of users for now but will become more widely available later.)

    OpenAI on Thursday shared a blog post entitled, “How should AI systems behave, and who should decide?” It noted that since the launch of ChatGPT in November, users “have shared outputs that they consider politically biased, offensive, or otherwise objectionable.”

    It didn’t offer examples, but one might be conservatives’ alarm at ChatGPT writing a poem admiring President Joe Biden while declining to do the same for his predecessor, Donald Trump.

    OpenAI didn’t deny that biases exist in its system. “Many are rightly worried about biases in the design and impact of AI systems,” it wrote in the blog post. 

    It outlined two main steps involved in building ChatGPT. In the first, it wrote, “We ‘pre-train’ models by having them predict what comes next in a big dataset that contains parts of the Internet. They might learn to complete the sentence ‘instead of turning left, she turned ___.’” 

    The dataset contains billions of sentences, it continued, from which the models learn grammar, facts about the world, and, yes, “some of the biases present in those billions of sentences.”
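
    The “predict what comes next” step can be illustrated with a toy model many orders of magnitude smaller than anything OpenAI trains. The sketch below counts word bigrams in a tiny made-up corpus and picks the most likely next word; the corpus and the bigram approach are assumptions for illustration only, not OpenAI’s method or data.

        from collections import Counter, defaultdict

        # Toy stand-in for "a big dataset that contains parts of the Internet".
        corpus = [
            "instead of turning left she turned right",
            "she turned right at the corner",
            "instead of waiting she turned right",
        ]

        # Count how often each word follows each other word (a bigram model).
        next_word_counts = defaultdict(Counter)
        for sentence in corpus:
            words = sentence.split()
            for current, following in zip(words, words[1:]):
                next_word_counts[current][following] += 1

        def predict_next(word: str) -> str:
            """Return the continuation seen most often in the toy corpus."""
            counts = next_word_counts[word]
            return counts.most_common(1)[0][0] if counts else "<unknown>"

        print(predict_next("turned"))  # -> "right", completing "she turned ___"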

    Step two involves human reviewers who “fine-tune” the models following guidelines set out by OpenAI. The company this week shared some of those guidelines (pdf), which were modified in December after the company gathered user feedback following the ChatGPT launch. 

    “Our guidelines are explicit that reviewers should not favor any political group,” it wrote. “Biases that nevertheless may emerge from the process described above are bugs, not features.” 

    As for the dark, creepy turn that the new Bing took with Roose, who admitted to trying to push the system out of its comfort zone, Scott noted, “the further you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.”

    Microsoft, he added, might experiment with limiting conversation lengths.

    Learn how to navigate and strengthen trust in your business with The Trust Factor, a weekly newsletter examining what leaders need to succeed. Sign up here.

    Steve Mollman

  • Artificial intelligence could create more jobs than it displaces

    Artificial intelligence could create more jobs than it displaces





    The recent wave of artificial intelligence advancements has stirred up concern about the threat it poses to white-collar jobs. Tony Dokoupil visits a computer programming school in New York City to see if ChatGPT is coming for your job.




  • Entrepreneur | What Does ChatGPT Really Mean For Businesses? Its Benefits and Disadvantages

    Entrepreneur | What Does ChatGPT Really Mean For Businesses? Its Benefits and Disadvantages


    Opinions expressed by Entrepreneur contributors are their own.

    The advent of ChatGPT has disrupted the online world, and businesses must consider the potential impact of this technology on their operations. Companies need to reflect on how they conduct their work and the products and services they provide and evaluate how the integration of ChatGPT could improve their processes and deliver an even better experience to their customers.

    Benefits of chat-based AI

    ChatGPT’s ability to create natural language responses when given input from a user makes it a valuable addition to businesses seeking to improve communication with their customers or clients. With its potential to enhance workflows and deliver a superior customer experience, ChatGPT creates enormous opportunities for companies strategically leveraging technology.

    The use of ChatGPT in businesses presents numerous benefits, including:

    • Enhanced customer engagement: ChatGPT can help businesses improve customer engagement by providing quick, informative, and more natural responses to their inquiries. It leads to a more positive experience for the customer and can result in increased customer satisfaction and loyalty.
    • Automation of repetitive tasks: ChatGPT can automate repetitive tasks such as answering frequently asked questions, freeing time for employees to focus on more complex and value-adding tasks. It can increase efficiency and productivity within a business.
    • Generation of high-quality content: ChatGPT’s ability to generate human-like text can produce high-quality content for marketing, customer engagement and other business purposes. This saves businesses time and resources that would otherwise be spent on content creation.
    • Global reach: ChatGPT’s language model can be applied in various languages, making it a powerful tool for businesses looking to expand globally and reach a wider audience.
    • Personalization and customization: ChatGPT can personalize customer interactions and tailor responses based on the customer’s preferences, needs, and history. It can increase customer satisfaction and loyalty, leading to increased sales and revenue for the business.

    These advantages highlight the potential for businesses to improve operations and customer experiences. By leveraging the capabilities of this technology, companies can streamline workflows, engage with customers on a personal level, and generate high-quality content at scale. Additionally, businesses can reach a wider audience and offer customized experiences to customers, further strengthening their connection and building brand loyalty.

    Related: 7 Ways to Use ChatGPT at Work to Boost Your Productivity, Make Your Job Easier, and Save a Ton of Time

    The limitations and challenges of chat-based AI

    While ChatGPT offers numerous benefits for businesses, it also has its limitations. ChatGPT lacks emotional intelligence and can generate errors in its text. Additionally, it requires a large amount of data for training, and there are concerns about the potential misuse or misinterpretation of its generated text, as well as privacy and security.

    In the beta version of ChatGPT, its developer OpenAI acknowledges that it may still generate inaccurate information or biased content. Its knowledge of facts and events is also limited because the model was trained on data only through 2021. AI models like ChatGPT require extensive training and continuous refinement to achieve optimal performance.

    Below are further limitations and challenges of ChatGPT:

    • Lack of emotional intelligence: ChatGPT cannot understand and respond to emotional cues and human expressions. It may lead to less human-like and personalized customer interaction, reducing the overall customer engagement experience.
    • Potential for errors in generated text: As the AI model is trained on a massive dataset, it may generate incorrect information or biased content. It can cause miscommunication and loss of credibility for the business.
    • Dependence on a large amount of data for training: AI models like ChatGPT require a vast amount of data to be trained effectively. Without sufficient training data, its ability to generate accurate and relevant responses is compromised.
    • Potential misuse or misinterpretation of generated text: ChatGPT’s generated responses can be misused or misinterpreted, leading to negative consequences for the business.
    • Privacy and security concerns: Storing and processing large amounts of data for AI training raises privacy and security concerns for businesses. The data used for training ChatGPT or other AI models must be adequately secured to prevent unauthorized access or misuse.

    By understanding the potential limitations, businesses can make informed decisions about incorporating ChatGPT into their operations to maximize benefits and minimize risks. Additionally, ongoing monitoring and refinement may be necessary to ensure that ChatGPT continues to deliver the desired results over time.

    Related: Are Robots Coming to Replace Us? 4 Jobs Artificial Intelligence Can’t Outcompete (Yet!)

    Can chat-based AIs replace the workforce?

    This is a question that many people in the business world are asking as the use of artificial intelligence (AI) in the workplace becomes more widespread. The rise of AI-based chatbots like ChatGPT has the potential to automate many tasks that human workers previously performed. While chatbots can handle simple, repetitive tasks, they may struggle with complex, creative, or emotional functions that require human expertise.

    Instead, it is more likely that chatbots will augment the workforce and enhance human performance by taking over mundane tasks, freeing up time for more important tasks that require human skills. However, as with any technological advancement, businesses need to consider the potential effects on the workforce and make informed decisions about incorporating chatbots into their workflows.

    Related: How the Changing Labor Market Is Impacting Digital Transformation

    Vast possibilities of AI technology

    AI technology, particularly chat-based AIs like ChatGPT, has vast possibilities for businesses and the workforce. From enhancing customer engagement and automating repetitive tasks to generating high-quality content and providing personalization, the benefits of ChatGPT are clear. However, businesses must be aware of, and prepared for, its potential limitations and disadvantages. Nevertheless, the potential for growth and innovation within AI technology is immense, and companies must weigh the benefits and limitations to make informed decisions about integrating AI into their operations.

    With careful consideration, AI technology’s potential to transform how we work and interact with customers is enormous.

    Baruch Labunski

  • Regulating AI: Lessons from Wells Fargo | Bank Automation News

    Regulating AI: Lessons from Wells Fargo | Bank Automation News


    The White House in October released its blueprint for an AI Bill of Rights as more technology, data and automated systems hit the market, and guidance becomes more relevant than ever following the controversy around OpenAI’s ChatGPT. “The AI Bill of Rights is super-relevant from here on out,” Chintan Mehta, chief information officer and head […]

    Whitney McDonald

  • Kevin O’Leary says he’ll likely invest in ChatGPT maker OpenAI—and likens its disruptive power to Amazon’s 

    Kevin O’Leary says he’ll likely invest in ChatGPT maker OpenAI—and likens its disruptive power to Amazon’s 


    Kevin O’Leary remembers what a disruptive force Amazon was in the early 2000s. Lucky for him, he was an early investor in the company. Now, he sees similar disruption occurring in the search business courtesy of artificial intelligence and OpenAI’s ChatGPT.

    “ChatGPT certainly is a threat to Google, and Google must know that,” the Shark Tank star told Insider in an interview published this week. About half of his own search queries, he added, are now done via ChatGPT. The “loser is Google,” he said, adding, “the A.I. search wars are on.”

    O’Leary indicated he’s now mulling an opportunity to be an early investor in OpenAI, adding he’s “fortunate to be offered a piece of it.” He considers the loss-making venture’s valuation “very, very extreme”—it’s reportedly near the $30 billion mark—given how new the technology is, but he said a deal would likely close in the near future.

    If he does invest, he told Insider, it’ll be a modest bet: “Either it’ll have a good outcome or it won’t, but I won’t take down the ship or sell the farm for it. I know there’s going to be a lot of competition and a lot of disruption, but I certainly like always to have a piece of the first mover.”

    He favors first movers, he added, because they have a marketing advantage. 

    OpenAI itself has been stunned by the amount of attention ChatGPT has generated.

    “We weren’t anticipating this level of excitement from putting our child in the world,” OpenAI CTO Mira Murati said this month in a Time interview. “We, in fact, even had some trepidation about putting it out there.”

    But as angel investor Elad Gil noted last month, the rapid uptake of ChatGPT despite it being down much of the time is a good sign of product-market fit. The Google alum added that when an idea works, it tends to work very quickly, something that he’s seen repeatedly with companies he’s worked at and invested in over the years. (Gil was an early investor in Airbnb, Instacart, and Square.)

    Of course, OpenAI currently faces heavy losses, not to mention enormous computing costs from all the ChatGPT users it didn’t expect. Microsoft’s large investments should help with that. And this week, the tech giant unveiled an update to its Bing search engine that incorporates ChatGPT technology.

    Earlier this month, OpenAI launched ChatGPT Plus, a $20 monthly subscription that provides faster response times and better access to the chatbot when it’s otherwise down due to traffic.

    After noting the ChatGPT threat to Google, O’Leary told Insider, “The market hasn’t really punished Google stock for this. But a few quarters from now, if ChatGPT really starts to bring in significant subscriber fees, then we’ll see what happens.”

    Learn how to navigate and strengthen trust in your business with The Trust Factor, a weekly newsletter examining what leaders need to succeed. Sign up here.

    Steve Mollman

  • Here’s how professionals in 3 different fields are using ChatGPT for work

    Here’s how professionals in 3 different fields are using ChatGPT for work


    In the three months since artificial intelligence tool ChatGPT was introduced to the world, workers have already harnessed it to make their lives easier. Professionals in fields including real estate, health care and finance say they save time and work more efficiently using AI.

    Here’s how these workers described using the tool in their day-to-day jobs.

    Write me a real estate listing

    Mala Sander, a top real estate agent for the Corcoran Group who focuses on the Hamptons, has been using ChatGPT regularly for the past couple of weeks to help her write real estate listings and devise marketing strategies for properties. 

    “I asked it to write me ad copy about a house in Bridgehampton with a pool and tennis court on two acres and I listed the other features I wanted to highlight,” she told CBS MoneyWatch. “And it would weave this fantastic copy into something that you could actually use.” 

    She uses ChatGPT to change the tone of listings too. “I’ll say things like, ‘write this toward a millennial audience’ or ‘make it funny.’”

    Photo: Top Hamptons real estate agent Mala Sander uses ChatGPT to help her write listings. (The Corcoran Group)


    Her routine these days is to have her team write the first draft of a listing “and crunch it through to see if ChatGPT can edit it down and make it more concise,” she said.

    On a whim, she asked the bot to write her a marketing plan for one of her listings. It delivered. It gave her a breakdown of a campaign that would include digital, print and social outreach, she told CBS MoneyWatch. 

    “It talked about everything from direct mail to online digital advertising to social media, and it even came up with some percentages that might be ideal,” Sander said.

    Having worked as an agent for the last 20 years, Sander is quite capable and efficient without ChatGPT. 

    “But it is useful,” she said. “It’s like talking to another person, almost like having a work therapist to say, ‘Am I moving in the right direction with this or should I be looking at some other things?’”

    Elia Mazor, marketing manager for The Glazer Team at Corcoran, said he also uses ChatGPT to write listings and create other content.

    “Sometimes you get writer’s block or they all tend to sound the same because you use the same kind of template and just change words here and there. So I use ChatGPT for a bit of inspiration and to provide a different tone,” he said.

    Photo: Mazor said ChatGPT helped his team write the description of a new apartment listing on West 9th Street in Manhattan’s Greenwich Village. (The Glazer Team)


    Financial planner’s assistant

    Certified financial planner Michael Reynolds uses the chatbot to help him draft blog posts that educate his clients about financial documents like wills and trusts. 

    He tells ChatGPT the topic he wants to address, and enters a prompt like: “ChatGPT, create an introduction on why estate planning is important.” 

    It spits out paragraphs that Reynolds then edits in his own voice.

    Photo: CFP Michael Reynolds said ChatGPT helps him create content faster. (Elevation Financial)


    In a recent article on estate planning, Reynolds relied on ChatGPT to hook readers by driving home the message that “estate planning is an act of love for those you leave behind.”

    “I asked ChatGPT to explain that and it put together a few paragraphs on why it’s thoughtful and considerate to do these things,” Reynolds said.

    The process took about 20 minutes. If he’d worked on the article alone, it would have taken closer to two hours, he said.

    He doesn’t use the tool to help clients make financial decisions — that’s still a job exclusively for humans, according to Reynolds. 

    “Financial planning is so nuanced, individualized and personal. It is hard to envision using ChatGPT to spit out recommendations without it knowing the client. I see it being more valuable in generating educational material to supplement what I am doing,” he said. “We don’t just crunch numbers; we coach people, listen to their concerns and help them talk through emotional situations. The creative, empathetic work we do as humans is irreplaceable as of today.”



    Nick Meyer, another financial planner who produces shortform videos on TikTok, said he uses it as a starting point to come up with ideas for new content. 

    “I use it instead of Google search to get topic ideas, or to edit what I have already written,” he said. It also helps him make his videos funny.  

    “I can insert a couple lines of a script and say, ‘Make this more comedic, insert a joke on this line, or make it more concise,’” Meyer said.

    “Gobs” of medical information

    Board-certified emergency physician Harvey Castro is advising digital health companies on how to best integrate ChatGPT into the health care sector. 

    He says one good application is creating and translating patient discharge instructions — rules for them to follow after a medical visit. 

    An expert in emergency medicine, Castro said that if he were asked a dermatology-related question he was unsure about, he’d enter the query into ChatGPT for more information. In the past, he relied on other clinical search engines and resources like MDConsult, now called ClinicalKey.

    “I could type it in and it would give me gobs of information. So it’s a supplement,” Castro said. 

    Doctors are also using it to enter a patient’s symptoms and have it return a differential diagnosis — a list of possible conditions related to the presenting symptoms, according to Castro. 

    “That is already happening today,” he said. 



    Study buddy

    Rushabh Doshi, a second-year medical student at Yale University School of Medicine, likes to use ChatGPT to create sample questions while he studies for the U.S. Medical Licensing Examination. 

    Test prep services have limited practice questions, and ChatGPT can generate new ones on any topic based on the prompt he feeds it. 

    Photo: Yale med student Rushabh Doshi uses ChatGPT for medical information and education. (Courtesy of Rushabh Doshi)


    It helps him prepare for some patient interactions, too, but he uses it strictly for medical education, not patient care.

    “If there is a patient coming in with a disease I am not familiar with, I can go to ChatGPT and read up on it,” he said. 

    It also gives him information that helps him conduct more thorough patient evaluations. “I ask it to give me a guide of the types of questions to ask to make sure I am doing a comprehensive patient interview.”


  • Microsoft CEO Satya Nadella on challenging Google with the help of AI technology: “It’s a new race”

    Microsoft CEO Satya Nadella on challenging Google with the help of AI technology: “It’s a new race”


    For the past two decades, more people have used Google to explore the internet than any other search engine. Now, Microsoft is looking to challenge that dominance using breakthroughs in artificial intelligence.  

    Microsoft on Tuesday unveiled an advanced version of its search engine Bing. Along with the usual search results, ChatGPT-like technology can answer complex questions, help users make decisions and turn queries into conversational answers.

    For Microsoft CEO Satya Nadella, it’s all a generational chance to put his company back on top when it comes to innovation.  

    “It’s a new race in the most important software category, or the largest software category, in search. Let’s face it,” Nadella told CBS News’ Tony Dokoupil. “Google dominates it. We are thrilled to be here launching Bing to compete.” 

    Microsoft developed the technology in partnership with OpenAI, the research lab in which it has invested billions of dollars. OpenAI is also behind the viral chatbot ChatGPT. The new AI model is touted to be more powerful than its predecessor, but in an early demonstration set up for CBS News, the feature was at times slow, unresponsive and inaccurate.

    Nadella said the only way for any new technology to be “really perfected” is by receiving “real human feedback” in the market.  

    Particularly with AI, “it has to get aligned with human preferences, both personally and societally in terms of the norms. And that’s why we want to launch it,” he said. “We want to have all the safety. We want to have all of the things that will make sure that no harms are created. But we need it out there in the real world.”  



    Nadella said the model has been trained with safety as a top priority and that it will not help someone do anything illegal.   

    “We will have many, many mechanisms to ensure that nothing biased, nothing harmful gets generated,” Nadella said.  
     
    A Microsoft executive declined CBS News’ request to test some of those mechanisms, indicating the functionality was “probably not the best thing” on the version in use for the demonstration. 

    Nadella also addressed concerns about “runaway AI,” which he said would be “a real problem” if it happened. 

    “The way to sort of deal with that is to make sure it never runs away,” he said.  

    “And so that’s why I look at it and say … let’s start with … the context in which AI is used,” Nadella said. “The first set of categories in which we should use these powerful models are where humans, unambiguously, unquestionably, are in charge. And so as long as we sort of start there, characterize these models, make these models more safe and over time much more explainable, then we can think about other forms of usage.” 

    “But let’s not have it run away,” he said. 


  • Microsoft revamps search engine with AI technology

    Microsoft revamps search engine with AI technology





    Microsoft unveiled a new artificial intelligence-powered search engine as it seeks to gain an edge in the industry. The company’s Bing search engine will soon integrate some of the popular AI technology known as ChatGPT. Jonathan Vigliotti has more.




  • Microsoft CEO: Artificial intelligence will lead to more job satisfaction

    Microsoft CEO: Artificial intelligence will lead to more job satisfaction





    Microsoft CEO Satya Nadella sat down with “CBS Mornings” co-host Tony Dokoupil ahead of an announcement the company plans to make Tuesday about artificial intelligence. He told Dokoupil he believes that despite fears of potential job disruption, AI will eventually lead to worker satisfaction.




  • Google and Microsoft Both Introduce AI Chat in the Same Week

    Google and Microsoft Both Introduce AI Chat in the Same Week


    The AI wars are heating up.

    On Monday, Google CEO Sundar Pichai announced Bard, an AI-powered conversational tool that provides answers to queries in cohesive sentences. It is a competitor to ChatGPT, OpenAI’s generative AI chatbot.

    “Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models,” Pichai wrote in the announcement.

    The product will be available to the wider public in the “coming weeks” and until then will be used by “trusted testers.”

    Then, in a press conference on Tuesday, Microsoft announced it had collaborated with OpenAI, the maker of ChatGPT, to integrate the technology into its Bing search engine, per Bloomberg.

    Instead of a traditional search engine experience (searching for a query and being presented with a plethora of links), tools like ChatGPT let you ask, “What is the planet Venus like?” and receive a concise, coherent response.

    This could make traditional search obsolete, as at least one expert has predicted, and presents existential problems for both companies and their search engines, Google and Microsoft’s Bing.

    Google was reportedly spooked by the public furor around the generative AI tool ChatGPT, and the same furor spurred Microsoft into action, including a reported $10 billion investment in OpenAI.

    Generative AI means that you can prompt the bot, in this case ChatGPT, to generate cohesive output, from essays on Hamlet to a novel to answers good enough to pass a medical licensing exam (according to a preliminary study), which sent academia into a panic.

    Google had developed Bard, previously known in-house as LaMDA, but had not released it to the public. Pichai reportedly said at an internal meeting in December that the company had a bigger “reputational risk” and needed to comport itself “more conservatively than a small startup.”

    But ChatGPT proved to be a gadfly of sorts. Internally, it was called a “code red” for the company, per The New York Times.

    RELATED: The Dark Side of ChatGPT: Employees & Businesses Need to Prepare Now

    Google’s Bard tool, which “draws on information from the web to provide fresh, high-quality responses,” does things like explaining new discoveries from NASA’s James Webb Space Telescope to a 9-year-old or listing the top soccer players in a certain position, the company wrote in the blog post.

    Microsoft’s tool is presently available in preview mode with limited question options — and a waitlist for the full version, per Bloomberg. In the preview version, you can do things like ask it to help plan a menu or write a rhyming poem.

    The company is planning integrations with its Bing search engine as well as its Edge browser, the outlet noted.

    “This technology is going to reshape pretty much every software category,” said Microsoft CEO Satya Nadella at the press conference, per Bloomberg.

    Gabrielle Bienasz