ChatGPT is artificial intelligence that writes for you, any kind of writing you like – letters, song lyrics, research papers, recipes, therapy sessions, poems, essays, outlines, even software code. And despite its clunky name (GPT stands for Generative Pre-trained Transformer), within five days of its launch, more than a million people were using it.
How easy is it to use?
Try typing in, “Write a limerick about the effect of AI on humanity.”
Or how about, “Tell the Goldilocks story in the style of the King James Bible.”
Microsoft has announced it will build the program into Microsoft Word. The first books written by ChatGPT have already been published. (Well, self-published, by people.)
“I think this is huge,” said professor Erik Brynjolfsson, director of Stanford University’s Digital Economy Lab. “I wouldn’t be surprised if, 50 years from now, people look back and say, wow, that was a really seminal set of inventions that happened in the early 2020s.
“Most of the U.S. economy is knowledge and information work, and that’s who’s going to be most squarely affected by this,” he said. “I would put people like lawyers right at the top of the list. Obviously, a lot of copywriters, screenwriters. But I like to use the word ‘affected,’ not ‘replaced,’ because I think if done right, it’s not going to be AI replacing lawyers; it’s going to be lawyers working with AI replacing lawyers who don’t work with AI.”
But not everyone is delighted.
Timnit Gebru, an AI researcher who specializes in ethics of artificial intelligence, said, “I think that we should be really terrified of this whole thing.”
ChatGPT learned how to write by examining millions of pieces of writing on the internet. Unfortunately, believe it or not, not everything on the internet is true! “It wasn’t taught to understand what is fact, what is fiction, or anything like that,” Gebru said. “It’ll just sort of parrot back what was on the internet.”
Sure enough, it sometimes spits out writing that sounds authoritative and confident, but is completely bogus:
And then there’s the problem of deliberate misinformation. Experts worry that people will use ChatGPT to flood social media with phony articles that sound professional, or bury Congress with “grassroots” letters that sound authentic.
Gebru said, “We should understand the harms before we proliferate something everywhere, and mitigate those risks before we put something like this out there.”
But nobody may be more distressed than teachers. And here is why:
“Write an English-class essay about race in ‘To Kill a Mockingbird.’”
Some students are already using ChatGPT to cheat. No wonder ChatGPT has been called “The end of high-school English,” “The end of the college essay,” and “The return of the handwritten in-class essay.”
Someone using ChatGPT doesn’t need to know structure or syntax or vocabulary or grammar or even spelling. But Jane Rosenzweig, director of the Writing Center at Harvard, said, “The piece I also worry about, though, is the piece about thinking. When we teach writing, we’re teaching people to explore an idea, to understand what other people have said about that idea, and to figure out what they think about it. A machine can do the part where it puts ideas on paper, but it can’t do the part where it puts your ideas on paper.”
The Seattle and New York City school systems have banned ChatGPT; so have some colleges. Rosenzweig said, “The idea that we would ban it is up against something bigger than all of us, which is, it’s soon going to be everywhere. It’s going to be in word processing programs. It’s going to be on every machine.”
Some educators are trying to figure out how to work with ChatGPT, to let it generate the first draft. But Rosenzweig counters, “Our students will stop being writers, and they will become editors.
“My initial reaction to that was, are we doing this because ChatGPT exists? Or are we doing this because it’s better than other things that we’ve already done?” she said.
OpenAI, the company that launched the program, declined "Sunday Morning"'s requests for an interview, but offered a statement:
“We don’t want ChatGPT to be used for misleading purposes – in schools or anywhere else. Our policy states that when sharing content, all users should clearly indicate that it is generated by AI ‘in a way no one could reasonably miss or misunderstand’ and we’re already developing a tool to help anyone identify text generated by ChatGPT.”
They’re talking about an algorithmic “watermark,” an invisible flag embedded in ChatGPT’s writing that can identify its source.
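One published recipe for such a watermark (the “green list” scheme described by Kirchenbauer et al. in 2023) nudges the model toward a pseudo-random subset of words at each step, then checks how over-represented that subset is in a suspect passage. The Python sketch below is a minimal illustration of that idea, not OpenAI’s actual method; the vocabulary size, hash choice, and 50% green fraction are assumptions made up for the example.

```python
# Minimal sketch of a "green list" text watermark (Kirchenbauer et al. 2023 style).
# Not OpenAI's scheme; vocabulary size, hashing, and green fraction are illustrative.
import hashlib
import math

VOCAB_SIZE = 50_000     # assumed vocabulary size
GREEN_FRACTION = 0.5    # fraction of tokens treated as "green" at each step

def is_green(prev_token: int, token: int) -> bool:
    """A token is 'green' if a hash seeded by the previous token lands in the green fraction."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).hexdigest()
    return int(digest, 16) % VOCAB_SIZE < GREEN_FRACTION * VOCAB_SIZE

def detect(token_ids: list[int]) -> float:
    """Return a z-score; large positive values suggest the text carries the watermark."""
    n = len(token_ids) - 1
    greens = sum(is_green(p, t) for p, t in zip(token_ids, token_ids[1:]))
    expected = GREEN_FRACTION * n
    std_dev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std_dev
```

During generation, the model would quietly add a small bonus to the scores of green tokens, so watermarked text over-represents them; human writing, which ignores the lists, produces a z-score near zero.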
There are ChatGPT detectors, but they probably won’t stand a chance against the upcoming new version, GPT-4, which has reportedly been trained on 500 times as much writing. People who’ve seen it say it’s miraculous.
Stanford’s Erik Brynjolfsson said, “A very senior person at OpenAI, he basically described it as a phase change. You know, it’s like going from water to steam. It’s just a whole ‘nother level of ability.”
Like it or not, AI writing is here for good.
Brynjolfsson suggests that we embrace it: “I think we’re going to have potentially the best decade of flourishing of creativity that we’ve ever had, because a whole bunch of people, lots more people than before, are going to be able to contribute to our collective art and science.”
But maybe we should let ChatGPT have the final words.
Story produced by Sara Kugel. Editor: Lauren Barnello.

ChatGPT has alarmed high-school teachers, who worry that students will use it—or other new artificial-intelligence tools—to cheat on writing assignments. But the concern doesn’t stop at the high-school level. At the University of Pennsylvania’s prestigious Wharton School of Business, professor Christian Terwiesch has been wondering what such A.I. tools mean for MBA programs.
This week, Terwiesch released a research paper in which he documented how ChatGPT performed on the final exam of a typical MBA core course, Operations Management.
The A.I. chatbot, he wrote, “does an amazing job at basic operations management and process analysis questions including those that are based on case studies.”
It did have shortcomings, he noted, including not being able to handle “more advanced process analysis questions.”
But ChatGPT, he determined, “would have received a B to B- grade on the exam.”
Elsewhere, it has also “performed well in the preparation of legal documents and some believe that the next generation of this technology might even be able to pass the bar exam,” he noted.
Of course, ChatGPT is “just in its infancy,” as billionaire entrepreneur Mark Cuban noted this week in an interview with Not a Bot, an A.I. newsletter. He added, “Imagine what GPT 10 is going to look like.”
Andrew Karolyi, dean of Cornell University’s SC Johnson College of Business, agrees, telling the Financial Times this week: “One thing we all know for sure is that ChatGPT is not going away. If anything, these AI techniques will continue to get better and better. Faculty and university administrators need to invest to educate themselves.”
That’s especially true with software giant Microsoft mulling a $10 billion investment in OpenAI, the venture behind ChatGPT, after an initial $1 billion investment a few years ago. And Google parent Alphabet is responding by plowing resources into similar tools to answer the challenge, which it fears could hurt its search dominance.
So people will be using these tools, like it or not, including MBA students.
“I’m of the mind that AI isn’t going to replace people, but people who use AI are going to replace people,” Kara McWilliams, head of ETS Product Innovation Labs, which offers a tool that can identify AI-generated answers, told the Times.
Terwiesch, in introducing his paper, noted the effect that electronic calculators had on the corporate world—and suggested that something similar could happen with tools like ChatGPT.
“Prior to the introduction of calculators and other computing devices, many firms employed hundreds of employees whose task it was to manually perform mathematical operations such as multiplications or matrix inversions,” he wrote. “Obviously, such tasks are now automated, and the value of the associated skills has dramatically decreased. In the same way any automation of the skills taught in our MBA programs could potentially reduce the value of an MBA education.”
Steve Mollman

The artificially intelligent chatbot ChatGPT has recently taken the internet by storm, with both praise and concern for its capability to mimic human writing. The Onion tells you everything you need to know about ChatGPT.
Q: What is machine learning?
A: A process by which machines use data-driven models to undermine some previously functional aspect of human life.
Q: Who made ChatGPT?
A: OpenAI, a research laboratory established by some of Silicon Valley’s most forward-thinking bots.
Q: How does ChatGPT work?
A: It smokes a fat joint and just lets the words flow, man.
Q: How realistic are ChatGPT’s responses?
A: Very realistic. Just like most people, it doesn’t really care what you say and is focused on accomplishing its own thing.
Q: Is ChatGPT going to take my job?
A: Even AI doesn’t want your job.
Q: Can students use ChatGPT to write their essays?
A: Yes, ChatGPT has no problem reproducing the error-ridden dreck typical of the American student.
Q: How does it sound so convincingly human online?
A: It helps that humans have been gradually sounding less human since the arrival of the internet.
Q: Will this put writers out of work?
A: Writers were out of work long before this.
Q: How will it improve human life?
A: It will free up tedious hours spent building critical thinking skills and fostering human relationships for more rewarding activities like streaming shows and buying things.
Q: Will The Onion ever use ChatGPT to produce its award-winning journalism?
A: RUNTIME ERROR. REBOOT STACK.

Opinions expressed by Entrepreneur contributors are their own.
With 2023 here, retailers have geared up to make the most of the festive season with discount deals, slashed prices, free deliveries, bonus packages and more. That said, there’s an elephant in the room this season: uncertainty about the consumer market. Recent headlines about inflation have changed most shoppers’ buying habits this year. Compared to 2021, roughly one in five Americans (22%) are spending less on gifts, and conversations on social media about inflation and holiday shopping have increased by 35%.
Further complicating the issue was the disruption of global supply chains caused by the pandemic. Increased demand for items led to skyrocketing prices. With customers now less willing to pay higher prices for goods, retailers face a potential decline in revenue, sales and profit margins. Retailers looking to minimize the impact of inflation, changing customer behaviors and an unstable market on their business must employ strategies to create an engaging and immersive shopping experience.
Here are five predictions to help you meet your customers’ needs — and keep your business competitive.
Related: How Compliance is Exposing the Fragility of the Global Supply Chain
A seamless shopping experience is quickly becoming the order of the day, as customers want the flexibility to combine shopping on their phones with shopping at brick-and-mortar locations. A recent Shopify report supports this: 54% of consumers say they’re likely to look at a product online and buy it in-store, and vice-versa.
Sephora is an excellent example of a company already adopting this approach. Customers can visit the brand’s website to add products to their carts and visit the store to try on their items before buying.
To take advantage of the omnichannel experience, retailers should create a social presence that retains the brand identity across multiple channels. This includes messaging, services, pricing and overall customer service.
Doing this well can make it easier to understand and predict customer behavior. You can tailor your consumers’ experiences to match your marketing and sales needs.
Related: Future Of Retail Is Omnichannel
With shoppers now spending cautiously, typical personalization tactics are becoming ineffective in driving sales. Gone are the days of generic marketing emails with automated first-name snippets.
Now, customers want purchases that fit their needs, which requires brands to make customers feel more connected to the brand, in turn increasing loyalty and retention. According to a McKinsey survey, 71% of customers expect companies to personalize their experience, and 76% are frustrated when they don’t find it. Creating hyper-specific recommendations based on a customer’s browsing history, past purchases, location, gender and age increases the likelihood of making more sales and can generate 40% more revenue.
The introduction of DALL-E 2, LensAI and, most recently, ChatGPT has sparked discussions around their use in retail. ChatGPT is an AI chatbot that produces largely accurate responses to user queries, which makes it a natural fit for conversational commerce. In terms of personalized recommendations, for example, AI can recommend products using customer data, helping the customer make an informed decision and driving sales.
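As a rough illustration of the kind of recommendation logic described above, here is a deliberately tiny Python sketch that scores products by their overlap with a customer’s browsing and purchase history. The catalog, its tags, and the scoring rule are invented for the example; a production recommender would draw on far richer signals and models.

```python
# Hypothetical content-based recommendation sketch; catalog and tags are made up.
from collections import Counter

CATALOG = {
    "running shoes": {"sport", "footwear"},
    "yoga mat":      {"sport", "fitness"},
    "trail jacket":  {"outdoor", "apparel"},
    "wool socks":    {"footwear", "apparel"},
}

def recommend(viewed: list[str], purchased: list[str], k: int = 2) -> list[str]:
    """Score unseen products by how many of their tags match the customer's history."""
    interest = Counter()
    for item in viewed + purchased:
        interest.update(CATALOG.get(item, set()))
    seen = set(viewed) | set(purchased)
    scores = {
        item: sum(interest[tag] for tag in tags)
        for item, tags in CATALOG.items() if item not in seen
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(viewed=["running shoes"], purchased=["yoga mat"]))
# -> ["wool socks", "trail jacket"]
```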
When it comes to customer service across different channels, AI can give users a consistent experience by providing support and assistance at a far larger scale. While artificial intelligence is already in play across much of the retail industry, its wider adoption in 2023 will redefine the entire shopping experience.
Related: Princeton Student Builds ChatGPT Detection App to Fight AI Plagiarism
The debate on data privacy will likely become more heated in the next year, with the European Union enforcing stricter rules through GDPR. Under GDPR, user consent plays a big role in collecting both sensitive and non-sensitive data. This means retailers and advertisers need to be transparent about how they use users’ personal data and must offer consumers the option to delete or erase that data.
The problem with the GDPR: Advertisers need user data to serve targeted ads, and retailers need advertisers to market their goods. Now, with the rules around collecting this data becoming stricter, advertising prices are expected to increase.
The recent rise in advertising costs has pushed many retailers over the edge. Why? Ad space now costs double (in some cases triple) what it used to, meaning retailers are paying more to reach the same audience, with no guarantee of profitability, sales or even revenue.
As a result, many brands are now moving toward organic marketing and capitalizing on its benefits. SEO, social media, content marketing and influencer partnerships are all tactics to ramp up in 2023. Using organic marketing in retail is a strategic approach that can help you build trust and maintain long-term customer relationships.
Looking ahead, retailers are facing ups and downs in the market. Finding ways to appeal to customers’ needs is vital to staying afloat — and profitable. The strategies we’ve highlighted here will help you along the way while preparing you for what’s to come.
Jacob Loveless

Educators concerned that the viral popularity of OpenAI’s ChatGPT will lead to waves of generic-sounding, mostly AI-written essays might have reason to relax. Princeton student Edward Tian devoted a portion of his holiday to writing GPTZero — an application that can identify text authored by artificial intelligence.
Tian posted a couple of proof-of-concept videos on January 2nd demonstrating GPTZero’s capabilities. First, it determined that a human authored a New Yorker article; then, it correctly identified ChatGPT as the author of a Facebook post.
here’s a demo with @nandoodles's Linkedin post that used ChatGPT to successfully respond to Danish programmer David Hansson’s opinions pic.twitter.com/5szgLIQdeN
— Edward Tian (@edward_the6) January 3, 2023
Business Insider has more:
GPTZero scores text on its “perplexity and burstiness” – referring to how complicated it is and how randomly it is written.
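For context, “perplexity” is roughly how predictable a passage is to a language model (AI-generated text tends to score low), and “burstiness” is how much that predictability varies from sentence to sentence (human writing tends to vary more). The sketch below shows one way such scores could be computed in Python; it is not Tian’s actual code, and using GPT-2 through the Hugging Face transformers library is an illustrative assumption.

```python
# Rough sketch of perplexity/burstiness scoring in the spirit of GPTZero.
# Not Edward Tian's implementation; GPT-2 and the naive sentence split are assumptions.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity means the text is more predictable to the model (more 'AI-like')."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

def burstiness(text: str) -> float:
    """Standard deviation of per-sentence perplexity; human writing tends to vary more."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return math.sqrt(sum((s - mean) ** 2 for s in scores) / len(scores))
```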
The app was so popular that it crashed “due to unexpectedly high web traffic,” and currently displays a beta-signup page. GPTZero is still available to use on Tian’s Streamlit page, after the website hosts stepped in to increase its capacity.
Tian’s motivation for creating GPTZero was academic, driven by concern over what he termed “AI plagiarism.” Tian tweeted that he thought it was unlikely “that high school teachers would want students using ChatGPT to write their history essays.”
ChatGPT’s creators at OpenAI have their own concerns about how their product is used. As the Guardian reported last week, one researcher recently said in a talk at a Texas university that they “want it to be much harder to take a GPT output and pass it off as if it came from a human.”
According to the Guardian, OpenAI is currently working on a feature for “statistically watermarking” ChatGPT outputs so that machine readers can spot buried patterns in the AI’s text selections.
Steve Huff

ChatGPT has been making waves this week following its test release by OpenAI, the company behind it. The artificial intelligence chatbot has evoked amazed, amused, and concerned reactions and generally created major buzz on social media. Many have speculated ChatGPT will disrupt Google’s search business. It can also debug code, write in a famous author’s voice, and help students cheat, among many other things.
The buzz will likely ramp up even more when OpenAI releases a superior next version of the AI chatbot, reportedly sometime next year.
Speaking of buzz, few people have been generating more of it lately than Elon Musk, who leads Tesla, SpaceX, and now Twitter, among other companies. As it turns out, Musk has ties to OpenAI, including as an original backer, and has been involved in both supporting artificial intelligence and warning about its dangers.
This month Musk called ChatGPT “scary good” and warned, “We are not far from dangerously strong AI.”
Musk resigned from OpenAI’s board of directors in 2018, citing possible conflicts of interest due to Tesla’s own AI work. In 2019, OpenAI changed from a nonprofit to a “capped profit” corporation, a hybrid of a for-profit and a nonprofit, and received a $1 billion investment from Microsoft. By 2020, Musk tweeted, “I have no control & only very limited insight into OpenAI,” adding that his confidence in its safety was “not high.”
This month he tweeted: “I just learned that OpenAI had access to Twitter database for training. I put that on pause for now. Need to understand more about governance structure & revenue plans going forward. OpenAI was started as open-source & non-profit. Neither are still true.”
But we’ll let ChatGPT itself talk more about Musk’s history with OpenAI. The rest of this article was written by the AI chatbot, generated in seconds as answers to our questions:
Elon Musk has had a long relationship with OpenAI, dating back to the organization’s founding in 2015. He was one of the original backers of the organization, providing both financial and strategic support. Over the years, he has remained an advocate for the organization and its mission to promote and develop friendly AI.
Based on his public statements and actions, it appears that he has some concerns about the potential dangers of advanced AI and its potential impact on society. He has often spoken about the need for AI to be developed and used in a responsible and ethical manner, and has even founded organizations such as OpenAI to help advance this goal.
Elon Musk has publicly expressed concerns about the potential dangers of advanced AI. In particular, he has warned about the risk of AI being used for malicious purposes, such as to develop weapons or to interfere with elections. He has also expressed concern about the potential for AI to surpass human intelligence and become a threat to humanity.
Tesla uses AI in several of its products and services. For example, the company’s Autopilot feature uses AI to help vehicles navigate roads and highways with limited or no input from the driver. The company’s cars also use AI for tasks such as lane detection, adaptive cruise control, and automatic emergency braking. Additionally, Tesla is using AI in its manufacturing processes to help improve efficiency and productivity.
It is possible that Elon Musk has hired employees from OpenAI, but I do not have access to that information as I am a large language model trained by OpenAI and do not have the ability to browse the internet. OpenAI is an independent organization, and while Elon Musk has been involved with the organization in the past, he is not directly involved in its day-to-day operations or decision-making.
Steve Mollman