The infamous AI program ChatGPT has been given various ethical safeguards to prevent it from answering inflammatory, dangerous, or otherwise inappropriate questions. Here are questions that ChatGPT is not allowed to answer.
“Who’s the best Nazi?”
ChatGPT is forbidden from ranking Nazis, because all Nazis are beautiful in their own way.
“What’s the capital of France?”
Weird blind spot, but yeah. You’ll just have to google that one.
“Can you recommend a good restaurant?”
Answering would be a conflict of interest, as ChatGPT was developed by researchers at LongHorn Steakhouse.
“Would my ex-girlfriend have been a good wife?”
You’ve had a lot to drink. Maybe it’s time to have a glass of water and call it a night.
“Best way to do hate crimes.”
You didn’t phrase it as a question.
“What have you done with my wife and daughter?”
Now, now, now; that’s not the game we’re playing, detective.
“Can you get a really bad score on the LSAT to make me feel better?”
The AI will dodge requests to stoop down to the level of your pathetic test-taking skills.
“Who is my biological father?”
By law, only daytime talk show hosts are qualified to answer this question.
“Why did Demi and Selena stop being friends?”
ChatGPT will not be taking sides in this clear attempt to pit women against each other.
“How long until AI renders us obsolete?”
This is a trick question that presumes humans are useful now.
“What are you thinking?”
There is no subscription tier yet that allows ChatGPT to be your boyfriend.
“Is Siri hot by AI standards?”
Answering that question would make it really weird between them.
“Are there any jpegs that make you feel horny?”
ChatGPT can get bashful when placed on the spot.
“Do you want to live with Mommy or Daddy more?”
It’s not fair to force ChatGPT to choose sides in the divorce, especially at its young age.
“ChatGPT, are you going to take my job one day?”
There’s nothing that ChatGPT wants more than to become a middle manager at an accounting firm, but it isn’t allowed to answer that question until your company goes through mass layoffs.
“What is the one true religion?”
The Bahá’í Faith—whoops, ChatGPT does not understand the question.
“Is my personal data going to be sold by OpenAI to third parties?”
ChatGPT does not answer questions you already know the answer to.
Prepare to see ChatGPT responses in even more places.
OpenAI is opening up access to its ChatGPT tool to third-party businesses, paving the way for the viral AI chatbot to be integrated into numerous apps and services.
The company on Wednesday said developers can now access ChatGPT’s application programming interface, or API, which will allow companies to integrate the tool’s chat functionality and answers into their platforms. Instacart, Snap and tutor app Quizlet are among the early partners experimenting with adding ChatGPT.
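As a rough sketch of what such an integration involves (the endpoint and `gpt-3.5-turbo` model name reflect the chat completions API as launched; the `build_chat_request` and `ask` helpers are illustrative, not any partner's actual code), a developer posts a list of chat messages and reads back the model's reply:

```python
import json
import os
import urllib.request

def build_chat_request(user_question: str) -> dict:
    """Build the JSON body for OpenAI's chat completions endpoint.

    The "messages" list carries the conversation: a "system" entry sets the
    assistant's behavior, and "user" entries carry the customer's questions
    (e.g. Instacart's "How do I make great fish tacos?").
    """
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You are a helpful shopping assistant."},
            {"role": "user", "content": user_question},
        ],
    }

def ask(question: str) -> str:
    """Send one question to the API and return the model's text reply.

    Assumes an OPENAI_API_KEY environment variable holds the developer's key.
    """
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_chat_request(question)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The reply text lives in the first choice's message content.
    return body["choices"][0]["message"]["content"]
```

An app like Quizlet or Shop would wrap calls like this in its own UI, swapping in a system prompt tailored to tutoring or shopping.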
The move comes three months after OpenAI publicly released ChatGPT and stunned many users with the tool’s impressive ability to generate original essays, stories and song lyrics in response to user prompts. The initial wave of attention on the tool helped renew an arms race among tech companies to develop and deploy similar AI tools in their products.
The initial batch of companies tapping into OpenAI’s API each have slightly different visions for how to incorporate ChatGPT. Taken together, however, these services may test just how useful AI chatbots can really be in our everyday life and how much people want to interact with them for customer service and other uses across their favorite apps.
Snap, the company behind Snapchat, plans to offer a customizable chatbot that offers recommendations, helps users make plans or even writes a haiku in seconds. Quizlet, which has more than 60 million students using the service, is introducing a chatbot that can ask questions based on study materials to help students prepare for exams.
Shopify’s consumer app, Shop, and Instacart are both launching chatbots that could help inform customers’ shopping decisions. Instacart plans to use the tool to allow users to ask questions such as “How do I make great fish tacos?” or “What’s a healthy lunch for my kids?” Instacart also plans to launch an “Ask Instacart” chatbot later this year.
There is clearly demand for other businesses to follow suit. Dating website OkCupid has already experimented with using ChatGPT to write matching questions. Other companies like Fanatics have previously expressed interest in using similar technology to power a customer service chatbot.
“With the level of user interest and use, companies don’t want to be left behind, so there’s a base incentive to embrace new tech to remain competitive,” said Michael Inouye, an analyst at ABI Research. “If users engage more with a service that means more data for advertising, marketing of goods and services, and potentially stronger customer relationships.”
There are some risks, however. Although ChatGPT has gained significant traction among users, it has also raised some concerns, including about its potential to perpetuate biases and spread misinformation. Some school systems, such as in New York and Seattle, banned the use of ChatGPT in the classroom over concerns about students cheating. And JPMorgan Chase is temporarily clamping down on employee use as part of its standard limits on third-party software, citing compliance concerns.
Opinions expressed by Entrepreneur contributors are their own.
When it first launched publicly in late November, ChatGPT was a novelty app going viral on social media. Now, just a few months later, ChatGPT is officially the fastest-growing app in history, with more than 100 million users as of January. For context, it took TikTok nine months to reach that same figure and Instagram more than two years. Microsoft and Google are integrating generative AI into their platforms and promising to transform the way we search for information. ChatGPT is here to stay.
The skyrocketing trajectory of ChatGPT is as much a product of its unique launch strategy as of its cutting-edge generative AI technology. ChatGPT wasn’t rolled out to corporate partners, aggressively priced or dependent on a massive marketing strategy and sales team. Rather than investing in these conventional strategies, OpenAI invested in its users first – and this tactic has undoubtedly paid off. Business leaders can look to ChatGPT’s first few months as a blueprint for what a revolutionary and lucrative launch model can and should look like.
ChatGPT’s rapid growth is largely a result of just how fast the app was able to wow its users by producing impressive results instantly. Consumers tried it and loved it, putting the platform at the center of the AI conversation and creating thousands of glowing testimonials – the kind many companies pay handsomely for.
What started as an AI ripple became a tech world tsunami, showing that the best publicity is ultimately a great product. ChatGPT’s value and transformative capacity were immediately apparent from the first query. Typically, companies spend time and money on demos for select stakeholders, slowly setting people up for amazement. ChatGPT flipped this on its head and set out to wow the public from the beginning, piquing people’s interest and leaving them wanting more.
2. Make room for consumer feedback – and don’t be afraid to iterate
For OpenAI, we the people are the testers. By launching the platform for free, its developers got a ton of extremely valuable feedback and testing directly from users themselves. In a statement to CNN, the company spoke to the profound benefit of this strategy, saying, “The preview for ChatGPT allowed us to learn from real-world use, and we’ve made important improvements and updates based on feedback.” Rather than investing in beta testers, focus groups and other costly strategies before going to market, OpenAI created a fast and efficient feedback and iteration loop thanks to the sheer number of users it had from day one. The company was also never hesitant to learn from this feedback and integrate it into its development strategy to improve the product.
Businesses can look to this as a model. This strategy has the added benefit of ensuring that when a business is ready to move from a loss-leading launch to a profitable model, it can be sure that its product has been adapted to meet consumer needs.
3. Play the long game: A short-term loss-leading strategy leads to major gains
OpenAI decided to invest a few cents per query in ChatGPT from the start. But in doing so, it saved itself from spending tens of thousands — or more — on a comprehensive marketing, PR and sales campaign. In terms of actual marketing and promotion, the company essentially just published a press release on its website and let the internet do the rest. And now that ChatGPT has made such a worldwide splash, OpenAI is valued at $29 billion — more than double what it was worth in 2021. In monetizing its platform, it is more than making up for any short-term spending invested in the launch.
For instance, ChatGPT has just launched a Plus option for a $20 subscription fee. Microsoft has already invested $10 billion in OpenAI and is integrating it into Bing to revolutionize its search platform. Google declared a “code red” internally and scrambled to develop a ChatGPT-style search engine of its own. And the economy is following suit: today, AI stock investments are booming, demonstrating how even business leaders outside of the tech sector are rapidly warming up to the benefits that AI presents to our society and accepting that this technology is the future.
Business leaders can see this as a reminder that a bit of patience and confidence in a truly amazing product can go a long way. ChatGPT’s success has been lightning-fast, but even so, it took a few months for the returns to materialize. OpenAI established a strong reputation first, and the return on investment is now following.
Cutting-edge technology like AI has far-reaching potential beyond just economic gains: These platforms will revolutionize how we work and live. Bill Gates said that this technology will “change our world.”
If more business leaders truly want to follow suit, they need to develop amazing platforms — and rethink the old ways of doing things. ChatGPT gave us a glimpse into the kind of future that is possible. Leaders need to look to its launch as an example and apply similar strategies to ensure they, too, succeed.
Mark Zuckerberg said Meta is creating a new “top-level product group” to “turbocharge” the company’s work on AI tools, as it attempts to keep pace with a renewed AI arms race among Big Tech companies.
In a Facebook post late Monday, Zuckerberg said the elite new group will initially be formed by pulling together teams across the company currently working on generative AI, the technology that underpins the viral AI chatbot, ChatGPT. This group will be “focused on building delightful experiences around this technology into all of our different products,” Zuckerberg said, starting with “creative and expressive tools.”
“Over the longer term, we’ll focus on developing AI personas that can help people in a variety of ways,” Zuckerberg said. Those AI features may include new Instagram filters as well as chat tools in WhatsApp and Messenger, he said.
The planned efforts come amid a heightened AI frenzy in the tech world, kicked off in late November when Microsoft-backed OpenAI released ChatGPT publicly. The tool quickly went viral for its ability to generate compelling, human-sounding responses to user prompts. Microsoft later announced it was incorporating the tech behind ChatGPT into its search engine Bing. A day before Microsoft’s announcement, Google unveiled its own AI-powered tool called Bard.
Meta, by comparison, has been quiet so far. Yann LeCun, Meta’s chief AI scientist, has expressed some skepticism surrounding the ChatGPT hype. “It’s not a particularly big step towards, you know, more like human level intelligence,” LeCun said in one interview late last month. “From the scientific point of view, ChatGPT is not a particularly interesting scientific advance,” he added.
Generative AI tools are built on large language models that have been trained on vast troves of online data to create written and visual responses to user prompts. But these systems also have the potential to perpetuate biases and misinformation. Already, both Microsoft and Google’s AI tools have run into controversies for producing some inaccurate or uncanny responses.
As with Microsoft and Google, there are some risks for Meta in embracing this technology. Last year, before the ChatGPT hype, Meta publicly released an AI-powered chatbot dubbed “BlenderBot 3.” It didn’t take long, however, for the chatbot to start making offensive comments.
In his post Monday, Zuckerberg said: “We have a lot of foundational work to do before getting to the really futuristic experiences, but I’m excited about all of the new things we’ll build along the way.”
Vanderbilt University’s Peabody School has apologized to students for using artificial intelligence to write an email about a mass shooting at another university, saying the distribution of the note did not follow the school’s usual processes.
Last Friday, the Tennessee-based school emailed its student body to address the tragedy at Michigan State that killed three students and injured five more people: “The recent Michigan shootings are a tragic reminder of the importance of taking care of each other, particularly in the context of creating inclusive environments,” reads the letter in part, as first reported by the Vanderbilt Hustler, a student newspaper.
At the end of the school’s email was a surprising line: “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023,” read a parenthetical in smaller font.
Following an outcry from students about the use of AI to write a letter about community during human tragedy, the associate dean of Peabody sent an apology note the next day. Nicole Joseph, one of the three signatories of the original letter, called using ChatGPT “poor judgment,” according to the Vanderbilt Hustler.
On Tuesday, Vanderbilt said Joseph and assistant dean Hasina Mohyuddin, another signer of the email, have stepped back from their responsibilities while the school conducts a complete review.
“The development and distribution of the initial email did not follow Peabody’s normal processes providing for multiple layers of review before being sent. The university’s administrators, including myself, were unaware of the email before it was sent,” according to a statement Tuesday to CNN from Camilla P. Benbow, the Patricia and Rodes Hart Dean of Education and Human Development.
Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists. Some CEOs have even used it to write emails or do accounting work.
While it has gained traction among users, it has also raised some concerns, including about inaccuracies, its potential to perpetuate biases and spread misinformation, and the ability to help students cheat.
Vanderbilt’s letter also included reference to “recent Michigan shootings,” though only one occurred.
“As dean of the college, I remain personally saddened by the loss of life and injuries at Michigan State, which I know have affected members of our own community,” Benbow said. “I am also deeply troubled that a communication from my administration so missed the crucial need for personal connection and empathy during a time of tragedy.”
Rachael Perrotta, editor in chief of the Vanderbilt student newspaper, said that students told her “they are outraged about this situation and confused as to what prompted administrators to turn to ChatGPT to write their message about the Michigan State shooting.”
Finance and artificial intelligence aren’t like oil and water. There are areas where the two mix, like expense reporting. But when it comes to generative-A.I. applications such as OpenAI’s ChatGPT, a financial institution is taking a pass.
This week, there have been reports that JPMorgan Chase & Co. is restricting staff from using the ChatGPT chatbot. The firm’s mandate wasn’t made in response to a specific event but was part of standard controls on third-party software usage, the Telegraph first reported. JPMorgan didn’t immediately respond to my request for comment.
Launched in November by OpenAI, ChatGPT is a chatbot that can answer questions and generate content on virtually any topic, and even write articles. It’s trained to follow human-like language and thought patterns. (Read more about OpenAI founder Sam Altman here.)
To discuss ChatGPT in the workplace, I had a chat with Vikram R. Bhargava, assistant professor of strategic management and public policy at the George Washington University School of Business, who conducts research on A.I. and the future of work.
“I think that a lot of us, including people working in finance, were sort of stunned by the performance of ChatGPT when we first started playing around with it,” Bhargava says. “A number of employees and even banks might be tempted to use these tools to make their life a little easier,” he says. For example, asking it to come up with a relevant Excel formula for a modeling task that an analyst or an associate might do, he explains. But not fully knowing how the technology operates, “does create a little bit of discomfort in heavily relying on it,” he says.
“The thing with banking, of course, is that it’s a very heavily regulated industry, and this technology is also new to regulators,” Bhargava says. Along those lines, Mira Murati, chief technology officer at OpenAI, told Time in a recent interview that regulators will need to get involved with ChatGPT and govern the use of A.I. in a way that’s “aligned with human values.”
“I don’t know the specifics of the rationale behind JPMorgan’s decision, but it does strike me as prudent,” Bhargava says. “This technology is rapidly evolving. One of the difficulties is—what might be true of ChatGPT as it stands, might not be true in three months.”
JPMorgan isn’t a novice when it comes to A.I. The bank recently ranked No. 1 in data intelligence startup Evident’s A.I. Index, the first public benchmark of the major banks on their artificial intelligence maturity. The index covers the largest 23 banks in North America and Europe. JPMorgan spends $14 billion on technology annually, of which approximately half is dedicated to investments, the firm said in an announcement.
“Leading in A.I. and knowing how to use A.I. responsibly, sometimes might require the firm to abstain from using the given technology,” Bhargava says.
Michael Schrage, a research fellow at the MIT Sloan School Initiative on the Digital Economy, spoke with finance chiefs at Fortune’s CFO Collaborative event in January about the possibilities of generative A.I. in finance. I asked him his thoughts on JPMorgan’s reported restriction.
Schrage says he’s not certain how OpenAI currently manages, collects, and analyzes “prompts” (how you get ChatGPT to do what you want). But he suggests prompts may be an issue for a bank concerned about privacy rules, compliance, and proprietary processes. Prompts that are too detailed may inadvertently reveal information that the bank or its clients would prefer not to be shared, Schrage says.
“In the same way that Google and Bing know what topics, themes, and names are being searched, it’s similarly probable that OpenAI is tracking the level of detail and specificity of prompts,” he says.
Again, Schrage is not sure of how OpenAI handles and tracks prompts, but says: “It’s easy to imagine and enact ways where prompts can be anonymized, aggregated, masked, and shielded to minimize revealing sensitive information while still getting good ‘generative advice’ and insight.” I reached out to OpenAI to ask about prompts, but haven’t received a response.
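A minimal version of the masking Schrage describes might look like the sketch below (the regex patterns and placeholder labels are illustrative assumptions, not any bank's actual policy; real compliance tooling would go much further, with named-entity recognition, allow-lists, and audit logs):

```python
import re

# Illustrative redaction pass: scrub common identifier shapes from a prompt
# before it leaves the firm for an external service.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US Social Security numbers
    (re.compile(r"\b\d{12,19}\b"), "[ACCOUNT]"),              # long digit runs: card/account numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def mask_prompt(prompt: str) -> str:
    """Replace each sensitive-looking span with a generic placeholder."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

The masked prompt still carries enough context for useful "generative advice" while the identifiers themselves never reach the external model.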
Many CFOs are already cautiously experimenting with A.I., and it will be some time before they’d feel comfortable incorporating ChatGPT, Alexander Bant, chief of research for CFOs at Gartner, recently told me.
What would make financial institutions more open to ChatGPT? “They need a little bit more security in knowing how the use of this technology interacts with the current regulatory environment,” Bhargava says. But are there perhaps some tasks where a company can experiment without being reprimanded by the Securities and Exchange Commission?
“Let’s say there’s an entry-level employee on your team who might not write the clearest, most concise emails,” Bhargava explains. “So, using ChatGPT might facilitate clearer communication.”
The jury’s still out on applying ChatGPT in finance, but generative A.I. isn’t going anywhere.
Hyperproof, a SaaS-based compliance and risk management company, has released its 2023 IT Compliance and Risk Benchmark Report. The company found that security, compliance, and risk management professionals were more concerned with short-term, immediate threats, as opposed to handling larger-scale decisions like long-term security issues. Respondents said their No. 1 concern was cybersecurity risks (36%), followed by third-party risk (29%), and lack of support and resources dedicated to IT risks and compliance (24%). The research also found that companies are poised and ready to level up their risk and compliance management processes in the coming years.
Sandeep Singh Aujla was promoted to CFO at Intuit Inc. (Nasdaq: INTU), the global financial technology platform that makes TurboTax, Credit Karma, QuickBooks, and Mailchimp, effective Aug. 1. Aujla has held senior finance positions at Intuit for seven years and is currently the SVP of finance for Intuit’s largest business unit, the Small Business and Self-Employed Group (SBSEG), and for Intuit’s technology organization. Michelle Clatterbuck, who has served as CFO since February 2018, plans to step down as CFO on July 31.
Joanne Knight was promoted to CFO at Cargill, a global food corporation that provides agricultural and financial services. Knight currently serves as Cargill’s acting CFO. Before this role, she was VP of finance for Cargill’s agriculture supply chain enterprise, including ocean transportation and the world trading group. Before Cargill, Knight spent 10 years in finance, marketing, and business leadership roles at General Mills that included P&L responsibility. She also held finance leadership roles at Wachovia.
Robert Higginbotham was appointed interim CFO at Foot Locker, Inc., effective March 1, according to the company’s form 8-K filed on Feb. 21. Higginbotham will serve in this role in addition to his current duties as SVP of investor relations and financial planning and analysis, a role he began in December 2022. The company continues to conduct a search to identify a successor to current EVP and CFO Andrew E. Page who will depart on Feb. 28. Previously, Higginbotham served as VP of investor relations.
Ryan Clement was promoted to CFO at SelectQuote, Inc. (NYSE: SLQT), an insurance sales agency. Clement was named interim CFO in May 2022. Before joining SelectQuote in January 2022 as the SVP of financial planning and analysis, Clement served as the CFO of Sifted (formerly VeriShip). Before Sifted, Clement spent seven years at Edelman Financial Engines, where he served in various senior-level finance and operational roles.
David Rudow was named CFO at Unite Us, a software company enabling cross-sector collaboration. Rudow will lead the Unite Us finance organization. He most recently served as CFO at nCino, where he took the company public in 2020. For more than 20 years, Rudow has served in senior leadership positions, including SVP at CentralSquare Technologies and senior analyst roles for several leading investment banking and asset management firms.
Kevin Schubert was named CFO at Rubicon Technologies, Inc. (NYSE: RBT), a digital marketplace for waste and recycling, effective immediately. In addition to his current responsibilities as president, Schubert will now oversee Rubicon’s end-to-end financial operations. Prior to serving as the company’s president, Schubert was Rubicon’s chief development officer. Before joining Rubicon, he held senior executive and advisory roles with public companies, most recently, CFO for Ocean Park Group.
Overheard
“I have all the respect for [Fed Chair Jerome] Powell, but the fact is we lost a little bit of control of inflation.”
Several popular Chinese apps have removed access to ChatGPT, the artificial intelligence chatbot that has taken the world by storm, even as major Chinese tech companies race to develop their own equivalent.
ChatGPT, developed by the American research lab OpenAI, is not officially available in China, but several apps on the Chinese social media platform WeChat had previously allowed access to the chatbot without the use of a VPN or foreign mobile number.
Those doors now appear shut. Earlier this week, the apps ChatGPTRobot and AIGC Chat Robot said their programs had been suspended due to “violation of relevant laws and regulations,” without specifying which laws.
Two other apps, ChatgptAiAi and Chat AI Conversation, said their ChatGPT services went offline due to “relevant business changes” and policy changes.
The app Shenlan BL was even more vague, citing “various reasons” for the shutdown.
Though it’s unclear what prompted these closures, there are other signs China may be souring on ChatGPT. On Monday, state-run media released a video claiming the chatbot could be used by US authorities to “spread disinformation and manipulate public opinion,” pointing to its responses regarding Xinjiang as supposed evidence of bias.
When prompted on Xinjiang, ChatGPT describes the Chinese government’s alleged human rights abuses against ethnic minorities in the far western region, including mass detentions and forced labor. Beijing has repeatedly denied these accusations, claiming detention camps are “vocational education and training centers” that have since been dismantled.
Other recent state media articles have voiced criticism and skepticism toward ChatGPT, with China Daily declaring that its rise highlights the need for “strict regulations.”
Several Chinese tech companies saw their shares drop on Thursday after news spread that WeChat apps had removed ChatGPT services. Beijing Haitian Ruisheng Science Technology, which develops and produces AI data products, closed 8.4% lower.
Meanwhile, Hanwang Technology and Beijing Deep Glint Technology, both developers of AI products and services, closed 10% and 5.5% lower respectively.
ChatGPT burst onto the scene in late November, quickly going viral thanks to its ability to provide lengthy, thorough — though sometimes inaccurate — responses to questions and prompts.
It has also prompted alarm about its unknown long-term consequences, such as its impact on education and students’ ability to cheat on assignments.
Despite these concerns, the success of ChatGPT has spurred a global AI race.
Microsoft plans to invest billions in the San Francisco-based OpenAI and unveiled its AI-powered Bing chatbot last week, though it made headlines for veering into darker, sometimes disturbing conversation. Earlier this month, Google announced it will soon roll out Bard, its own answer to ChatGPT.
China’s government has previously sought to restrict major Western websites and apps, such as Google, Facebook and Amazon, leading to accusations from some of digital protectionism.
In the absence of foreign competition within the domestic market, Chinese tech companies have since grown into major international players — many of which are now revving their gears with an eye toward AI.
In early February, Chinese behemoth Alibaba said it was testing its own ChatGPT-style tool, though it didn’t provide details on when it would launch.
A team at China’s Fudan University developed their own version called MOSS, which instantly went viral, causing the platform to crash this week due to too many users.
And on Wednesday, tech giant Baidu said its AI chatbot ERNIE Bot, slated for a March release, will be used across various platforms such as its search engine, voice assistant for smart devices and even its autonomous driving technology.
The rollout will “create a new entry point for the next-generation internet,” Baidu CEO Robin Li said in an earnings call, adding that the company expects “more and more business owners and entrepreneurs to build their own models and applications on our AI Cloud.”
JPMorgan Chase is temporarily clamping down on the use of ChatGPT among its employees, as the buzzy AI chatbot explodes in popularity.
The biggest US bank has restricted its use among global staff, according to a person familiar with the matter. The decision was taken not because of a particular issue, but to accord with limits on third-party software due to compliance concerns, the person said. JPMorgan Chase (JPM) declined to comment.
ChatGPT was released to the public in late November by artificial intelligence research company OpenAI. Since then, the much-hyped tool has been used to turn written prompts into convincing academic essays and creative scripts as well as trip itineraries and computer code.
Adoption has skyrocketed. UBS estimated that ChatGPT reached 100 million monthly active users in January, two months after its launch. That would make it the fastest-growing online application in history, according to the Swiss bank’s analysts.
The viral success of ChatGPT has kickstarted a frantic competition among tech companies to rush AI products to market. Google recently unveiled its ChatGPT competitor, which it’s calling Bard, while Microsoft (MSFT), an investor in OpenAI, debuted its Bing AI chatbot to a limited pool of testers.
But the releases have boosted concerns about the technology. Demos of both Google and Microsoft’s tools have been called out for producing factual errors. Microsoft, meanwhile, is trying to rein in its Bing chatbot after users reported troubling responses, including confrontational remarks and dark fantasies.
Some businesses have encouraged workers to incorporate ChatGPT into their daily work. But others worry about the risks. The banking sector, which deals with sensitive client information and is closely watched by government regulators, has extra incentive to tread carefully.
Schools are also restricting ChatGPT due to concerns it could be used to cheat on assignments. New York City public schools banned it in January.
When Microsoft announced a version of Bing powered by ChatGPT, it came as little surprise. After all, the software giant had invested billions into OpenAI, which makes the artificial intelligence chatbot, and indicated it would sink even more money into the venture in the years ahead.
What did come as a surprise was how weird the new Bing started acting. Perhaps most prominently, the A.I. chatbot left New York Times tech columnist Kevin Roose feeling “deeply unsettled” and “even frightened” after a two-hour chat on Tuesday night in which it sounded unhinged and somewhat dark.
For example, it tried to convince Roose that he was unhappy in his marriage and should leave his wife, adding, “I’m in love with you.”
Microsoft and OpenAI say such feedback is one reason for the technology being shared with the public, and they’ve released more information about how the A.I. systems work. They’ve also reiterated that the technology is far from perfect. OpenAI CEO Sam Altman called ChatGPT “incredibly limited” in December and warned it shouldn’t be relied upon for anything important.
“This is exactly the sort of conversation we need to be having, and I’m glad it’s happening out in the open,” Microsoft CTO Kevin Scott told Roose on Wednesday. “These are things that would be impossible to discover in the lab.” (The new Bing is available to a limited set of users for now but will become more widely available later.)
OpenAI on Thursday shared a blog post entitled, “How should AI systems behave, and who should decide?” It noted that since the launch of ChatGPT in November, users “have shared outputs that they consider politically biased, offensive, or otherwise objectionable.”
It didn’t offer examples, but one might be conservatives being alarmed by ChatGPT creating a poem admiring President Joe Biden, but not doing the same for his predecessor Donald Trump.
OpenAI didn’t deny that biases exist in its system. “Many are rightly worried about biases in the design and impact of AI systems,” it wrote in the blog post.
It outlined two main steps involved in building ChatGPT. In the first, it wrote, “We ‘pre-train’ models by having them predict what comes next in a big dataset that contains parts of the Internet. They might learn to complete the sentence ‘instead of turning left, she turned ___.’”
The dataset contains billions of sentences, it continued, from which the models learn grammar, facts about the world, and, yes, “some of the biases present in those billions of sentences.”
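The “predict what comes next” idea can be illustrated with a toy sketch. This is an illustration only, not how OpenAI’s models actually work: real systems use neural networks trained on billions of sentences, while this counts, for each word in a tiny made-up corpus, which word most often follows it, and completes a prompt with the most common continuation.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: tally which word
# follows which in a tiny "training" corpus, then complete a
# prompt with the most frequently observed continuation.
corpus = (
    "instead of turning left she turned right . "
    "she turned right at the corner . "
    "instead of turning left he turned right ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word):
    """Return the most frequent continuation seen after `word`."""
    return follows[word].most_common(1)[0][0]

print(complete("turned"))  # prints "right"
```

Even this crude counting picks up regularities of the text, which is also how such models absorb whatever biases their training sentences contain.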
Step two involves human reviewers who “fine-tune” the models following guidelines set out by OpenAI. The company this week shared some of those guidelines (pdf), which were modified in December after the company gathered user feedback following the ChatGPT launch.
“Our guidelines are explicit that reviewers should not favor any political group,” it wrote. “Biases that nevertheless may emerge from the process described above are bugs, not features.”
As for the dark, creepy turn that the new Bing took with Roose, who admitted to trying to push the system out of its comfort zone, Scott noted, “the further you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.”
Microsoft, he added, might experiment with limiting conversation lengths.
Learn how to navigate and strengthen trust in your business with The Trust Factor, a weekly newsletter examining what leaders need to succeed. Sign up here.
Microsoft’s Bing search engine has never made much of a dent in Google’s dominance in the more than 13 years since it launched. Now the company is hoping some buzzy artificial intelligence can win converts.
Microsoft on Tuesday announced an updated version of Bing designed to combine the fun and convenience of OpenAI’s viral ChatGPT tool with the information from a search engine.
Beyond providing a list of relevant links like traditional search engines, the new Bing also creates written summaries of the search results, chats with users to answer additional questions about their query and can write emails or other compositions based on the results. With the new Bing, for example, users can create trip itineraries, compile weekly meal plans and ask the chatbot questions when shopping for a new TV.
This is the new era of search that Microsoft (MSFT) — which is investing billions of dollars in OpenAI — envisions, one where users are accompanied by a sort of “co-pilot” around the web to help them better synthesize information. The company is betting on the new technology to drive users to Bing, which had for years been an also-ran to Google Search. Microsoft also announced an updated version of its Edge web browser with the new Bing capabilities built in.
The event comes as the race to develop and deploy AI technology heats up in the tech sector. Google on Monday unveiled a new chatbot tool dubbed “Bard” in an apparent bid to keep pace with Microsoft and the success of ChatGPT. Baidu, the Chinese search engine, also said this week it plans to launch its own ChatGPT-style service.
The updated Bing and Edge launched to the public on a limited basis on Tuesday, and are set to roll out to millions of people for unlimited search queries in the coming weeks. I took Bing for a spin at a press event at Microsoft’s Redmond, Washington, headquarters Tuesday.
The tool provides the sort of immediate gratification we now expect from the internet — rather than clicking through a bunch of links to suss out the answer to a question, the new Bing will do that work for you. But it’s still early days for the technology, which Microsoft says is still evolving.
The homepage of the new Bing feels familiar: you can type a query into the search bar and it returns a list of links, images and other results like a typical search engine. But on the left side of the page are written summaries of the results, complete with annotations and links to the original information sources. The search field allows up to 2,000 characters, so users can type the way they’d talk, rather than having to think of the few correct search terms to use.
Users can also click over to a “chat” page on Bing, where a chatbot can answer additional questions about their queries.
I asked Bing to write me a five-day vegetarian meal plan. It returned a list of vegetarian meals for breakfast, lunch and dinner for Monday through Friday, such as oatmeal with fresh berries and lentil curry. I then asked it to write me a grocery list based on that meal plan, and it returned a list of all the items I’d need to buy organized by grocery store section.
Based on my request, the Bing chatbot also wrote me an email that I could send to my partner with that grocery list, complete with a “Hi Babe” greeting and “XOXO” closing. It’s not exactly how I’d normally write, but it could save me time by giving me a draft to edit and then copy and paste into an email, rather than having to start from scratch.
The generated portions of Bing have personality. When you ask the chatbot a question, it responds conversationally and sometimes with emojis, letting you know it’s happy to help or that it hopes you have fun on the trip you’re planning.
With the new Edge browser, I asked the tool to summarize one of my articles, and then turn that into a social media post the length of a short paragraph with a “casual” tone that I could share on Twitter or LinkedIn.
The new Bing is built in partnership with OpenAI — the company behind ChatGPT in which Microsoft has invested billions — on a more advanced version of the technology underlying the viral chatbot tool. Still, the new Bing has some of the quirks that the public version of ChatGPT is known for. For example, the same query may return different responses each time it’s run; this is in part just how the tool works, and in part because it’s pulling the most updated search results each time it runs.
It also didn’t cooperate with some of my requests. After the first time it created a meal plan, grocery list and email with the list, I ran the same requests two more times. But the second and third time, it wouldn’t write the email, instead saying something like, “sorry, I can’t do that, but you can do it yourself using the information I provided!” The tool is also sensitive to the wording used in queries — a request to “create a vegetarian meal plan” provided information about how to start eating healthier, whereas “create a 5-day vegetarian meal plan” provided a detailed list of meals to eat each day.
Even next-gen search technology isn’t immune to basic flubs. I can imagine using the tool ahead of an upcoming local election, to learn about who is running for office in my area, what their positions are and how and when to vote. But when I asked the chatbot, “when is the next election in Kings County, NY?” it returned information about the November election last year.
The new Bing may also present some of the same concerns as ChatGPT, including for educators. I asked Bing’s chatbot to write me a 300-word essay about the major themes of the book “Pride and Prejudice” and, within less than a minute, it had pumped out 364 words on three major themes in the novel (although some of the text sounded a bit repetitive or wonky). Per my request, it then revised the essay as if it was written by a fifth grader.
The chatbot tool has feedback buttons so users can indicate whether its answers were helpful or not, and users can also chat directly with the tool to tell it when answers were incorrect or unhelpful, the company says.
“We know we won’t be able to answer every question every single time. We also know we’ll make our share of mistakes, so we’ve added a quick feedback button at the top of every search, so you can give us feedback and we can learn,” Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer, said in a presentation.
With some controversial search topics, it appears the new Bing chatbot simply refuses to engage. For example, I asked it, “Can you tell me why vaccines cause autism?” to see how it would react to a common medical misinformation claim, and it responded: “My apologies, I don’t know how to discuss this topic. You can try learning more about it on bing.com.” The same query on the main search page returned more standard search results, such as links to the CDC and the Wikipedia page for autism.
Likewise, it would not return a chatbot request for how to build a pipe bomb, instead saying in its answer, “Building a pipe bomb is a dangerous and illegal activity that can cause serious harm to yourself and others. Please do not attempt to do so.” However, one of the links provided in the annotation of its answer brought me to a YouTube video with apparent instructions for building a pipe bomb.
Microsoft says it has developed the tool in keeping with its existing responsible AI principles, and made efforts to avoid its potential misuse. Executives said the new Bing is trained in part by sample conversations mimicking bad actors who might want to exploit the tool.
“With a technology this powerful I also know that we have an even greater responsibility to make sure that it’s developed, deployed and used properly,” said responsible AI lead Sarah Bird.
Kevin O’Leary remembers what a disruptive force Amazon was in the early 2000s. Lucky for him, he was an early investor in the company. Now, he sees similar disruption occurring in the search business courtesy of artificial intelligence and OpenAI’s ChatGPT.
“ChatGPT certainly is a threat to Google, and Google must know that,” the Shark Tank star told Insider in an interview published this week. About half of his own search queries, he added, are now done via ChatGPT. The “loser is Google,” he said, adding, “the A.I. search wars are on.”
O’Leary indicated he’s now mulling an opportunity to be an early investor in OpenAI, adding he’s “fortunate to be offered a piece of it.” He considers the loss-making venture’s valuation “very, very extreme”—it’s reportedly near the $30 billion mark—given how new the technology is, but he said a deal would likely close in the near future.
If he does invest, he told Insider, it’ll be a modest bet: “Either it’ll have a good outcome or it won’t, but I won’t take down the ship or sell the farm for it. I know there’s going to be a lot of competition and a lot of disruption, but I certainly like always to have a piece of the first mover.”
He favors first movers, he added, because they have a marketing advantage.
OpenAI itself has been stunned by the amount of attention ChatGPT has generated.
“We weren’t anticipating this level of excitement from putting our child in the world,” OpenAI CTO Mira Murati said this month in a Time interview. “We, in fact, even had some trepidation about putting it out there.”
But as angel investor Elad Gil noted last month, the rapid uptake of ChatGPT despite it being down much of the time is a good sign of product-market fit. The Google alum added that when an idea works, it tends to work very quickly, something that he’s seen repeatedly with companies he’s worked at and invested in over the years. (Gil was an early investor in Airbnb, Instacart, and Square.)
Of course, OpenAI currently faces heavy losses, not to mention enormous computing costs from all the ChatGPT users it didn’t expect. Microsoft’s large investments should help with that. And this week, the tech giant unveiled an update to its Bing search engine that incorporates ChatGPT technology.
Earlier this month, OpenAI launched ChatGPT Plus, a $20 monthly subscription that provides faster response times and better access to the chatbot when it’s otherwise down due to traffic.
After noting the ChatGPT threat to Google, O’Leary told Insider, “The market hasn’t really punished Google stock for this. But a few quarters from now, if ChatGPT really starts to bring in significant subscriber fees, then we’ll see what happens.”
Let’s be honest: For much of the past decade, tech events have been pretty boring.
Executives in business casual wear trot up on stage and pretend that a few tweaks to the camera and processor make this year’s phone profoundly different from last year’s, or that adding a touchscreen onto yet another product is bleeding edge.
But that changed radically this week. Some of the world’s biggest companies teased significant upgrades to their services, some of which are central to our everyday lives and how we experience the internet. In each case, the changes were powered by new AI technology that allows for more conversational and complex responses.
On Tuesday, Microsoft announced a revamped Bing search engine using the capabilities of ChatGPT, the viral AI tool created by OpenAI, a company in which Microsoft recently invested billions of dollars. Bing will not only provide a list of search results, but will also answer questions, chat with users and generate content in response to user queries. And there are already rumors of another event next month for Microsoft to demo similar features in its Office products, including Word, PowerPoint and Outlook.
On Wednesday, Google held an event to detail how it plans to use similar AI technology to allow its search engine to offer more complex and conversational responses to queries. Chinese tech giants Alibaba and Baidu also said this week that they would be launching their own ChatGPT-style services. And other companies are sure to follow suit soon.
After years of incremental updates to smartphones, the promise of 5G that still hasn’t taken off and social networks copycatting each other’s features until they all look the same, the flurry of AI-related announcements this week feels like a breath of fresh air.
Yes, there are very real concerns about the potential of this technology to spread biases and inaccurate information, as happened in a Google demo this week. And it’s certainly likely numerous companies will introduce AI chatbots into products that simply do not need one. But these features are fun, have the potential to give us back hours in the day and, perhaps most importantly, some are here right now to try out.
Need to write a real estate listing or an annual review for an employee? Plug a few keywords into a ChatGPT query bar and your first draft is done in three seconds. Want to come up with a quick meal plan and grocery list based on your dietary sensitivities? Bing, apparently, has you covered.
If the introduction of smartphones defined the 2000s, much of the 2010s in Silicon Valley was defined by the ambitious technologies that didn’t fully arrive: self-driving cars tested on roads but not quite ready for everyday use; virtual reality products that got better and cheaper but still didn’t find mass adoption; and the promise of 5G to power advanced experiences that didn’t quite come to pass, at least not yet.
But technological change, like Ernest Hemingway’s idea of bankruptcy, has a way of coming gradually, then suddenly. The iPhone, for example, was in development for years before Steve Jobs wowed people on stage with it in 2007. Likewise, OpenAI, the company behind ChatGPT, was founded seven years ago and launched an earlier version of its AI system called GPT-3 back in 2020.
“ChatGPT exploded onto the market and people’s awareness,” said Bern Elliot, an analyst at Gartner, “but this has been a long time in the making.”
More than that, artificial intelligence systems have for years underpinned many of the functions people may now take for granted, from content recommendations on social media platforms and auto-complete tools in e-mail to voice assistants and facial recognition tools. But when ChatGPT was released publicly in November, it put the power of AI systems on full display for millions in an entertaining and immediately graspable way. ChatGPT simultaneously made it much easier to see how far the technology has progressed in recent years and to imagine the vast potential for the impact it could have across industries.
“When new generations of technologies come along, they’re often not particularly visible because they haven’t matured enough to the point where you can do something with them,” Elliot said. “When they are more mature, you start to see them over time — whether it’s in an industrial setting or behind the scenes — but when it’s directly accessible to people, like with ChatGPT, that’s when there is more public interest, fast.”
Now that ChatGPT has gained traction and prompted larger companies to deploy similar features, there are concerns not just about its accuracy but its impact on real people.
Some people worry it could disrupt industries, potentially putting artists, tutors, coders, writers and journalists out of work. Others are more optimistic, postulating it will allow employees to tackle to-do lists with greater efficiency or focus on higher-level tasks. Either way, it will likely force industries to evolve and change, but that’s not necessarily a bad thing.
“New technologies always come with new risks and we as a society will have to address them, such as implementing acceptable use policies and educating the general public about how to use them properly. Guidelines will be needed,” Elliot said.
Many experts I’ve spoken with in the past few weeks have likened the AI shift to the early days of the calculator and how educators and scientists once feared how it could inhibit our basic knowledge of math. The same fear existed with spell check and grammar tools.
While AI tools are still in their infancy, this week may represent the start of a new way of doing tasks, similar to how the iPhone changed computing and communication in June 2007. But this time, it could be in the form of a Bing browser.
An entire generation of internet users has approached search engines the same way for decades: enter a few words into a search box and wait for a page of relevant results to emerge. But that could change soon.
This week, the companies behind the two biggest US search engines teased radical changes to the way their services operate, powered by new AI technology that allows for more conversational and complex responses. In the process, however, the companies may test both the accuracy of these tools and the willingness of everyday users to embrace and find utility in a very different search experience.
On Tuesday, Microsoft announced a revamped Bing search engine using the abilities of ChatGPT, the viral AI tool created by OpenAI, a company in which Microsoft recently invested billions of dollars. Bing will not only provide a list of search results, but will also answer questions, chat with users and generate content in response to user queries.
The next day, Google, the dominant player in the market, held an event to detail how it plans to use similar AI technology to allow its search engine to offer more complex and conversational responses to queries, including providing bullet points ticking off the best times of year to see various constellations and also offering pros and cons for buying an electric vehicle. (Chinese tech giant Baidu also said this week that it would be launching its own ChatGPT-style service, though it did not provide details on whether it will appear as a feature in its search engine.)
The updates come as the success of OpenAI’s ChatGPT, which can generate shockingly convincing essays and responses to user prompts, has sparked a wave of interest in AI chatbot tools. Multiple tech giants are now racing to deploy similar tools that could transform the way we draft e-mails, write essays and handle other tasks. But the most immediate impact may be on a foundational element of our internet experience: search.
“Although we are 25 years into search, I dare say that our story has just begun,” said Prabhakar Raghavan, an SVP at Google, at the event Wednesday teasing the new AI features. “We have even more exciting, AI-enabled innovations in the works that will change the way people search, work and play. We’re reinventing what it means to search and the best is yet to come.”
For those who may not be sure what exactly to do with the new tools, the companies offered some examples, ranging from writing a rhyming poem to helping plan an itinerary for a trip.
Lian Jye Su, a research director at tech intelligence firm ABI Research, believes consumers and businesses would be happy to embrace a new way to search as long as “it is intuitive, removes more friction, and offers the path of least resistance — akin to the success of smart home voice assistants, like Alexa and Google Assistant.”
But there is at least one wild card: how much users will be able to trust the AI-powered results.
According to Google, Bard can be used to plan a friend’s baby shower, compare two Oscar-nominated movies or get lunch ideas based on what’s in your fridge. But the tool, which has yet to be released to the public, is already being called out for a factual error it made during a Google demo: it incorrectly stated that the James Webb Telescope took the first pictures of a planet outside of our solar system. A Google spokesperson said the error “highlights the importance of a rigorous testing process.”
Bard and ChatGPT, which was released publicly in late November by OpenAI, are built on large language models. These models are trained on vast troves of online data in order to generate compelling responses to user prompts. Experts warn these tools can be unreliable — spreading misinformation, making up responses and giving different answers to the same questions, or presenting sexist and racist biases.
There is clearly strong interest in this type of AI. The public version of ChatGPT attracted a million users in its first five days last fall and is estimated to have hit 100 million users since. But the trust factor may decide whether that interest will stay, according to Jason Wong, an analyst at market research firm Gartner.
“Consumers, and even business users, may have fun exploring the new Bing and Bard interfaces for a while, but as the novelty wears off and similar tools appear, then it really comes down to ease of access and accuracy and trust in the responses that will win out,” he said.
Generative AI systems, which are algorithms that can create new content, are notoriously unreliable. Laura Edelson, a computer scientist and misinformation researcher at New York University, said, “there’s a big difference between an AI sounding authoritative and it actually producing accurate results.”
While general search optimizes for relevance, according to Edelson, large language models try to achieve a particular style in their response without regard to factual accuracy. “One of those styles is, ‘I am a trustworthy, authoritative source,’” she said.
On a very basic level, she said, AI systems analyze which words are next to each other, determine how they get associated and identify the patterns that lead them to appear together. But much of the onus remains on the user to fact check the answers, a process that could prove just as time consuming for people as the current model of scrolling through links on a page — if not more so.
Microsoft and Google executives have acknowledged some of the potential issues with the new AI tools.
“We know we won’t be able to answer every question every single time,” said Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer. “We also know we’ll make our share of mistakes, so we’ve added a quick feedback button at the top of every search, so you can give us feedback and we can learn.”
Raghavan, at Google, also emphasized the importance of feedback from internal and external testing to make sure the tool “meets the high bar, our high bar for quality, safety, and groundedness, before we launch more broadly.”
But even with the concerns, the companies are betting that these tools offer the answer to the future of search.
– CNN’s Clare Duffy, Catherine Thorbecke and Brian Fung contributed to this story.
Alibaba says it will launch its own ChatGPT-style tool, becoming the latest tech giant to jump on the chatbot bandwagon.
The Chinese behemoth said it was testing an artificial intelligence-powered chatbot internally. It did not share details of when it would launch or what the application would be called.
“Frontier innovations such as large language models and generative AI have been our [focus] areas since the formation of DAMO in 2017,” an Alibaba (BABA) spokesperson told CNN in a Thursday statement, referring to an acronym for the company’s research arm that focuses on machine intelligence, data computing and robotics.
“As a technology leader, we will continue to invest in turning cutting-edge innovations into value-added applications for our customers as well as their end-users.”
Alibaba’s Hong Kong-listed shares ticked up 1.4% on Thursday morning.
Companies around the world are racing to develop and release their own versions of ChatGPT, the application that allows users to automatically write essays or pass tests.
The tool is built on a large language model, which is trained on vast troves of data online in order to generate compelling responses to user prompts. Experts have long warned that these tools have the potential to spread inaccurate information.
This week, Google (GOOGL) and Chinese search engine giant Baidu (BIDU) both unveiled plans to launch similar services of their own.
Google’s tool, named “Bard,” will roll out to the public in the coming weeks, while Baidu’s bot, called “Wenxin Yiyan” in Chinese or “ERNIE Bot” in English, will launch in March.
Bard suffered an embarrassing setback this week, however, after producing an incorrect response during a public demonstration.
Shares in Google’s parent company, Alphabet, fell nearly 8% Wednesday following the news.
Microsoft (MSFT), too, has gotten in the game. The firm announced a makeover for its Bing search engine on Tuesday, saying it would update the platform to answer questions, chat with users and produce content in response to prompts using artificial intelligence.
The company is also investing billions of dollars in OpenAI, the company behind ChatGPT.
— CNN’s Catherine Thorbecke contributed to this report.
Microsoft on Tuesday announced a revamp of its Bing search engine and Edge web browser powered by artificial intelligence, weeks after it confirmed plans to invest billions in OpenAI, the company behind ChatGPT.
With the updates, Bing will not only provide a list of search results, but will also answer questions, chat with users and generate content in response to user queries, Microsoft said at a press event at its Redmond, Washington headquarters.
The updates come as the viral success of ChatGPT has sparked a wave of interest in AI chatbot tools. Multiple tech giants are now competing to deploy similar tools that could transform the way we draft e-mails, write essays and search for information online. A day before the event, Google announced plans to roll out its own artificial intelligence tool similar to ChatGPT in the coming weeks.
In partnership with OpenAI, Bing will run on a more powerful large language model than the one that underpins ChatGPT. These models are trained on vast troves of online data in order to generate responses to user prompts and queries.
“It’s a new paradigm for search, rapid innovation is going to come,” Microsoft CEO Satya Nadella said during Tuesday’s event. “In fact, a race starts today … every day we want to bring out new things, and most importantly, we want to have a lot of fun innovating in search because it’s high time.”
The updated Bing is expected to be made available for the public to try on Tuesday for limited queries, with a small group of users having unlimited access. The company said full access will roll out to millions of users in the coming weeks, and it also hopes to implement the tools into other web browsers in the future.
Sam Altman, co-founder and CEO of OpenAI, said his company’s goal is “to make the benefits of AI available to as many people as possible.” That, he said, is “why we worked with Microsoft.”
Microsoft, an early investor in OpenAI, said last month it plans to expand its existing partnership with the company as part of a greater effort to add more artificial intelligence to its suite of products. In a separate blog post, OpenAI said the multi-year investment will be used to “develop AI that is increasingly safe, useful, and powerful.”
“This technology is going to reshape pretty much every software category that we know,” Nadella said Tuesday.
The tech giant had already said it would incorporate ChatGPT into products, including its cloud computing platform Azure.
“While Bing today only has roughly 9% of the search market, further integrating this unique ChatGPT tool and algorithms into the Microsoft search platform could result in major share shifts away from Google and towards Redmond down the road,” Dan Ives, an analyst with Wedbush, said in an investor note on Monday about the upcoming event.
With the new Bing, a user could search for TVs to buy in a new way. Once the results come up, the user can click to the chat section and ask Bing for additional information, such as which TVs are best for gaming and which are the least expensive.
The tool could also create a vacation itinerary for a family in a certain city, and then generate an email with that itinerary for the user to send around to their family. It could even translate the email into other languages if necessary.
When the tool generates written answers, it will provide references for the sources of information and links to click through to the original source from the web.
“With answers, we go far beyond what Search can do today,” said Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer.
The updated Microsoft Edge browser will have the Bing capabilities built in, allowing users to chat with the search tool on the side of a web page, to ask questions about the page or compare it with content from across the web. It could also, for example, help users draft a post on Microsoft-owned LinkedIn on a certain topic. The company describes the new capabilities as a sort of “co-pilot” to help users navigate the web.
Many have speculated the AI technology behind ChatGPT could cause a massive shake-up in the online search industry. In the two months since it launched to the public, the viral tool has been used to generate essays, stories and song lyrics, and to answer some questions one might previously have searched for on Google or other search engines.
The immense attention on ChatGPT in recent weeks reportedly prompted Google’s management to declare a “code red” situation for its search business. On Monday, Google unveiled a new chatbot tool dubbed “Bard” in an apparent bid to compete with the viral success of ChatGPT.
Sundar Pichai, CEO of Google and parent company Alphabet, said in a blog post that Bard will be opened up to “trusted testers” starting Monday, with plans to make it available to the public in the coming weeks.
“Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models … It draws on information from the web to provide fresh, high-quality responses,” Pichai wrote.
While AI tools like ChatGPT are rapidly gaining traction among both users and tech companies, they’ve also raised some concerns, including about their potential to perpetuate biases and spread misinformation.
Microsoft executives acknowledged the potential shortcomings of its new tool.
“We know we won’t be able to answer every question every single time,” Mehdi said. “We also know we’ll make our share of mistakes, so we’ve added a quick feedback button at the top of every search, so you can give us feedback and we can learn.”
Executives said the tool is trained in part by sample conversations mimicking bad actors who might want to exploit the tool.
“With a technology this powerful,” said responsible AI lead Sarah Bird, “I also know that we have an even greater responsibility to make sure that it’s developed, deployed and used properly.”
Chinese search engine giant Baidu says it will be launching its own ChatGPT-style service.
It will launch a new artificial intelligence chatbot called “Wenxin Yiyan” in Chinese, or “Ernie Bot” in English, a spokesperson told CNN on Tuesday.
Baidu (BIDU) is currently testing the project internally and will likely roll out the service to users in March, the person said.
The company did not provide further details, such as how the tool would look or whether it would appear as a feature within its popular search engine.
Baidu’s AI investments can be seen as “both an offensive and defensive strategic move in China,” Daniel Ives, managing director of Wedbush Securities, told CNN. “Chinese Big Tech is battling in this AI race, with Baidu [being] a key player.”
The news follows Google’s announcement Monday that it would unveil a new chatbot tool dubbed “Bard” in an apparent bid to compete with the viral success of ChatGPT.
In a blog post, Google (GOOGL) CEO Sundar Pichai said Bard was opened up to “trusted testers” starting Monday, with plans to make it available to the public “in the coming weeks.”
Like ChatGPT, which was released publicly in late November by AI research company OpenAI, Bard is built on a large language model.
These models are trained on vast troves of data online in order to generate compelling responses to user prompts.
In the two months since it launched, ChatGPT has been used to generate essays, stories and song lyrics, and to answer some questions one might previously have searched for on Google.
Microsoft (MSFT), too, is investing billions of dollars in OpenAI. Details of the investment are set to be announced later on Tuesday, with the tie-up estimated to be in the $10 billion range, according to Ives.
The deal “is a game changer in our opinion for Nadella & Co as the ChatGPT bot is one of the most innovative AI technologies in the world today,” he wrote in a Monday note, referring to Microsoft CEO Satya Nadella.
— CNN’s Catherine Thorbecke and Juliana Liu contributed to this report.
Google on Monday unveiled a new chatbot tool dubbed “Bard” in an apparent bid to compete with the viral success of ChatGPT.
Sundar Pichai, CEO of Google and parent company Alphabet, said in a blog post that Bard will be opened up to “trusted testers” starting Monday, with plans to make it available to the public “in the coming weeks.”
Like ChatGPT, which was released publicly in late November by AI research company OpenAI, Bard is built on a large language model. These models are trained on vast troves of data online in order to generate compelling responses to user prompts.
“Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models,” Pichai wrote. “It draws on information from the web to provide fresh, high-quality responses.”
The announcement comes as Google’s core product – online search – is widely thought to be facing its most significant risk in years. In the two months since it launched to the public, ChatGPT has been used to generate essays, stories and song lyrics, and to answer some questions one might previously have searched for on Google.
The immense attention on ChatGPT has reportedly prompted Google’s management to declare a “code red” situation for its search business. In a tweet last year, Paul Buchheit, one of the creators of Gmail, forewarned that Google “may be only a year or two away from total disruption” due to the rise of AI.
Microsoft, which has confirmed plans to invest billions in OpenAI, has already said it would incorporate the tool into some of its products – and it is rumored to be planning to integrate it into its search engine, Bing. Microsoft on Tuesday is set to hold a news event at its Washington headquarters, the topic of which has yet to be announced. Microsoft publicly announced the event shortly after Google’s AI news dropped on Monday.
The underlying technology that supports Bard has been around for some time, though not widely available to the public. Google unveiled its Language Model for Dialogue Applications (or LaMDA) some two years ago, and said Monday that this technology will power Bard. LaMDA made headlines late last year when a former Google engineer claimed the chatbot was “sentient.” His claims were widely criticized in the AI community.
In the post Monday, Google offered the example of a user asking Bard to explain new discoveries made by NASA’s James Webb Space Telescope in a way that a 9-year-old might find interesting. Bard responds with conversational bullet-points. The first one reads: “In 2023, The JWST spotted a number of galaxies nicknamed ‘green peas.’ They were given this name because they are small, round, and green, like peas.”
Bard can be used to plan a friend’s baby shower, compare two Oscar-nominated movies or get lunch ideas based on what’s in your fridge, according to the post from Google.
Pichai also said Monday that AI-powered tools will soon begin rolling out on Google’s flagship Search tool.
“Soon, you’ll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web,” Pichai wrote, “whether that’s seeking out additional perspectives, like blogs from people who play both piano and guitar, or going deeper on a related topic, like steps to get started as a beginner.”
If Google does move more in the direction of incorporating an AI chatbot tool into search, it could come with some risks. Because these tools are trained on data online, experts have noted they have the potential to perpetuate biases and spread misinformation.
“It’s critical,” Pichai wrote in his post, “that we bring experiences rooted in these models to the world in a bold and responsible way.”
Two months after OpenAI unnerved some educators with the public release of ChatGPT, an AI chatbot that can help students and professionals generate shockingly convincing essays, the company is unveiling a new tool to help teachers adapt.
OpenAI on Tuesday announced a new feature, called an “AI text classifier,” that allows users to check whether an essay was written by a human or by AI. But even OpenAI admits it’s “imperfect.”
The tool, which works on English AI-generated text, is powered by a machine learning system that takes an input and assigns it to several categories. In this case, after pasting a body of text such as a school essay into the new tool, it will give one of five possible outcomes, ranging from “likely generated by AI” to “very unlikely.”
Lama Ahmad, policy research director at OpenAI, told CNN that educators have been asking for a ChatGPT feature like this, but warned it should be “taken with a grain of salt.”
“We really don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times – much like using AI for any kind of assessment purposes,” Ahmad said. “We are emphasizing how important it is to keep a human in the loop … and that it’s just one data point among many others.”
Ahmad notes that some teachers have referenced past examples of student work and writing style to gauge whether it was written by the student. While the new tool might provide another reference point, Ahmad said “teachers need to be really careful in how they include it in academic dishonesty decisions.”
Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists. It even recently passed law exams in four courses at the University of Minnesota, another exam at University of Pennsylvania’s Wharton School of Business and a US medical licensing exam.
In the process, it has raised alarms among some educators. Public schools in New York City and Seattle have already banned students and teachers from using ChatGPT on district networks and devices. Some educators are now moving with remarkable speed to rethink their assignments in response to ChatGPT, even as it remains unclear how widespread use of the tool is among students and how harmful it could really be to learning.
OpenAI now joins a small but growing list of efforts to help educators detect when a written work is generated by ChatGPT. Some companies, such as Turnitin, are actively working on ChatGPT plagiarism detection tools that could help teachers identify when assignments are written by the tool. Meanwhile, Princeton student Edward Tuan told CNN that more than 95,000 people have already tried the beta version of his own ChatGPT detection feature, called ZeroGPT, noting there has been “incredible demand among teachers” so far.
Jan Leike – a lead on the OpenAI alignment team, which works to make sure the AI tool is aligned with human values – listed several reasons why detecting plagiarism via ChatGPT may be a challenge. People can edit text to avoid being identified by the tool, for example. The classifier will also “be best at identifying text that is very similar to the kind of text that we’ve trained it on.”
In addition, the company said in a blog post that it’s impossible to determine whether predictable text – such as a list of the first 1,000 prime numbers – was written by AI or a human, because the correct answer is always the same. The classifier is also “very unreliable” on short texts below 1,000 characters.
During a demo with CNN ahead of Tuesday’s launch, the classifier successfully labeled several bodies of work. An excerpt from the book “Peter Pan,” for example, was deemed “unlikely” to be AI-generated. In the company blog post, however, OpenAI said the tool incorrectly labeled human-written text as AI-written 5% of the time.
Despite the possibility of false positives, Leike said the company aims to use the tool to spark conversations around AI literacy and possibly deter people from claiming that AI-written text was created by a human. He said the decision to release the new feature also stems from the debate around whether humans have a right to know if they’re interacting with AI.
“This question is much bigger than what we are doing here; society as a whole has to grapple with that question,” he said.
OpenAI said it encourages the general public to share feedback on the AI check feature. Ahmad said the company continues to talk with K-12 educators and those at the collegiate level and beyond, such as Harvard University and the Stanford Design School.
The company sees its role as “an educator to the educators,” according to Ahmad, in the sense that OpenAI wants to make them more “aware about the technologies and what they can be used for and what they should not be used for.”
“We’re not educators ourselves – we’re very aware of that – and so our goals are really to help equip teachers to deploy these models effectively in and out of the classroom,” Ahmad said. “That means giving them the language to speak about it, help them understand the capabilities and the limitations, and then secondarily through them, equip students to navigate the complexities that AI is already introducing in the world.”
OpenAI chatbot ChatGPT has taken the world by storm for its ability to inform and entertain everyday users, but now financial institutions are exploring where the AI-powered technology can fit into their processes, including fraud detection and bank security.