Social robot roommate Jibo initially caused a stir, but sadly didn’t live long.
Photograph: Jibo
Not that there hasn’t been an array of other attempts. Jibo, a social robot roommate that used AI and endearing gestures to bond with its owners, had its plug unceremoniously pulled just a few years after being put out into the world. Meanwhile, another US-grown offering, Moxie, an AI-empowered robot aimed at helping with child development, is still active.
It’s hard not to look at devices like this and shudder at the possibilities. There’s something inherently disturbing about tech that plays at being human, and that uncanny deception can rub people the wrong way. After all, our science fiction is replete with AI beings, many of them tales of artificial intelligence gone horribly wrong. The easy, and admittedly lazy, comparison to something like the Hyodol is M3GAN, the 2023 film about an AI-enabled companion doll that goes full murderbot.
But aside from offputting dolls, social robots come in many forms. They’re assistants, pets, retail workers, and often socially inept weirdos that just kind of hover awkwardly in public. But they’re also sometimes weapons, spies, and cops. It’s with good reason that people are suspicious of these automatons, whether they come in a fluffy package or not.
Wendy Moyle is a professor at the School of Nursing & Midwifery at Griffith University in Australia who works with patients experiencing dementia. She says her work with social robots has angered people, who sometimes see giving robot dolls to older adults as infantilizing.
“When I first started using robots, I had a lot of negative feedback, even from staff,” Moyle says. “I would present at conferences and have people throw things at me because they felt that this was inhuman.”
However, the atmosphere around assistive robots has grown less hostile in recent years as positive uses have multiplied. Robotic companions are bringing joy to people with dementia. During the Covid pandemic, caretakers used robotic companions like Paro, a small robot meant to look like a baby harp seal, to help ease loneliness in older adults. Hyodol’s smiling dolls, whether you see them as sickly or sweet, are meant to evoke a similar friendly response.
Altman was named to OpenAI’s board on Friday alongside three veteran business executives, all women, OpenAI announced in a blog post. Sue Desmond-Hellmann, former CEO of the Bill & Melinda Gates Foundation; Nicole Seligman, a former Sony general counsel; and Fidji Simo, the CEO and chair of grocery delivery company Instacart and a former Facebook executive, are the others joining the board.
OpenAI’s announcement coincided with the release of results from an internal investigation commissioned by three existing board members and carried out by the law firm WilmerHale. It found that “a breakdown in trust” precipitated Altman’s removal by the prior board and that his earlier conduct “did not mandate removal,” according to a summary published by OpenAI.
On a press call Friday, Altman attempted to draw a line under OpenAI’s drama, saying, “I’m pleased this whole thing is over.” He added that “it’s been disheartening to see some people with an agenda trying to use leaks in the press to hurt the company, hurt the mission.”
While the investigation cleared Altman to reclaim his board seat, he said he “did learn a lot from this experience,” expressing remorse for one incident in particular involving a board member he did not name.
That appeared to be a reference to former OpenAI director Helen Toner, a researcher at the Center for Security and Emerging Technology, a Georgetown think tank. After she published a research analysis that criticized the speed of OpenAI’s product launch decisions, Altman reportedly tried to remove her from the board. “I think I could have handled that situation with more grace and care—I apologize for that,” he said.
Clean-Up
OpenAI has been looking to expand the board for months, after announcing an interim board following the November chaos. That interim board was formed as part of a deal with some of the directors who had pushed Altman out, alleging he had endangered the company’s mission to develop superhuman AI for the benefit of all. Three of those directors agreed to step down after more than 95 percent of OpenAI employees threatened to quit if he wasn’t brought back.
The company’s governance has drawn public scrutiny because of its development of ChatGPT, Dall-E, and other services that have kicked off a boom in generative AI technologies over the past couple of years.
Altman had been suddenly fired by four members of the board of OpenAI’s nonprofit entity, which in an unusual structure in tech oversees a for-profit arm working on AI development. They expressed concerns about his communications with the board not being consistently candid as part of their justification for the move.
Elon Musk last week sued two of his OpenAI cofounders, Sam Altman and Greg Brockman, accusing them of “flagrant breaches” of the trio’s original agreement that the company would develop artificial intelligence openly and without chasing profits. Late on Tuesday, OpenAI released partially redacted emails between Musk, Altman, Brockman, and others that provide a counternarrative.
The emails suggest that Musk was open to OpenAI becoming more profit-focused relatively early on, potentially undermining his own claim that it deviated from its original mission. In one message Musk offers to fold OpenAI into his electric-car company Tesla to provide more resources, an idea originally suggested in an email he forwarded from an unnamed outside party.
The newly published emails also imply that Musk was not dogmatic about OpenAI having to freely provide its developments to all. In response to a message from chief scientist Ilya Sutskever warning that open sourcing powerful AI breakthroughs could become risky as the technology advances, Musk writes, “Yup.” That seems to contradict the arguments in last week’s lawsuit that it was agreed from the start that OpenAI should make its innovations freely available.
Putting the legal dispute aside, the emails released by OpenAI show a powerful cadre of tech entrepreneurs founding an organization that has grown to immense power. Strikingly, although OpenAI likes to describe its mission as focused on creating artificial general intelligence—machines smarter than humans—its founders spend more time discussing fears about the rising power of Google and other deep-pocketed giants than expressing excitement about AGI.
“I think we should say that we are starting with a $1B funding commitment. This is real. I will cover whatever anyone else doesn’t provide,” Musk wrote in a missive discussing how to introduce OpenAI to the world. He dismissed a suggestion to launch by announcing $100 million in funding, citing the huge resources of Google and Facebook.
Musk cofounded OpenAI with Altman, Brockman, and others in 2015, during another period of heady AI hype centered around Google. A month before the nonprofit was incorporated, Google’s AI program AlphaGo had learned to play the devilishly tricky board game Go well enough to defeat a champion human player for the first time. The feat shocked many AI experts who had thought Go too subtle for computers to master anytime soon. It also showed the potential for AI to master many seemingly impossible tasks.
The text of Musk’s lawsuit confirms some previously reported details of the OpenAI backstory at this time, including the fact that Musk was first made aware of the possible dangers posed by AI during a 2012 meeting with Demis Hassabis, cofounder and CEO of DeepMind, the company that developed AlphaGo and was acquired by Google in 2014. The lawsuit also confirms that Musk disagreed deeply with Google cofounder Larry Page over the future risks of AI, something that apparently led to the pair falling out as friends. Musk eventually parted ways with OpenAI in 2018 and has apparently soured further on the project since the wild success of ChatGPT.
Since OpenAI released the emails with Musk this week, speculation has swirled about the names and other details redacted from the messages. Some turned to AI as a way to fill in the blanks with statistically plausible text.
“This needs billions per year immediately or forget it,” Musk wrote in one email about the OpenAI project. “Unfortunately, humanity’s future is in the hands of [redacted],” he added, perhaps a reference to Google cofounder Page.
Elsewhere in the email chain, the AI software—like some commentators on Twitter—guessed that Hassabis was the source of the arguments Musk forwarded about Google’s powerful advantage in AI.
Whoever it was, the relationships on display in the emails between OpenAI’s cofounders have since become fractured. Musk’s lawsuit seeks to force the company to stop licensing technology to its primary backer, Microsoft. In a blog post accompanying the emails released this week, OpenAI’s other cofounders expressed sorrow at how things had soured.
“We’re sad that it’s come to this with someone whom we’ve deeply admired,” they wrote. “Someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him.”
(NewsNation) — University junior Marley Stevens faced a startling setback when a paper she worked on received a zero grade, plunging her into academic probation and jeopardizing her scholarship. The twist? She had used Grammarly, a popular writing plugin recommended by her university to refine her work.
Stevens, recounting her ordeal, expressed initial disbelief upon receiving the email notifying her of the zero grade. “I thought he had sent the email to the wrong person because I worked super hard on my paper,” she said in a Sunday interview on “NewsNation Prime.”
She didn’t expect that three months later, she would still be entangled in the aftermath, with her scholarship hanging by a thread. Grammarly says 30 million people use this tool to catch spelling errors, typos and grammar issues.
Grammarly also uses generative AI, and an AI-detection service flagged Stevens’ assignment to her teacher as “unintentionally cheating.”
“I’m on probation until Feb. 16 of next year. And this started when he sent me the email. It was October. I didn’t think that now in March of 2024 that this would still be a big thing that was going on,” Stevens said.
Despite Grammarly being recommended on the University of North Georgia’s website, Stevens found herself embroiled in a battle to clear her name. The tool, briefly removed from the school’s website, later resurfaced, adding to the confusion surrounding acceptable use of software that itself relies on generative AI.
“I have a teacher this semester who told me in an email like, ‘Yes, use Grammarly. It’s a great tool.’ And they advertise it,” Stevens said.
Grammarly’s Jenny Maxwell clarified the company’s stance, emphasizing its role as a partner in enhancing writing experiences while ensuring responsible usage. “Our AI engine inside of it helps people create better writing experiences that are grammatically correct, [with] fewer spelling issues,” she explained.
Maxwell defended the tool’s integrity, highlighting its 15-year history of aiding students and professionals in crafting grammatically correct content. “We’ve recently added a generative engine within Grammarly,” Maxwell explained, emphasizing responsible usage and transparency in citing its assistance.
Despite Stevens’ appeal and subsequent GoFundMe campaign to rectify the situation, her options seem limited. The university’s stance, citing the absence of suspension or expulsion, has left her in a bureaucratic bind.
Maxwell, on behalf of Grammarly, extended support, including a $4,000 donation.
Reflecting on the broader implications, Maxwell urged institutions to adapt their assessment methods in light of evolving technologies like AI.
“Education is wrestling right now with how they need to evolve the way that they assess writing,” she remarked.
NewsNation reached out to the university for comment and hasn’t heard back.
If all that is true—and there’s no way to tell right now—Groq might well pose a threat to the dominance of Nvidia. Ross is careful when discussing this. “Let’s be clear—they’re Goliath, and we’re David,” he says. “It would be very, very foolish to say that Nvidia is worried about us.” When asked about Groq, though, Nvidia’s prompt response indicates that the startup is indeed on its radar. With near-Groq-like speed, the Goliath’s PR team sent me a statement indicating that Nvidia’s AI advantage is not only in its chips but also in the other services it provides to customers, like AI software, memory, networking, and other goodies. “AI compute in the data center is a complex challenge that requires a full-stack solution,” it says, implying that its unnamed competitor might be stack-challenged.
In any case, Ross says he’s not competing with Nvidia but offering an alternative experience—and not just in terms of speed. He’s on a mission to make sure that Groq will deliver fair results unsullied by political point of view or pressure from commercial interests. “Groq will never be involved in advertising, ever,” he says. “Because that’s influencing people. AI should always be neutral, it should never tell you what you should be thinking. Groq exists to make sure everyone has access. It’s helping you make your decision, not its decisions.” Great sentiments, but even the Groq chatbot, when I quizzed it about early-stage idealism, was skeptical about such claims. “The pressure to generate profits and scale can lead even well-intentioned founders to compromise on their ideals,” it promptly replied.
One other thing. You may have heard that Elon Musk has given the name “Grok” to the LLM created by his AI company. This took Ross by surprise, since he says he trademarked the name of his company when he founded it in 2016, and he believes it covers the phonetically identical original term. “We called dibs,” he says. “He can’t have it. We’ve sent a cease-and-desist letter.” So far he hasn’t gotten a response from Musk.
When I asked Groq about the name dispute, it first cautioned me that it doesn’t provide legal opinions. “However, I can provide some context that may help you understand the situation better,” it said. The bot explained that the term grok has been used in the industry for decades, so Musk would be within his rights to use it. On the other hand, if Groq trademarked the term, it might well have an exclusive claim. All accurate and on the mark—everything you’d expect from a modern LLM. What you would not expect was that the reply appeared in less than a second.
Time Travel
In my book on Google, In the Plex, I explained how the company, and its cofounder Larry Page, prioritized speed and recognized that faster products are used not only more often, but differently. It became an obsession within Google.
Engineers working for Page learned quickly enough of [his speed] priority. “When people do demos and they’re slow, I’m known to count sometimes,” he says. “One one-thousand, two one-thousand. That tends to get people’s attention.” Actually, if your product could be measured in seconds, you’d already failed. Paul Buchheit remembers one time when he was doing an early Gmail demo in Larry’s office. Page made a face and told him it was way too slow. Buchheit objected, but Page reiterated his complaint, charging that the reload took at least 600 milliseconds. (That’s six-tenths of a second.) Buchheit thought, You can’t know that, but when he got back to his own office he checked the server logs. Six hundred milliseconds. “He nailed it,” says Buchheit.
OpenAI has seen the future, and it involves imbuing a humanoid robot with the silicon spark of artificial intelligence.
The developer of ChatGPT, the generative AI technology that has set the world abuzz, said it’s investing in Figure, a California company that makes robots for the workplace. OpenAI and Figure will also join forces to develop AI technology aimed at helping robots “process and reason from language,” the companies announced in a news release.
Launched in 2022, Figure is building what it calls “general purpose humanoids” that can labor alongside people, as well as do dangerous or unpleasant work. “There are over 10 million unsafe or undesirable jobs in the U.S. alone, and an aging population will only make it increasingly difficult for companies to scale their workforces,” the company says on its website.
Other blue-chip players joining OpenAI to invest in Figure include Jeff Bezos (through Bezos Expeditions, his family office), Intel, Microsoft and Nvidia, with the $675 million venture round announced Thursday valuing the robotics company at $2.6 billion.
Figure is one of a growing number of companies developing robots to work in warehouses, factories and other industrial settings. The company announced in January that it had signed a deal with BMW to deploy robots in the German automaker’s plants, including its facility in Spartanburg, South Carolina.
“Our vision at Figure is to bring humanoid robots into commercial operations as soon as possible,” said Brett Adcock, the startup’s founder and CEO, in a statement on its latest funding round. “This investment, combined with our partnership with OpenAI and Microsoft, ensures that we are well-prepared to bring embodied AI into the world to make a transformative impact on humanity.”
Legal claims are starting to pile up against Microsoft and OpenAI, as three more news sites have sued the firms over copyright infringement, The Verge reported. The Intercept, Raw Story and AlterNet filed separate lawsuits accusing ChatGPT of reproducing news content “verbatim or nearly verbatim” while stripping out important attribution like the author’s name.
The sites, all represented by the same law firm, said that if ChatGPT trained on copyright material, it “would have learned to communicate that information when providing responses.” Raw Story and AlterNet added that OpenAI and Microsoft must have known that the chatbot would be less popular and generate lower revenue if “users believed that ChatGPT responses violated third-party copyrights.”
The news organizations note in the lawsuit that OpenAI offers an opt-out system for website owners, meaning that the company must be aware of potential copyright infringement. Microsoft and OpenAI have also said that they’ll defend customers against legal claims around copyright infringement that might arise from using their products, and even pay for incurred costs.
Late last year, The New York Times sued OpenAI and Microsoft for copyright infringement, saying it “seeks to hold them responsible for the billions of dollars in statutory and actual damages.” OpenAI asked a court to dismiss that claim, saying the NYT took advantage of a ChatGPT bug that made it recite articles word for word.
The companies also face lawsuits from multiple non-fiction authors accusing them of “massive and deliberate theft of copyrighted works,” and from comedian Sarah Silverman over similar claims.
The State of Texas Assessments of Academic Readiness, known as STAAR, are a series of state-mandated standardized tests used in Texas schools to assess a student’s achievements and knowledge.
Diane Smith
Star-Telegram
Having just adapted to a newly reformatted state test, school leaders across Texas are now looking at a new change in how their students are assessed: computer-based scoring.
The Texas Education Agency rolled out the new “automated scoring engine,” a computer-based grading system, in December, the Dallas Morning News reported. Following the change, about three-quarters of all essay questions will be scored by a computer program rather than human scorers.
School district leaders in the Fort Worth area say it’s too soon for them to tell whether the new grading system is a cause for concern. But some say they need more information about the new system.
“I think anytime a computer program is going to take on grading of something of this magnitude, I think it is concerning,” said Jennifer Price, chief academic officer for the Keller Independent School District.
Automated scoring comes amid STAAR reformat
The new scoring engine comes amid broader changes to the state test. Last year, the Texas Education Agency rolled out a newly revamped STAAR exam that includes more writing prompts and fewer multiple choice questions than previous versions. State education officials say the new test is designed to more closely mirror instruction students get in the classroom.
But open-ended responses like essays also take longer to score than multiple choice questions. TEA officials said using computer-based scoring in combination with human scorers allows the agency to score tests and get results back to districts more quickly and cheaply.
Chris Rozunik, director of the agency’s student assessment division, said the computer program scores exams based on the same rubric that human graders use. The agency is also using human-scored sample papers to train the engine on what to look for in students’ responses, she said.
Rozunik said the new engine isn’t an AI system with broad capabilities like ChatGPT, but rather a computer-based scoring system with narrow parameters. She noted the agency has used machine scoring for closed-ended questions like multiple choice prompts for years.
The agency is committed to having human scorers evaluate 25% of all essays, she said. The essays graded by humans include those the computer program can’t make sense of, as well as a certain number the agency randomly assigns to human scorers, she said.
The reasons the computer program might kick an essay to human graders are varied, Rozunik said. If a student enters a series of random letters instead of an answer, the computer won’t understand how to evaluate it. But real answers, even good ones, can also baffle a computer program. If a student answers a question in a language other than English, the essay will end up being referred to a human, she said. Likewise, if a student gives an answer that is thoughtful and creative, but doesn’t come in a form the computer recognizes, their answer will go to a human, who will be better able to score it appropriately, she said.
“We do not penalize kids for unique thinking,” she said.
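The routing rules Rozunik describes can be illustrated with a simple filter. Everything in this Python sketch is an invented stand-in: the vowel-ratio gibberish check, the non-ASCII language check, and the thresholds are not the TEA’s actual tests, only a toy showing how essays might be split between machine and human scorers while honoring the 25% human commitment:

```python
import random
import string

HUMAN_REVIEW_RATE = 0.25  # the agency's stated commitment: humans score 25%

def route_essay(essay: str) -> str:
    """Route an essay to a 'human' or 'machine' scorer, loosely following
    the rules described: gibberish, non-English text, and a random audit
    sample all go to human scorers. Every heuristic here is an
    illustrative stand-in, not the agency's actual check."""
    text = essay.strip()
    letters = [c for c in text.lower() if c in string.ascii_lowercase]
    # Gibberish stand-in: real prose has a healthy share of vowels,
    # so a run of random keystrokes usually fails this test.
    if not letters or sum(c in "aeiou" for c in letters) / len(letters) < 0.1:
        return "human"
    # Non-English stand-in: mostly non-ASCII text is routed to a person.
    if sum(ord(c) > 127 for c in text) / len(text) > 0.3:
        return "human"
    # Random audit sample that keeps humans in the loop.
    if random.random() < HUMAN_REVIEW_RATE:
        return "human"
    return "machine"
```

Note that the fall-through cases, not the explicit checks, are where a real engine would have to handle the hard problem Rozunik mentions: a thoughtful but unconventional answer that the model simply doesn’t recognize.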
The agency is already facing a lawsuit brought by several school districts, including the Fort Worth and Crowley independent school districts, over the state’s A-F accountability system, which is primarily based on STAAR scores. Last October, a state district judge temporarily blocked the agency from releasing that year’s A-F scores.
Fort Worth school officials want more clarity on scoring change
Price, the Keller ISD administrator, said she’s worried about what guardrails are in place for the new automated system. State education officials say the exam is no longer a high-stakes test for students, since their performance doesn’t have any bearing on whether they go on to the next grade. But STAAR scores are still a high-stakes matter for school districts, since they’re the main factor in accountability ratings. Those scores can affect how parents perceive their school districts or campuses, ultimately influencing their decision about where to enroll their kids.
Given those stakes, Price doesn’t think state education officials have given districts enough information about how the new system works. The district has known the change was coming for about a year, she said, but TEA has given districts only limited details about what it would look like.
Melissa DeSimone, executive director of research, assessment and accountability for the Northwest Independent School District, said she doesn’t have enough data yet to know whether the new scoring system is a cause for concern. So far, TEA has only used the automated engines to score last December’s end-of-course exams. The district has gotten raw scores from that round of testing, she said, but hasn’t yet received students’ responses to test questions. Districts should get those responses sometime in late March, she said. At that point, the district can go through students’ answers and see if they were scored appropriately, she said.
If the district does find discrepancies between the scores that students received and the quality of their responses, officials can request that those tests be reevaluated by a human scorer, DeSimone said. The drawback is that those requests cost the district about $50 each if the scores come back the same, she said. The agency waives that fee if human scorers rate the response differently than the computer did.
District leaders have known that automated scoring was coming since the early part of last year, DeSimone said. The district didn’t adjust any of its test preparation because the automated scoring system is supposed to be based on the same rubric as human scoring, she said.
Fort Worth ISD officials weren’t available for an interview for this story. In an email, Melissa Kelly, the district’s associate superintendent of learning and leading, said there’s “a significant level of uncertainty” around how the new system will work.
So far, the district isn’t planning any major changes in response to the new scoring system, Kelly said. District leaders will stay focused on teaching Texas’ state-mandated standards and wait to see what results come out of the scoring change, she said.
Testing expert says automated scoring is growing
Kurt Geisinger, director of the Buros Center for Testing at the University of Nebraska–Lincoln, said the shift to automated grading shouldn’t be a big cause for concern for local school districts. Automated grading of essays is becoming more common across the country, he said, and for the most part, it’s been implemented without major problems.
A few years ago, Geisinger served as board chairman for the Graduate Record Examinations, an admissions test used for graduate schools across the country. At the time, the testing organization shifted to a hybrid AI-human grading model, where each test would be scored by both a computer and a human, he said. The organization found that the AI program did about as well as the human graders, he said.
Geisinger said one of the admissions exams in use across the country — he wouldn’t say which test — is graded at least in part using AI. The grading program analyzes essays based on about 40 different criteria, he said. But the three factors that end up being most critical to the final score are the length of the essay, the number of paragraphs and the average word length, he said. That means those tests aren’t so much measuring the quality of writing as a few factors that often correlate with good writing, he said.
Using those factors as a proxy for judging the quality of writing has some drawbacks, Geisinger said. If a test-taker uses longer words, it can be a sign of a larger vocabulary, he said. But the awkward use of big words can make for bad writing. If an AI system can’t tell whether the test-taker uses those words correctly, it may struggle to tell good writing from bad, he said.
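A scorer of the kind Geisinger describes can be sketched in a few lines. This toy Python version is purely illustrative: the weights, the 100-point cap, and the punctuation cleanup are invented for the example, not details of any real grading engine, and it deliberately measures only the three proxy features he names:

```python
# Toy stand-in for a proxy-based essay scorer: it rewards essay length,
# paragraph count, and average word length rather than judging writing
# quality directly. All weights are invented for illustration.

def proxy_score(essay: str) -> float:
    words = essay.split()
    if not words:
        return 0.0
    n_words = len(words)
    paragraphs = [p for p in essay.split("\n\n") if p.strip()]
    avg_word_len = sum(len(w.strip(".,;:!?\"'")) for w in words) / n_words
    # Hypothetical weights: more words, more paragraphs, and longer
    # words all raise the score, capped at 100.
    score = 0.05 * n_words + 5.0 * len(paragraphs) + 4.0 * avg_word_len
    return min(score, 100.0)
```

A scorer like this will reliably rank a long, multi-paragraph essay of polysyllabic filler above a short, well-turned paragraph, which is exactly the weakness Geisinger points to.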
Geisinger said some professors are also concerned about whether creativity in writing gets lost in the shift to AI grading, although he said he hasn’t seen any research to validate those concerns.
“I’ve heard English scholars say they wonder how someone like James Joyce would do on an AI-scored (test),” he said.
Related stories from Fort Worth Star-Telegram
Silas Allen is an education reporter focusing on challenges and possible solutions in Fort Worth’s school system. Allen is a graduate of the University of Missouri. Before coming to the Star-Telegram, he covered education and other topics at newspapers in Stillwater and Oklahoma City, Oklahoma. He also served as the news editor of the Dallas Observer, where he wrote about K-12 and higher education. He was born and raised in southeast Missouri.
Opinions expressed by Entrepreneur contributors are their own.
Imagine standing on the brink of the most transformative era in modern history, an era shaped by artificial intelligence, and failing to understand its impact on your business.
Ignoring the tidal wave of AI positions you squarely on the path to professional extinction. It opens the door for competitors to clone your success, for automation to replace your role and for your relevance in your industry to evaporate.
Think this is hyperbole? You haven’t been listening.
Watch this transformative video from bestselling author, Ben Angel, now — it will give you a crash course on the top risks entrepreneurs are walking straight into and the top skills they require to successfully navigate the change that is already underway.
Are you fully utilizing AI to drive your productivity and profits yet?
Unlock the full potential of AI for your business with this ultimate 7-step ChatGPT prompt formula. It’s the key to driving unprecedented productivity and soaring profitability with ease.
Download the free “AI Success Kit” (limited time only). You’ll also get a free chapter from Ben’s brand new book, “The Wolf is at The Door — How to Survive and Thrive in an AI-Driven World.”
Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.
If you haven’t already, it’s important that you learn to leverage the power of ChatGPT, automation, and AI for your business. According to Content at Scale, more than 80 percent of companies have already adopted AI into their operations. If you are ready to join the masses but unsure how, then this affordable e-learning experience is just for you.
This ChatGPT & Automation E-Degree is on sale for just $29.99 (reg. $790). This collection of 12 courses and more than 25 hours of content features lectures and breakdowns on how to apply the power of ChatGPT and other automation tools to real-world scenarios.
These courses are led by instructors from Eduonix Learning Solutions, which has earned a 4/5 star average rating for its high-end tech training experiences.
This bundle has a ton of useful tips and lectures to help you elevate your business and operations with the tools newly available online. For example, it goes in-depth on how to master conversations with ChatGPT, which you could use to streamline industry research, keep your writing shipshape, and automate costly services like SEO consulting and copyediting.
The bundle also features lectures that look at where AI can support and improve the world of data visualization, including breakdowns on how to transform raw data into telling and effective visual narratives.
Businesses that fall behind in sales often don’t keep up with the revolutions in technology and the culture of their market. Since most businesses thrive on efficient and cost-effective communications, learning to master these tools is a must for today’s business folks.
ARTIFICIAL Intelligence has predicted the chilling factors that could spark an apocalyptic World War Three – changing the world as we know it.
ChatGPT revealed eight horrific scenarios that could force world leaders to wage wars across the globe – leading to the deaths of millions.
AI has revealed the eight horrific scenarios that could lead to World War Three. Credit: Getty
Putin’s invasion of Ukraine has put everyone at risk of a third World War. Credit: Getty
Xi Jinping’s territorial disputes in the South China Sea could potentially lead to a massive international conflict. Credit: AFP
An Israeli army tank rolling alongside the border with the Gaza Strip during the Israel-Hamas war. Credit: AFP
The Sun gave ChatGPT a prompt and asked what could possibly lead to a third global war.
The AI – which runs on advanced machine learning – predicted eight different factors which could cause the situation to spiral out of control.
ChatGPT started its predictions by saying: “It’s important to note that this is purely speculative, and real-world events are influenced by numerous unpredictable variables.”
The clever bot added: “The goal should always be to promote peaceful resolutions and international cooperation to prevent the outbreak of global conflicts.
“In reality, the international community generally works towards diplomatic solutions and conflict prevention.”
It then, however, offered a fascinating list of factors that could contribute to the outbreak of a global conflict.
ChatGPT wrote: “In a hypothetical scenario, World War 3 could potentially start due to a combination of complex geopolitical, economic, and social factors.”
TERRITORIAL DISPUTES
The wars in Ukraine and Gaza, as well as the rising tensions in Korea and China are often described as territorial disputes at their core.
Over the years they have become more and more complex – but the fight over land both sides claim is theirs has grown bloody.
ChatGPT believes if these disputes over land continue to escalate it could tip the scales just enough to spark another world war – or even several around the globe.
China’s illegal claim over Taiwan is one such instance that could see sparks fly.
The communist regime could launch a full-blown invasion to absorb Taiwan into the Chinese mainland.
NATIONALISM
According to ChatGPT, the rise of nationalist or populist leaders promoting aggressive foreign policies might contribute to increased tensions and breakdowns in peaceful communication.
Nationalism contributed to the major alliances in the 20th century that played a role in World War One – France and Russia felt insecure about the rise of Germany and teamed up, while the Ottoman Empire felt intimidated by Russia and ended up siding with Germany.
In the current climate, America – the world superpower – feels threatened by the rise of China, fearing it could become the next great power of the world.
This has resulted in hostile relations between the two nations which could, in a worst case scenario, turn into a major war.
Similarly, Russian insecurity around NATO – a military alliance of very powerful countries around the globe – could be dangerous.
In this increasingly hostile situation, America and Europe see themselves as allies working against regimes like North Korea, China and Russia.
ChatGPT argues that populist leaders could undermine international institutions and alliances, weakening the mechanisms in place for diplomatic resolutions and cooperation.
This can result in a lack of effective communication and collaboration, increasing the chances of misunderstandings and conflicts.
FAILED DIPLOMACY
ChatGPT says repeated failures in diplomatic efforts to resolve international conflicts could erode trust between nations and create a hostile environment.
Putin’s illegal invasion of Ukraine occurred despite the West’s diplomatic efforts to stop it.
And Israel’s unprecedented attack against Hamas in Gaza escalated even after repeated efforts of international players trying to tackle the situation diplomatically.
Failed diplomacy in more vulnerable situations that involve greater risks could lead to major cross-border conflicts with millions dying across the globe.
TECHNOLOGICAL ARMS RACE
A race for advanced military tech, such as artificial intelligence, autonomous weapons, or advanced weaponry, could lead to another arms race.
If a country thinks another’s technological advancements pose a potential threat, it could respond by increasing its own military capabilities, inadvertently escalating tensions.
The 20th-century nuclear arms race between the US and the USSR defined the Cold War – and an apocalyptic conflict between the two superpowers almost became a reality.
ChatGPT also believes that fears of falling behind in technology might lead nations to attack each other out of fear.
PROXY WARS
Ongoing regional conflicts where major powers are working as sponsors could escalate into broader wars, pulling more nations into the fray, the AI has warned.
According to ChatGPT, major powers could leverage proxy groups and other sponsors to create major flashpoints for war.
Since the beginning of the Israel-Hamas war, Iran has been fostering terrorism to create conflict in the Middle East through its proxy groups such as Hezbollah and the Houthis.
This forced major world powers such as the UK and the US to join the conflict, increasing fears that the regional situation could turn into an all-out war.
RESOURCE SCARCITY
ChatGPT suggests that intense competition for essential resources, such as water, oil, or rare minerals, could also be very dangerous.
According to the AI chatbot, conflicts over natural resources arise when different nations can’t agree about who they belong to, where they should be sent, how to protect them and how to use them.
The United Nations Environment Program (UNEP) reports that at least 40 per cent of all intrastate conflicts (conflicts within a single country) in the past 60 years have had a link to natural resources.
And the AI believes it could trigger major cross-border conflicts in the future.
ECONOMIC TURMOIL
A severe global economic crisis would also create instability and trigger political unrest, potentially leading to conflicts as nations struggle to secure their interests, the AI bot claims.
The chatbot also argues that a prolonged economic crisis in the era of globalisation could be reason enough for nations to resort to military operations.
According to the World Economic Forum, the next big international economic crisis, such as a global recession, could well spark World War Three.
CYBER WARFARE
ChatGPT believes that escalating cyber attacks could lead to widespread distrust and retaliation among nations, escalating into a full-scale war.
At a time when generative AI can easily be leveraged to create widespread propaganda, cyber warfare remains one of the biggest threats to world peace.
Nations engaged in ongoing conflicts could deploy intense propaganda against a rival state, severely escalating the situation, drawing in allied nations and turning a regional dispute into a major global war.
What makes cyber warfare even more dangerous is that such digital attacks don’t have to be state-sponsored – and even players not involved in conflicts could leverage propaganda warfare to spark tensions between other countries.
The ‘real’ danger of WW3 starting
WITH several ongoing world conflicts, the looming threat of nuclear warfare has sparked fears that WW3 could soon be a reality if we aren’t careful.
OpenAI is launching a new subscription plan for ChatGPT, its viral AI-powered chatbot, aimed at smaller, self-service-oriented teams.
Aptly called ChatGPT Team, the plan provides a dedicated workspace for teams of up to 149 people using ChatGPT, as well as admin tools for team management. All users in a ChatGPT Team gain access to OpenAI’s latest models — GPT-4 (which generates text), GPT-4 with Vision (which understands images in addition to text) and DALL-E 3 (which creates images) — plus tools to allow ChatGPT to analyze, edit and extract info from uploaded files.
ChatGPT Team also lets people within a team build and share GPTs, custom apps based on OpenAI’s text-generating AI models. GPTs don’t require coding experience and can be as simple or complex as desired. For example, a GPT could ingest a company’s proprietary codebases so that developers can check their style or generate code in line with best practices.
As an added benefit, OpenAI says that ChatGPT Team customers will get unspecified new features and improvements down the line, and that it won’t train models on team data or conversations.
ChatGPT Team is priced at $30 per user per month or $25 per user per month billed annually — higher than ChatGPT Plus, OpenAI’s individual premium ChatGPT plan, which costs $20 per month. But ChatGPT Team is a good deal cheaper than ChatGPT Enterprise, which costs as much as $60 per user per month with a minimum of 150 users and a 12-month contract.
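The pricing gap described above is easy to quantify. The sketch below is a hypothetical comparison; the per-user prices come from the article, but the 10-person team size is illustrative.

```python
# Hypothetical cost comparison of the plans mentioned above.
# Prices are per user per month, as reported; team size is illustrative.
def annual_cost(per_user_per_month, users, months=12):
    """Total yearly spend for a team at a given per-seat monthly price."""
    return per_user_per_month * users * months

team_monthly = annual_cost(30, 10)  # ChatGPT Team, billed monthly
team_annual = annual_cost(25, 10)   # ChatGPT Team, billed annually
plus_seats = annual_cost(20, 10)    # ten individual ChatGPT Plus subscriptions

# Annual billing saves $5 per user per month over monthly billing.
print(team_monthly - team_annual)  # → 600
```

For a 10-person team, annual billing saves $600 a year versus monthly billing, while ten individual Plus seats would be cheaper still but lack the shared workspace and admin tools.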
ChatGPT Team seems tailor-made for small- and medium-sized business customers who want team-oriented ChatGPT features without having to pay top dollar for them. That’s likely to be a lucrative space; according to a recent survey from ResumeBuilder, 49% of companies use ChatGPT for use cases like coding, creating content such as job descriptions and interview questions, and summarizing documents and meetings, while 30% say they intend to use ChatGPT in the future.
Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.
Census.gov reports that just 3.8% of businesses say they are using AI to produce goods and services. If you want to take your company to the next level and be in what will eventually be the majority when it comes to utilizing the power of artificial intelligence, it’s time to master ChatGPT.
Learning more about AI doesn’t mean you have to head back to the classroom. In fact, just in time for the new year, you can now learn from the comfort of your couch thanks to The Complete ChatGPT Artificial Intelligence OpenAI Training Bundle. Packed with four informative courses, this bundle can be yours for just $24.97 — less than $7 a class — now through January 14 with no coupon code needed.
Don’t worry if you’re totally new to the world of AI. This bundle kicks off with a straightforward intro taught by ChatGPT trainer Mike Wheeler, ChatGPT for Beginners. This course tackles how to use ChatGPT in just an hour, giving you a glimpse into how to make the software work for you and make your work day smoother.
After you gain a foundation in all things AI, three more courses round out your education. If you’re hoping to use ChatGPT for marketing purposes, you can dive into ChatGPT: Artificial Intelligence (AI) That Writes for You and learn as teacher Alex Genadinik shows you how to enlist ChatGPT to write blog posts, social media captions, and more. Then, the courses Create a ChatGPT A.I. Bot with Tkinter & Python and Create a ChatGPT A.I. Bot with Django & Python take things even further.
Opinions expressed by Entrepreneur contributors are their own.
Artificial Intelligence: What was once a seemingly passing buzz phrase has now become an accepted and enduring technology reality — set to only increase in speed and application.
But, as a society, and especially among those focused on business solutions, there are divides on modes of response. Some feel nervous, even frightened, but certainly concerned about the impact AI could have on their jobs individually and the job market as a whole. Others straddle the fence, unconvinced as yet about its true potential. Then there is the corps of the savvy — those who harness its capabilities and recognize its limitations.
I heartily recommend you take your place in that last group.
I’m lucky to work in an ecosystem of individuals committed to harnessing innovation, one that embraces inventions and improvements and then learns how to redirect workflow energy accordingly. One such individual is Chris Winfield, founder of Understanding AI. His career path has flowed through various markets and verticals but has centered on giving entrepreneurs what he likes to term an “unfair advantage” by leveraging everything from relationships to PR and social media to mentorships. Now, he’s turned his attention to AI and has identified key ways entrepreneurs can gain another unfair advantage by leveraging it.
We sat down and discussed this pathway to more robust 21st-century business and various other related topics, including how to soothe associated anxiety.
According to Winfield, the most glaring mistake entrepreneurs make is avoiding the subject altogether. In a rapidly evolving business landscape, reluctance can leave you stranded while others sail smoothly into new opportunities. “The key,” he said, “is to understand that AI is a tool like any other: its effectiveness depends on how you wield it.”
Early steps
To assess the applicability of AI in a work setting, Winfield typically has clients go through a simple exercise: Figure out what your hourly wage is (everyone has one: how much your company made last year divided by the hours you worked). Then, write down everything you do for one week — tasks, meetings, minutia, calls… everything. Once that list is complete, identify tasks you wouldn’t be doing if you could pay someone else to do them, then ask whether ChatGPT (or any other AI tool) could handle them.
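Winfield’s exercise is, at bottom, simple arithmetic, and it can be sketched in a few lines. Everything here is illustrative: the revenue, hours and task list are made-up inputs, not figures from the article.

```python
# A minimal sketch of the hourly-rate exercise described above.
# Revenue, hours and the task list are illustrative assumptions.
def hourly_rate(annual_revenue, hours_worked):
    """Your implied hourly wage: last year's revenue over hours worked."""
    return annual_revenue / hours_worked

# One week of logged work: (task, hours, could an AI tool plausibly do it?)
week = [
    ("client strategy calls", 5, False),
    ("drafting social captions", 4, True),
    ("summarizing meeting notes", 3, True),
]

rate = hourly_rate(200_000, 2_000)  # implies $100/hour
delegable_hours = sum(hours for _, hours, ai_ok in week if ai_ok)
print(f"Hours worth delegating: {delegable_hours}, "
      f"weekly value: ${delegable_hours * rate:,.0f}")
```

In this made-up example, seven hours a week of delegable work at a $100 implied rate puts a rough $700 weekly price tag on the tasks an AI tool might take over.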
Consider one example: Imagine how an entrepreneur who also manages writing and social media for her company might leverage AI during a typical day. She has a morning video call with an interviewer and applies vidyo.ai, an AI-assisted editing platform that transforms longer-form videos and podcasts into shorter clips suitable for TikTok, YouTube and Instagram. It also generates a snippet-ready video of the call and a transcript of everything discussed. She then engages ContentFries, which chops that video into social media-ready tidbits.
Finally, she makes productive use of the transcript using another AI tool to write blog posts and social media captions. She has done all of this — truly maximizing output and content creation — essentially without lifting a finger.
To be sure, polishing and cleverly market-applying content is still up to you. One common pitfall, Winfield pointed out, is the assumption that AI can work wonders without you, but it can only be as good as the prompts you provide, which require creativity. He recalled mentoring a chiropractor who asked an AI tool to “write good newsletters.” That was it… that was the prompt. The results, not surprisingly, were lackluster. So, Winfield coached him, including imparting the effectiveness of “laddering.”
Generally applied in marketing realms, laddering also works with prompts. Think of it as peeling back an onion — moving from understanding features to values to the emotions that make us tick. We have to do this when using AI to help it understand our foundational and creative needs and the emotional payoff we’re looking for. Once it has all of those inputs, it can create useful and valuable content for consumers.
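The laddering idea above can be made concrete by assembling a prompt in three layers, from features to values to emotion. The sketch below is illustrative only; the chiropractic newsletter scenario and every string in it are hypothetical, not Winfield’s actual coaching material.

```python
# Illustrative only: building a "laddered" prompt that moves from features
# to values to emotions, per the advice above. All strings are hypothetical.
features = "a monthly chiropractic newsletter with practical stretching tips"
values = "readers want to stay pain-free without unnecessary appointments"
emotion = "reassurance that small daily habits protect long-term mobility"

# Each rung of the ladder adds context the model can't infer on its own.
prompt = (
    f"Write {features}. "
    f"Frame each tip around why it matters to the reader: {values}. "
    f"The tone should convey {emotion}."
)
print(prompt)
```

Contrast this with the chiropractor’s original one-liner, “write good newsletters”: the laddered version hands the model the foundational, value-level and emotional inputs it needs before it generates a word.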
Enter ChatGPT, a tool developed by OpenAI that can truly revolutionize our work. In addition to crafting and focusing prompts, you can use it as a brainstorming partner, a search engine and a versatile outline creator. Applied thoughtfully, it can save oceans of research time and busy work, leaving more hours for tasks only your creative brain can handle.
Another quick exercise: Have ChatGPT build a target market persona for your company, which could help you understand the current market better and/or identify one that a unique product or offering could reach. Be specific with your prompts, though: Ask it who may like your product, what competitors for it there might be, and lastly, have it present a demographic that you may not have thought about. Examine critically the resulting information, and don’t be afraid to make mistakes (or recognize AI’s mistakes) along the way.
As artificial intelligence continues to evolve, new tools are seemingly released daily, and it’s easy to get caught up in the excitement and lose focus. So, Winfield advises that entrepreneurs designate AI tools for work-related tasks only (avoiding using them for unrelated matters during office hours) to prevent time-wasting distractions.
The better you get at leveraging AI, the easier it will be to identify which tools you should and can integrate. Handled strategically and capably, you’ll find that productivity increases, creativity expands, and time is created for the kind of multidimensional thinking that really helps move the success needle.
Opinions expressed by Entrepreneur contributors are their own.
In the fast-paced world of scaling service-based businesses, the dilemma of delivery versus marketing efforts is a constant challenge, especially for those who are at the point of maxing out on their client capacity.
The need for brand relevance, consistency and awareness can feel overwhelming when a lot of your time is dedicated to delivering a transformational experience for your clients. That sparked the idea for my team and me to dive in on testing relevant artificial intelligence tools to help free up time spent on our marketing strategies.
Here’s everything that worked — and what didn’t — in the last 12 months, using my coaching business as a guinea pig.
Does AI diminish the human touch or authenticity of our message?
One common fear is that AI will replace authenticity in marketing. A helpful reframe here may be to see this tool as an additional instrument in the “orchestration” of your brand.
Picture this: Just as a conductor directs the musical performance or symphony, AI can enhance your song without overshadowing your unique voice and the intention of what you want to convey in the song.
The song I’m referring to is your brand message. AI is just another instrument that can be added to your marketing to spread awareness for your brand. As long as you are clear on your ideal clients, offer and core message — then the playground is yours to integrate AI creatively into your marketing.
The biggest lesson I learned in the last 12 months is that the machine output will only be as good as the user’s input. You are still the storyteller, the architect and the soul of the intellectual message.
The 3 biggest benefits of AI
1. The client-driven brand message
One key benefit is streamlining note-taking, so we no longer have to rely solely on our brains to do all the recalling, especially for client calls, events, workshops and the like.
Tools like Fathom on Zoom have allowed me to take detailed notes during calls without compromising my presence and focus as a coach. The ability to condense and generate information, summaries and insights at lightning speed is a game-changer.
From a marketing perspective, all the AI-streamlined notes can help enhance the clarity and specificity of our message, because now we can use real-time data (a.k.a. words sourced from our existing or potential clients) to craft the most effective message to attract even more of our people. For streamlining call notes and summarizing data, consider trying Fathom AI or Otter AI.
2. Making your content more dynamic and attention-grabbing
In today’s social media landscape, capturing the attention of potential clients is more challenging than ever, as you may already be aware. Compelling sales content, email headlines and hooks that stand out are now essential.
That’s exactly what I loved using AI for in my business. AI can work around the clock (unlike most of us who need eight to nine hours of sleep) to help you craft different versions of attention-grabbing copy that resonate with your audience.
Recently, I needed help writing the landing page copy for my new workshop event, so I went to ChatGPT. I provided the context of this event, what I was trying to teach, ideal clients that I wanted to reach and allowed AI to do the rest of the magic.
The entire process took less than 30 minutes from start to finish and resulted in a 40% conversion rate for sign-ups. This would have taken me at least two hours in the past.
This type of workflow works best if you are clear on the idea and just want support in putting structure around it. Always trust your intuition to decide what feels right, and edit as you go to ensure the result still fits your brand voice. Your intuition (the human touch) is the starter and the finisher; AI just speeds up the process in between. For this use case, check out ChatGPT or Google Bard.
3. Being everywhere at the same time (omnichannel efficiency)
As consumers’ content preferences evolve across listening, watching and reading, AI’s repurposing functionality offers endless possibilities for business owners to streamline the process of reaching more people on various platforms.
If you are someone who prefers to create content by “saying it out loud,” you can use a tool called Oasis AI to help you bring the audio into various formats of social content.
If you are someone who prefers to film content in a video setting, you can use a tool called Descript AI to help you add text-to-captions and cut into short-form videos to distribute to channels such as YouTube Shorts, Instagram Reels or TikTok.
No longer do we need to spend countless hours editing or repurposing manually to be everywhere at once. All you need is one piece of core content; let the machine do the heavy lifting and generate 10x more from that original piece.
A helpful reminder: artificial intelligence cannot aggregate information that doesn’t exist online yet. We are still the creators and innovators.
Humans live, humans experience and humans connect — robots cannot do that. Humans will be the facilitators and the conductors of the machine. The question comes down to this: Are you willing to learn to become the best facilitator to help your business expand forward?
In the hands of someone curious, open-minded and creative, AI makes the marketing output significantly easier and faster. Welcome to the next era of marketing.
Each month, the Bank Automation News editorial team does a deep dive into topics relevant to the banking industry and, this year, AI dominated the headlines. From AI-driven credit decisioning to implementation and generative AI, FI executives shared how the technology is changing the industry.
Following are the editors’ favorite AI features of 2023:
Buy-versus-build is a common question when discussing AI within financial institutions. FIs including Barclays, Citi, Deutsche Bank, HSBC and JPMorgan weighed in on how they are investing in the technology — most taking a hybrid approach on buying in-house and selecting third-party vendors to provide the technology.
Financial institutions this year looked to AI and automation to speed up processes and add efficiencies. However, when implementing AI-driven decisioning, FIs needed to consider biases and compliance. That’s where large language models, explainability and data training are needed.
The finance industry explored generative AI technology such as implementing it within customer support, fraud detection, natural language processing and language translation. Rather than just tapping ChatGPT, FIs have since rolled out their own generative AI technologies for internal and client-facing chatbots.
Generative AI uses in finance will continue to surface in 2024 as the technology adapts and FIs invest in it.
Get ready for the Bank Automation Summit U.S. 2024 in Nashville on March 18-19! Discover the latest advancements in AI and automation in banking. Register now.
OpenAI is in early discussions to raise a fresh round of funding at a valuation at or above $100 billion, people with knowledge of the matter said, a deal that would cement the ChatGPT maker as one of the world’s most valuable startups.
Investors potentially involved in the fundraising round have been included in preliminary discussions, according to the people, who asked not to be identified to discuss private matters. Details like the terms, valuation and timing of the funding round haven’t yet been finalized and could still change, the people said.
The company is set to complete a separate tender offer in early January, which would allow employees to sell their shares at a valuation of $86 billion, Bloomberg previously reported. That is being led by Thrive Capital and saw more demand from investors than there was availability, people familiar with the matter have said.
OpenAI’s rocketing valuation mirrors the AI frenzy it kicked off one year ago after releasing ChatGPT, a chatbot capable of composing eerily human sentences and even poetry in response to simple prompts. The company became Silicon Valley’s hottest startup, raising $13 billion to date from Microsoft Corp., and spurred a new appreciation for the promise of AI that changed the tech industry landscape within a few months.
OpenAI has also held discussions to raise funding for a new chip venture with Abu Dhabi-based G42, according to people with knowledge of the matter.
The startup has discussed raising between $8 billion and $10 billion from G42, said one of the people, all of whom requested anonymity to discuss confidential information. It’s unclear whether the chips venture and wider company funding efforts are related.
OpenAI Chief Executive Officer Sam Altman had been seeking capital for the chipmaking project, code-named Tigris. The goal is to produce semiconductors that can compete with those from Nvidia, which currently dominates the AI chip market, Bloomberg News reported last month.
In October, G42 announced a partnership with OpenAI “to deliver cutting-edge AI solutions to the UAE and regional markets.” No financial details were provided. The firm, founded in 2018, is led by Sheikh Tahnoon bin Zayed Al Nahyan, the UAE’s national security adviser and chair of the Abu Dhabi Investment Authority.
OpenAI’s future looked briefly uncertain after its board suddenly fired Altman earlier last month. At the time, some investors considered writing their stakes down to zero. But after five days of leadership tumult, Altman was brought back and a new board was named. The company has aimed to signal to customers that it’s refocusing on its products following the upheaval.
— With assistance from Hannah Miller
Gillian Tan, Edward Ludlow, Shirin Ghaffary, Bloomberg