Millennials and Gen Z have witnessed technological advancements their great-grandparents would have seen as science fiction, from the birth of the iPhone to the rollout of self-driving cars. But Gen A’s story might be even more Asimov-esque.

Bank of America Research analysts, led by equity strategist Martyn Briggs, explained Thursday that the rise of A.I. is “an iPhone moment, on steroids” that will reshape not only the business world and the global economy over the coming decades, but also an entire generation.

Gen A, generally defined as those born between 2012 and the mid-2020s, will grow up in a world where “the norm” will be A.I. assistants that learn and grow alongside children, gradually tailoring themselves to each child’s specific needs.

“We are at the dawn of a demographic ‘Gen A’ revolution,” Briggs and his team wrote in a Thursday note summarizing the key takeaways from 14 recent A.I. expert events. “While Gen Z is the most disruptive generation now as the first generation to be born into an online world; kids of today will have AI models that grow with them.” 

The analysts cited Timothy Papandreou, an advisor to Alphabet’s research and development organization, X (formerly Google X), who explained at a recent event that A.I. will lead to a transition from a generation of programmers to a generation of “perfect prompters” as kids learn to use generative A.I. “assistants” throughout their lives. 

Gen A won’t need programming skills to use their A.I. models; instead, they will focus on prompting these systems properly with simple text to get the desired outcome, whether that’s finding information about Kafka for a school paper or writing an email at work.

“Children will now have an A.I. avatar shadow assistant or agent from birth. As they grow A.I. will grow with them and know everything it needs to know, and always be there as a mentor,” Papandreou said, arguing that Gen Z will be the “last generation to not grow up with AI.”

As Fortune previously reported, Gen Z has grappled with anxiety and with finding meaning in its role as the first generation raised on the internet and social media. Now it’s Gen A’s turn to deal with the impact of rapid technological advancement, and their A.I. usage will likely bring with it a whole new set of unforeseen mental health problems.

A new world for businesses and regulators

Bank of America’s analysts detailed how Gen A’s use of A.I. will ultimately reshape the business world and require serious regulation from governments to prevent worst-case scenarios. They argued that younger generations are already leaning into A.I., noting that “in the case of ChatGPT and Bard, younger generations are more aware of them and see them in a more favourable light.”

In sum, the rise of the first A.I.-fluent generation, Gen A, means businesses will need to transition rapidly to using the technology, according to the experts BofA interviewed across 14 separate A.I. events over the past year.

“There will be two kinds of companies at the end of this decade: those that are fully utilizing AI, and those that are out of business,” Peter Diamandis, a futurist and the co-founder of Singularity University, bluntly told the investment bank.

For governments, the risks of A.I. are significant as Gen A comes of age. Experts have repeatedly warned about the potential impact of A.I. on the labor market, but BofA’s analysts are less concerned about job losses and more worried about the rise of deepfakes, propaganda, and copyright and intellectual property issues. They said there will need to be “AI Wranglers”—human programmers and regulators who oversee A.I. models—to help counteract these negative effects in the future.

“A.I. regulation is on the horizon,” they wrote. “To address the problems of propaganda and fake news, we need regulation, especially principles, standards and guardrails.”

BofA also spoke with Nell Watson, an A.I. ethicist and the president of the European Responsible Artificial Intelligence Office, about how future regulations might look. She said a global framework for regulating A.I. is “unlikely,” but different strategies to prevent worst-case scenarios are rolling out now.

“Each region is targeting a ‘Goldilocks level’ of just right intervention but taking different approaches, from surveillance and control (China), national security/hardware restrictions (US), to privacy laws (EU),” she explained. 

Watson said that “company A.I. ethics and self-regulation are insufficient and uneven thus far,” but the new technology shouldn’t be overregulated due to “moral panic” either.

“We should do just enough and not more, to have enough safety to prevent catastrophe and prevent unfortunate externalities, but not shut down A.I. altogether,” she argued.

Will Daniel
