ReportWire

Tag: openai

  • Italy blocks ChatGPT over privacy concerns | CNN Business



    London (CNN) —

    Regulators in Italy issued a temporary ban on ChatGPT Friday, effective immediately, due to privacy concerns and said they had opened an investigation into how OpenAI, the US company behind the popular chatbot, uses data.

    Italy’s data protection agency said users lacked information about the collection of their data and that a breach at ChatGPT had been reported on March 20.

    “There appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” the agency said.

    The Italian regulator also expressed concerns over the lack of age verification for ChatGPT users. It argued that this “exposes children to receiving responses that are absolutely inappropriate to their age and awareness.” The platform is supposed to be for users older than 13, it noted.

    The data protection agency said OpenAI would be barred from processing the data of Italian users until it “respects the privacy regulation.”

    OpenAI has been given 20 days to communicate the measures it will take to comply with Italy’s data rules. Otherwise, it could face a penalty of up to €20 million ($21.8 million), or up to 4% of its annual global turnover.

    Since its public release four months ago, ChatGPT has become a global phenomenon, amassing millions of users impressed with its ability to craft convincing written content, including academic essays, business plans and short stories.

    But concerns have also emerged about its rapid spread and what large-scale uptake of such tools could mean for society, putting pressure on regulators around the world to act.

    The European Union is finalizing rules on the use of artificial intelligence in the bloc. In the meantime, EU companies must comply with the General Data Protection Regulation, or GDPR, as well as the Digital Services Act and Digital Markets Act, which apply to tech platforms.

    Meanwhile, so-called “generative AI” tools available to the public are proliferating.

    Earlier this month, OpenAI released GPT-4, a new version of the technology underpinning ChatGPT that is even more powerful. The company said the updated technology passed a simulated law school bar exam with a score around the top 10% of test takers; by contrast, the prior version, GPT-3.5, scored around the bottom 10%.

    This week, some of the biggest names in tech, including Elon Musk, called for AI labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    — Julia Horowitz contributed reporting.


  • TV and film writers are fighting to save their jobs from AI. They won’t be the last | CNN Business




    (CNN) —

    By any standard, John August is a successful screenwriter. He’s written such films as “Big Fish,” “Charlie’s Angels” and “Go.” But even he is concerned about the impact AI could have on his work.

    A powerful new crop of AI tools, trained on vast troves of data online, can now generate essays, song lyrics and other written work in response to user prompts. While there are clearly limits for how well AI tools can produce compelling creative stories, these tools are only getting more advanced, putting writers like August on guard.

    “Screenwriters are concerned about our scripts being the feeder material that is going into these systems to generate other scripts, treatments, and write story ideas,” August, a Writers Guild of America (WGA) committee member, told CNN. “The work that we do can’t be replaced by these systems.”

    August is one of the more than 11,000 members of the WGA who went on strike Tuesday morning, bringing an immediate halt to the production of some television shows and possibly delaying the start of new seasons of others later this year.

    The WGA is demanding a host of changes from the Alliance of Motion Picture and Television Producers (AMPTP), from increased pay to clear guidelines around working with streaming services. As part of those demands, the WGA is also fighting to protect its members’ livelihoods from AI.

    In a proposal published on WGA’s website this week, the labor union said AI should be regulated so it “can’t write or rewrite literary material, can’t be used as source material” and that writers’ work “can’t be used to train AI.”

    August said the AI demand “was one of the last things” added to the WGA list, but that it’s “clearly an issue writers are concerned about” and needs to be addressed now rather than when their contract is up again in three years. By then, he said, “it may be too late.”

    WGA said the proposal was rejected by AMPTP, which countered by offering annual meetings to discuss advancements in the technology. August said AMPTP’s response shows they want to keep their options open.

    In a document sent to CNN responding to some of WGA’s asks, AMPTP said it values the work of creatives and “the best stories are original, insightful and often come from people’s own experiences.”

    “AI raises hard, important creative and legal questions for everyone,” it wrote. “Writers want to be able to use this technology as part of their creative process, without changing how credits are determined, which is complicated given AI material can’t be copyrighted. So it’s something that requires a lot more discussion, which we’ve committed to doing.”

    It added that the current WGA agreement defines a “writer” as a “person,” and said “AI-generated material would not be eligible for writing credit.”

    The writers’ attempt at bargaining over AI is perhaps the most high-profile labor battle yet to address concerns about the cutting-edge technology that has captivated the world’s attention in the six months since the public release of ChatGPT.

    Goldman Sachs economists estimate that as many as 300 million full-time jobs globally could be automated in some way by the newest wave of AI. White-collar workers, including those in administrative and legal roles, are expected to be the most affected. And the impact may hit sooner than some think: IBM’s CEO recently suggested AI could eliminate the need for thousands of jobs at his company alone in the next five years.

    David Gunkel, a professor in the department of communication at Northern Illinois University who tracks AI in media and entertainment, said screenwriters want clear guidelines around AI because “they can see the writing on the wall.”

    “AI is already displacing human labor in many other areas of content creation—copywriting, journalism, SEO writing, and so on,” he said. “The WGA is simply trying to get out-in-front of and to protect their members against … ‘technological unemployment.’”

    While film and TV writers in Hollywood may currently be leading the charge, professionals in other industries will almost certainly be paying attention.

    “There’s certainly other industries that need to be paying close attention to this space,” said Rowan Curran, an analyst at Forrester Research who focuses on AI. He noted that digital artists, musicians, engineers, real estate professionals and customer service workers will all feel the impact of generative AI.

    “Watch this #WGA strike carefully,” Justine Bateman, a writer, director and former actress, wrote in a tweet shortly after the strike kicked off. “Understand that our fight is the same fight that is coming to your professional sector next: it’s the devaluing of human effort, skill, and talent in favor of automation and profits.”

    AI has had a place in Hollywood for years. In Marvel’s 2018 film “Avengers: Infinity War,” the face of Thanos – a character played by actor Josh Brolin – was created in part with the technology.

    Crowd and battle scenes in films including the “Lord of the Rings” trilogy and “The Meg” have utilized AI, and the most recent Indiana Jones film used it to make Harrison Ford’s character appear younger. It has also been used for color correction, for finding footage more quickly during post-production and for making improvements such as removing scratches and dust from footage.

    But AI in screenwriting is in its infancy. In March, a “South Park” episode called “Deep Learning” was co-written by ChatGPT, and the tool featured prominently in the plot (the characters use ChatGPT to talk to girls and write school papers).

    August said writers are largely willing to play ball with such tools, as long as they’re used as launching pads or for research and writers are still credited and involved throughout the production process.

    “Screenwriters are not Luddites, and we’ve been quick to use new technologies to help us tell our stories,” August said. “We went from typewriters to word processors happily and it increased productivity. … But we don’t need a magical typewriter that types scripts all by itself.”

    Because large language models are trained on text that humans have already written, finding patterns in words and sentences to create responses to prompts, concerns around intellectual property exist, too. “It is entirely possible for a [chatbot] to generate a script in the style of a particular kind of filmmaker or scriptwriter without prior consent of the original artist or the Hollywood studio that holds the IP for that material,” Gunkel said.

    For example, one could prompt ChatGPT to generate a zombie apocalypse drama in the style of David Mamet. “Who should get credited for that?” August said. “What happens if we allow a producer or studio executive to come up with a treatment or pitch or something that looks like a screenplay that no writer has touched?”

    For now, the legal landscape remains very much unsettled on the matter, with regulations lagging behind the rapid pace of AI development. In early April, the Biden administration said it is seeking public comments on how to hold artificial intelligence systems like ChatGPT accountable.

    “We can’t protect studios from their own bad choices,” August said. “We can only protect writers from abuses.”

    The strike, and the demands around AI specifically, come at a time when both the writers and the studios are feeling financial pain.

    Many of the businesses represented by AMPTP have seen drops in their stock price, prompting deep cost cutting, including layoffs. The need to manage costs, combined with addressing the fallout from the strike, might only make the companies feel more pressure to turn to AI for scriptwriting.

    “In the short term, this could be an effective way to circumvent the WGA strike, mainly because [large language models], which are considered property and not personnel, can be employed for this task without violating the picket line,” Gunkel said. Such an “experiment” could also show production studios whether it’s possible “to get by with less humans involved,” he said.

    But Joshua Glick, a visiting professor of film and electronic arts at Bard College, believes such a move would be ill-advised.

    “It would be a pretty aggressive and antagonistic move for studios to move forward with AI-generated scripts in terms of getting writers to come to the negotiating table because AI is such a crucial sticking point in the negotiations,” said Glick, who also co-created Deepfake: Unstable Evidence on Screen, an exhibition at the Museum of the Moving Image in New York.

    “At the same time, I think the result of those scripts would be pretty mediocre at best,” he said.

    However the studios react, the issue is unlikely to go away in Hollywood. Film and TV actors’ contracts are up in June, and many are worried about how their faces, bodies and voices will be impacted by AI, August said.

    “As writers, we don’t want tools to replace us but actors have the same concerns with AI, as do directors, editors and everyone else who does creative work in this industry,” he added.


  • Chinese police detain man for allegedly using ChatGPT to spread rumors online | CNN Business



    Hong Kong (CNN) —

    Police in China have detained a man they say used ChatGPT to create fake news and spread it online, in what state media has called the country’s first criminal case related to the AI chatbot.

    According to a statement from police in the northwest province of Gansu, the suspect allegedly used ChatGPT to generate a bogus report about a train crash, which he then posted online for profit. The article received about 15,000 views, the police said in Sunday’s statement.

    ChatGPT, developed by Microsoft (MSFT)-backed OpenAI, is banned in China, though internet users can use virtual private networks (VPNs) to access it.

    Train crashes have been a sensitive issue in China since 2011, when authorities faced pressure to explain why state media had failed to provide timely updates on a bullet train collision in the city of Wenzhou that resulted in 40 deaths.

    Gansu authorities said the suspect, surnamed Hong, was questioned in the city of Dongguan in southern Guangdong province on May 5.

    “Hong used modern technology to fabricate false information, spreading it on the internet, which was widely disseminated,” the Gansu police said in the statement.

    “His behavior amounted to picking quarrels and provoking trouble,” they added, explaining the offense that Hong was accused of committing.

    Police said the arrest was the first in Gansu since China’s Cyberspace Administration enacted new regulations in January to rein in the use of deepfakes. State broadcaster CGTN said it was the country’s first arrest of a person accused of using ChatGPT to fabricate and spread fake news.

    Formally known as deep synthesis, deepfakes are highly realistic textual and visual content generated by artificial intelligence.

    The new legislation bars users from generating deepfake content on topics already prohibited by existing laws on China’s heavily censored internet. It also outlines takedown procedures for content considered false or harmful.

    The arrest also came amid a 100-day campaign launched by the internet branch of the Ministry of Public Security in March to crack down on the spread of internet rumors.

    Since the beginning of the year, Chinese internet giants such as Baidu (BIDU) and Alibaba (BABA) have sought to catch up with OpenAI, launching their own versions of the ChatGPT service.

    Baidu unveiled “Wenxin Yiyan,” or “ERNIE Bot,” in March. Two months later, Alibaba launched “Tongyi Qianwen,” which roughly translates as “seeking truth by asking a thousand questions.”

    In draft guidelines issued last month to solicit public feedback, China’s cyberspace regulator said generative AI services would be required to undergo security reviews before they can operate.

    Service providers will also be required to verify users’ real identities and provide details about the scale and type of data they use, their basic algorithms and other technical information.


  • OpenAI, maker of ChatGPT, hit with proposed class action lawsuit alleging it stole people’s data | CNN Business




    (CNN) —

    OpenAI, the company behind the viral ChatGPT tool, has been hit with a lawsuit alleging it stole and misappropriated vast swaths of people’s data from the internet to train its AI tools.

    The proposed class action lawsuit, filed Wednesday in a California federal court, claims that OpenAI secretly scraped “massive amounts of personal data from the internet.” The nearly 160-page complaint alleges that this personal data, including “essentially every piece of data exchanged on the internet it could take,” was seized by the company without notice, consent or “just compensation.”

    Moreover, this data scraping occurred at an “unprecedented scale,” the suit claims.

    OpenAI did not immediately respond to CNN’s request for comment Wednesday. Microsoft, a major investor in OpenAI, was also named as a defendant in the suit and did not immediately respond to a request for comment.

    “By collecting previously obscure personal data of millions and misappropriating it to develop a volatile, untested technology, OpenAI put everyone in a zone of risk that is incalculable – but unacceptable by any measure of responsible data protection and use,” Timothy K. Giordano, a partner at Clarkson, the law firm behind the suit, said in a statement to CNN Wednesday.

    The complaint also claims that OpenAI products “use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”

    The lawsuit seeks injunctive relief in the form of a temporary freeze on further commercial use of OpenAI’s products. It also seeks payments of “data dividends” as financial compensation to people whose information was used to develop and train OpenAI’s tools.

    OpenAI publicly launched ChatGPT late last year, and the tool immediately went viral for its ability to generate compelling, human-sounding responses to user prompts. The success of ChatGPT spurred an apparent AI arms race in the tech world, as companies big and small are now racing to develop and deploy AI tools into as many products as possible.


  • OpenAI CEO Sam Altman to testify before Congress | CNN Business



    Washington (CNN) —

    OpenAI CEO Sam Altman will testify before Congress next Tuesday as lawmakers increasingly scrutinize the risks and benefits of artificial intelligence, according to a Senate Judiciary subcommittee.

    During Tuesday’s hearing, lawmakers will question Altman for the first time since OpenAI’s chatbot, ChatGPT, took the world by storm late last year.

    The groundbreaking generative AI tool has led to a wave of new investment in AI, prompting a scramble among US policymakers who have called for guardrails and regulation amid fears of AI’s misuse.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”


  • ‘Serious concerns’: Top companies raise alarm over Europe’s proposed AI law | CNN Business



    Dortmund, Germany (CNN) —

    Dozens of Europe’s top business leaders have pushed back on the European Union’s proposed legislation on artificial intelligence, warning that it could hurt the bloc’s competitiveness and spur an exodus of investment.

    In an open letter sent to EU lawmakers Friday, C-suite executives from companies including Siemens (SIEGY), Carrefour (CRERF), Renault (RNLSY) and Airbus (EADSF) raised “serious concerns” about the EU AI Act, the world’s first comprehensive AI rules.

    Other prominent signatories include big names in tech, such as Yann LeCun, chief AI scientist of Meta (FB), and Hermann Hauser, founder of British chipmaker ARM.

    “In our assessment, the draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” the group of more than 160 executives said in the letter.

    They argue that the draft rules go too far, especially in regulating generative AI and foundation models, the technology behind popular platforms such as ChatGPT.

    Since the craze over generative AI began this year, technologists have warned of the potential dark side of systems that allow people to use machines to write college essays, take academic tests and build websites. Last month, hundreds of top experts warned about the risk of human extinction from AI, saying mitigating that possibility “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The EU proposal applies a broad brush to such software “regardless of [its] use cases,” and could push innovative companies and investors out of Europe because they would face high compliance costs and “disproportionate liability risks,” according to the executives.

    “Such regulation could lead to highly innovative companies moving their activities abroad” and investors withdrawing their capital from European AI, the group wrote.

    “The result would be a critical productivity gap between the two sides of the Atlantic.”

    The executives are calling for policymakers to revise the terms of the bill, which was agreed upon by European Parliament lawmakers earlier this month and is now being negotiated with EU member states.

    “In a context where we know very little about the real risks, the business model, or the applications of generative AI, European law should confine itself to stating broad principles in a risk-based approach,” the group wrote.

    The business leaders called for a regulatory board of experts to oversee these principles and ensure they can be continuously adapted to changes in the fast-moving technology.

    The group also urged lawmakers to work with their US counterparts, noting that regulatory proposals had also been made in the United States. EU lawmakers should try to “create a legally binding level playing field,” the executives wrote.

    If such action isn’t taken and Europe is constrained by regulatory demands, it could hurt the region’s international standing, the group suggested.

    “Like the invention of the Internet or the breakthrough of silicon chips, generative AI is the kind of technology that will be decisive for the performance capacity and therefore the significance of different regions,” it said.

    Tech experts have increasingly called for greater regulation of AI as it becomes more widely used. In recent months, the United States and China have also laid out plans to regulate the technology. Sam Altman, CEO of ChatGPT maker OpenAI, has used high-profile trips around the world in recent weeks to call for coordinated international regulation of AI.

    The EU rules are the world’s “first ever attempt to enact” legally binding rules that apply to different areas of AI, according to the European Parliament.

    Negotiators of the AI Act hope to reach an agreement before the end of the year, and once the final rules are adopted by the European Parliament and EU member states, the act will become law.

    As they stand now, the rules would ban AI systems deemed to be harmful, including real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those in China.

    The Act also outlines transparency requirements for AI systems. For instance, systems such as ChatGPT would have to disclose that their content was AI-generated and provide safeguards against the generation of illegal content.

    Engaging in prohibited AI practices could lead to hefty fines: up to €40 million ($43 million) or an amount equal to up to 7% of a company’s worldwide annual turnover, whichever is higher.

    But penalties would be “proportionate” and consider the market position of small-scale providers, suggesting there could be some leniency for startups.

    Not everyone has pushed back on the legislation so far. Earlier this month, Digital Europe, a trade association that counts SAP (SAP) and Ericsson (ERIC) among its members, called the rules “a text we can work with.”

    “However, there remain some areas which can be improved to ensure Europe becomes a competitive hub for AI innovation,” the group said in a statement.

    Dragos Tudorache, a Romanian member of the European Parliament who led the bill’s drafting, said he was convinced that those who signed the new letter “have not read the text but have rather reacted on the stimulus of a few.”

    “The only concrete suggestions made are in fact what the [draft] text now contains: an industry-led process for defining standards, governance with industry at the table, and a light regulatory regime that asks for transparency. Nothing else,” he said in a statement.

    “It is a pity that the aggressive lobby of a few is capturing other serious companies in the net, which unfortunately undermines the undeniable lead that Europe has taken.”

    Brando Benifei, an Italian member of the European Parliament who also led the drafting of the legislation, told CNN “we will listen to all concerns and stakeholders when dealing with AI regulation, but we have a firm commitment to deliver clear and enforceable rules.”

    “Our work could positively affect the global conversation and direction when dealing with artificial intelligence and its impact on fundamental rights, without hindering the necessary pursuit of innovation,” he said.


  • Alibaba unveils its ChatGPT-style service | CNN Business



    Hong Kong (CNN) —

    Alibaba showed off its answer to the ChatGPT craze on Tuesday, demonstrating new software that it plans to eventually roll out across all its platforms.

    The Chinese tech giant unveiled Tongyi Qianwen, a large language model that will be embedded in its Tmall Genie smart speakers and workplace messaging platform DingTalk. It was trained on vast troves of data in order to generate compelling responses to users’ prompts.

    The technology will initially be integrated into those two products and eventually added to all Alibaba (BABA) applications, from e-commerce to mapping services, according to the company.

    Group CEO Daniel Zhang, who also oversees Alibaba’s cloud division, presented the new AI-powered service at a conference in Beijing, where the company demonstrated how it will allow users to transcribe meeting notes, craft business pitches and tell children’s stories.

    The company has opened up Tongyi Qianwen — which roughly translates as “seeking truth by asking a thousand questions” — to enterprise customers for testing before making it available to more users.

    “We are at a technological watershed moment, driven by generative AI and cloud computing,” Zhang said.

    Generative AI refers to the technology that underpins platforms like ChatGPT. The service has exploded in popularity in recent months, and Chinese tech companies have been racing to release their own versions, prompting some critics to predict that the trend will add fuel to an existing US-China rivalry in emerging technologies.

    Alibaba, which has a large cloud computing business, will also allow clients of that division to use the new technology to build their own customized large language models, the firm said in a statement.

    The debut comes after that of Baidu (BIDU), which launched its own ChatGPT-style service last month. During a similar presentation, Baidu showed how its chatbot, called ERNIE, could generate a company newsletter, come up with a corporate slogan and solve a math riddle.

    On Monday, SenseTime, one of China’s most prominent AI companies, launched a suite of new services, including a chatbot called SenseChat.

    China will be setting rules to govern the operation of such services. In draft guidelines issued Tuesday to solicit public feedback, the country’s cyberspace regulator said generative AI services would be required to undergo security reviews before they can operate.

    Service providers will also be required to verify users’ real identities. In addition, they must provide information about the scale and type of data they use, their basic algorithms and other technical information.

    Alibaba’s shares in Hong Kong ticked up 1.6% following its demonstration.

    The company announced last month that it planned to split its business into six units. Most of those units, including its cloud services business that oversees AI projects, will be authorized to raise capital and pursue public listings.

    — Juliana Liu contributed to this report.


  • The man behind ChatGPT is about to have his moment on Capitol Hill | CNN Business



    New York (CNN) —

    For a few months in 2017, there were rumors that Sam Altman was planning to run for governor of California. Instead, he kept his day job as one of Silicon Valley’s most influential investors and entrepreneurs.

    But now, Altman is about to make a different kind of political debut.

    Altman, the CEO and co-founder of OpenAI, the artificial intelligence company behind viral chatbot ChatGPT and image generator DALL-E, is set to testify before Congress on Tuesday. His appearance is part of a Senate subcommittee hearing on the risks artificial intelligence poses for society, and what safeguards are needed for the technology.

    House lawmakers on both sides of the aisle are also expected to hold a dinner with Altman on Monday night, according to multiple reports. Dozens of lawmakers are said to be planning to attend, with one Republican lawmaker describing it as part of the process for Congress to assess “the extraordinary potential and unprecedented threat that artificial intelligence presents to humanity.”

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    The hearing and meetings come as ChatGPT has sparked a new arms race over AI. A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts. This week’s hearing may only cement his stature as a central player in AI’s rapid growth – and also add to scrutiny of him and his company.

    Those who know Altman have described him as a brilliant thinker, someone who makes prescient bets and has even been called “a startup Yoda.” In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    “If anyone knows where this is going, it’s Sam,” Brian Chesky, the CEO of Airbnb, wrote in a post about Altman for the latter’s inclusion this year on Time’s list of the 100 most influential people. “But Sam also knows that he doesn’t have all the answers. He often says, ‘What do you think? Maybe I’m wrong?’ Thank God someone with so much power has so much humility.”

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    OpenAI declined to make anyone available for an interview for this story.

    The success of ChatGPT may have brought Altman greater public attention, but he has been a well-known figure in Silicon Valley for years.

    Prior to cofounding OpenAI with Musk in 2015, Altman, a Missouri native, studied computer science at Stanford University, only to drop out to launch Loopt, an app that helped users share their locations with friends and get coupons for nearby businesses.

    In 2005, Loopt was part of the first batch of companies at Y Combinator, a prestigious tech accelerator. Paul Graham, who co-founded Y Combinator, later described Altman as “a very unusual guy.”

    “Within about three minutes of meeting him, I remember thinking ‘Ah, so this is what Bill Gates must have been like when he was 19,’” Graham wrote in a post in 2006.

    Loopt was acquired in 2012 for about $43 million. Two years later, Altman took over from Graham as president of Y Combinator. The position connected Altman with numerous powerful figures in the tech industry. He remained at the helm of the accelerator until 2019.

    Margaret O’Mara, a tech historian and professor at the University of Washington, told CNN that Altman “has long been admired as a thoughtful, significant guy and in the remarkably small number of powerful people who are kind of at the top of tech and have a lot of sway.”

    During the Trump administration, Altman gained new attention as a vocal critic of the president. It was against that backdrop that he was rumored to be considering a run for California governor.

    Rather than running, however, Altman looked to back candidates who aligned with his values, which include lowering the cost of living, promoting clean energy and redirecting 10% of the defense budget to research and development of future technologies.

    Altman continues to push for some of these goals through his work in the private sector. He invested in Helion, a fusion research company that inked a deal with Microsoft last week to sell clean energy to the tech giant by 2028.

    Altman has also been a proponent of the idea of a universal basic income and has suggested that AI could one day help fulfill that goal by generating so much wealth it could be redistributed back to the public.

    As Graham told The New Yorker about Altman in 2016, “I think his goal is to make the whole future.”

    When Musk and Altman launched OpenAI, their original mission was to get ahead of the fear that AI could harm people and society.

    “We discussed what is the best thing we can do to ensure the future is good?” Musk told the New York Times about a conversation with Altman and others before launching the company. “We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing A.I. in a way that is safe and is beneficial to humanity.”

    In an interview at the launch of OpenAI, Altman explained the company as his way of trying to steer the path of AI technology. “I sleep better knowing I can have some influence now,” he said.

    If there’s one thing AI enthusiasts and critics can agree on right now, it may be that Altman has succeeded in having some influence over the rapidly evolving technology.

    Less than six months after the release of ChatGPT, it has become a household name, almost synonymous with AI itself. CEOs are using it to draft emails. Realtors are using it to write listings and draft legal documents. The tool has passed exams from law and business schools – and been used to help some students cheat. And OpenAI recently released a more powerful version of the technology underpinning ChatGPT.

    Tech giants like Google and Facebook are now racing to catch up. Similar generative AI technology is quickly finding its way into productivity and search tools used by billions of people.

    A future that once seemed very far off now feels right around the corner, whether society is ready for it or not. Altman himself has professed not to be sure about how it will turn out.

    O’Mara said she believes Altman fits into “the techno-optimist school of thought that has been dominant in the Valley for a very long time,” which she describes as “the idea that we can devise technology that can indeed make the world a better place.”

    While Altman’s cautious remarks about AI may sound at odds with that way of thinking, O’Mara argues it may be an “extension” of it. In essence, she said, it’s related to “the idea that technology is transformative and can be transformative in a positive way but also has so much capacity to do so much that it actually could be dangerous.”

    And if AI should somehow help bring about the end of society as we know it, Altman may be more prepared than most to adapt.

    “I prep for survival,” he said in a 2016 profile of him in the New Yorker, noting several possible disaster scenarios, including “A.I. that attacks us.”

    “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”


  • Who says romance is dead? Couples are using ChatGPT to write their wedding vows | CNN Business




    CNN
     — 

    When Elyse Nguyen was nearing her wedding date in February and still hadn’t started writing her vows, a friend suggested she try a new source of inspiration: ChatGPT.

    The AI chatbot, which was released publicly in late November, can generate compelling written responses to user prompts and offers the promise of helping people get over writer’s block, whether it be for an essay, an email, or an emotional speech.

    “At first we inputted the prompt as a joke and the output was pretty cheesy with personal references to me and my husband,” said Nguyen, a financial analyst at Qualcomm. “But the essence of what vows should incorporate was there – our promises to each other and structure.”

    She made edits, changed the prompts to add humor and details about her partner’s interests, and added some personal touches. Nguyen ultimately ended up using a good portion of ChatGPT’s suggestions and said her husband was on board with it.

    “It helped alleviate some stress because I had no prior experience with wedding vows nor did I know what should be included,” Nguyen said. “Plus, ChatGPT is a genius with alliteration, analogies and metaphors. Having something like, ‘I promise to be your partner in life with the enthusiasm of a golfer’s first hole in one’ in my back pocket was comical.”

    Nearly five months after ChatGPT went viral and ignited a new AI arms race in Silicon Valley, more couples are looking to it for help with wedding planning, including writing vows and speeches, drafting religious marriage contracts, and setting up websites for the special day.

    Ellen Le recently built part of her wedding website with a new Writer’s Block Assistant tool on Joy, an online wedding planning service that was one of the first third-party platforms to incorporate ChatGPT’s technology. (Last month, OpenAI, the company behind ChatGPT, opened up access to the chatbot, paving the way for it to be integrated into numerous apps and services.)
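    The access OpenAI opened up is its chat completions API, which integrations like Joy’s call behind the scenes. As a rough sketch of what such a request looks like, here is a minimal Python example that assembles a request body; the helper name, prompt wording and model choice are illustrative assumptions, not Joy’s actual code:

    ```python
    import json

    # Endpoint documented by OpenAI; an integration POSTs this JSON body to it
    # with an "Authorization: Bearer <api key>" header.
    API_URL = "https://api.openai.com/v1/chat/completions"

    def build_vow_request(details: str, tone: str = "heartfelt") -> dict:
        """Assemble a chat-completions request body (illustrative helper)."""
        return {
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": f"You write wedding vows in a {tone} tone."},
                {"role": "user",
                 "content": f"Draft short vows using these details: {details}"},
            ],
        }

    payload = build_vow_request("we met hiking in 2018 and both love golf")
    print(json.dumps(payload, indent=2))
    ```

    The service’s reply contains the generated text, which a tool like Writer’s Block Assistant can then surface for the user to edit.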

    Le, a product manager at a startup, said she used the feature to draft an “about us” page and write directions from San Francisco to her Napa Valley wedding. The Writer’s Block Assistant tool helps users write vows, best man and maid of honor speeches, thank you cards and wedding website “about us” pages. It also lets users highlight personal stories and select the style or tone before pulling it into a speech.

    “I started drafting my vows and when I typed in how we met, it produced this very delightful story,” Le said. “Some of it was inaccurate, making up certain details, but it gave me a helping hand and something to react to, rather than just spending 10 hours thinking about how to get started.”

    Le said her fiancé, who often uses ChatGPT for work, is considering using AI to help with his vows too.

    Joy co-founder and CEO Vishal Joshi, who studied artificial intelligence and electrical engineering at NIT Rourkela in India, said the company launched Writer’s Block Assistant in March after it conducted an internal study that found most of its users were somewhat overwhelmed with getting started on writing vows and speeches, and wished they had help. He said the company has already seen thousands of submissions since launching the tool.

    “Almost two decades ago, AI enthusiasts like myself and my research peers had only dreamt of mass market adoption we are seeing today, and we know this is just the true beginning,” Joshi said. “Just like smartphones, if applied well, the positive impact of AI on our lives can far outshine the negatives. We’re working on responsibly innovating using AI to advance the wedding and event industry as a whole.”

    Michael Grinn and Kate Gardiner used viral AI tool ChatGPT to write the Ketubah, a Jewish wedding contract, for their June wedding.

    ChatGPT has sparked concerns in recent months about its potential to perpetuate biases, spread misinformation and upend certain livelihoods. Now, as it finds its way into marriage ceremonies, it could raise more nuanced questions about whether people risk losing something by injecting technology into what is supposed to be a deeply personal and, for many, spiritual moment in life.

    Michael Grinn, an anesthesiologist with practices in Miami and New York, was experimenting with ChatGPT when he asked it to produce a traditional Ketubah – a Jewish marriage contract – for his upcoming June wedding.

    Grinn and his fiancée Kate Gardiner, the founder and CEO of a public relations firm, then requested it make some language changes around gender equality and intimacy. “At the end, we both looked at each other and were like, we can’t disagree with the result,” he said.

    Editing took about an hour, but it still shaved hours off what otherwise could have been a lengthy process, he said. Still, Grinn plans to write his own vows. “I want them to be less refined and something no one else helped me with.”

    He does, however, plan to use ChatGPT for inspiration for officiating his best man’s wedding. “It mostly comes down to time because I’ve been working so much,” he said, “and this is so efficient.”


  • Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman set to testify before Congress | CNN Business




    CNN
     — 

    OpenAI CEO Sam Altman is set to testify before a Senate Judiciary subcommittee on Tuesday after the viral success of ChatGPT, his company’s chatbot tool, renewed an arms race over artificial intelligence and sparked concerns from some lawmakers about the risks posed by the technology.

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”

    A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    Montgomery is expected to urge Congress to adopt a “precision regulation” approach for AI based on specific use cases, and to suggest that lawmakers push companies to test how their systems handle bias and other concerns – and disclose those results.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts.

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    – CNN’s Jennifer Korn contributed to this report.


  • Amazon is ‘investing heavily’ in the technology behind ChatGPT | CNN Business




    CNN
     — 

    Amazon wants investors to know it won’t be left behind in the latest Big Tech arms race over artificial intelligence.

    In a letter to shareholders Thursday, Amazon (AMZN) CEO Andy Jassy said the company is “investing heavily” in large language models (LLMs) and generative AI, the same technology that underpins ChatGPT and other similar AI chatbots.

    “We have been working on our own LLMs for a while now, believe it will transform and improve virtually every customer experience, and will continue to invest substantially in these models across all of our consumer, seller, brand, and creator experiences,” Jassy wrote in his letter to shareholders.

    The remarks, which were part of Jassy’s second annual letter to shareholders since taking over as CEO, hint at the pressure that many tech companies feel to explain how they can tap into the rapidly evolving marketplace for AI products. Since ChatGPT was released to the public in late November, Google (GOOG), Facebook (FB) and Microsoft (MSFT) have all talked up their growing focus on generative AI technology, which can create compelling essays, stories and visuals in response to user prompts.

    Amazon’s goal, according to Jassy, is to offer less costly machine learning chips so that “small and large companies can afford to train and run their LLMs in production.” Large language models are trained on vast troves of data in order to generate responses to user prompts.
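    To make “trained on vast troves of data” concrete, here is a deliberately tiny sketch of the underlying idea: learn from text which word tends to follow which, then generate by picking likely continuations. Real LLMs use billion-parameter neural networks rather than word counts, so this is only a toy analogy:

    ```python
    from collections import Counter, defaultdict

    def train_bigram(corpus: str) -> dict:
        """Count which word follows which — a toy stand-in for LLM training."""
        words = corpus.split()
        follows = defaultdict(Counter)
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
        return follows

    def next_word(model: dict, prompt_word: str) -> str:
        """Pick the continuation seen most often in training."""
        return model[prompt_word].most_common(1)[0][0]

    model = train_bigram("the cat sat on the mat and the cat slept")
    print(next_word(model, "the"))  # "cat" follows "the" twice, "mat" once
    ```

    Scaling the same predict-the-next-token objective from a ten-word sentence to a large slice of the internet is, loosely, what the billions of dollars of training Jassy describes pay for.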

    “Most companies want to use these large language models, but the really good ones take billions of dollars to train and many years, most companies don’t want to go through that,” Jassy said in an interview with CNBC on Thursday morning.

    “What they want to do is they want to work off of a foundational model that’s big and great already, and then have the ability to customize it for their own purposes,” Jassy told CNBC.

    With that in mind, Amazon on Thursday unveiled a new service called Bedrock. It essentially makes foundation models (large models that are pre-trained on vast amounts of data) from AI21 Labs, Anthropic, Stability AI and Amazon accessible to clients via an API, Amazon said in a blog post.
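    Amazon’s announcement did not detail Bedrock’s interface. As a hedged sketch of what “accessible via an API” could look like from a client’s side, here is how an application might assemble an invoke request, modeled loosely on the boto3 `bedrock-runtime` client AWS later shipped; the model ID and body fields are illustrative assumptions, since request formats vary by foundation model:

    ```python
    import json

    def build_invoke_params(model_id: str, prompt: str, max_tokens: int = 200) -> dict:
        """Assemble keyword arguments for a hypothetical invoke_model call."""
        return {
            "modelId": model_id,
            "contentType": "application/json",
            "accept": "application/json",
            # Body schemas differ per provider; this flat shape is illustrative.
            "body": json.dumps({"prompt": prompt, "max_tokens": max_tokens}),
        }

    params = build_invoke_params("ai21.j2-mid", "Summarize our quarterly results.")
    # A real call would resemble:
    #   boto3.client("bedrock-runtime").invoke_model(**params)
    print(params["modelId"])
    ```

    The appeal Jassy describes is that swapping providers mostly means changing the model ID and body shape, while hosting and scaling stay with AWS.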

    Jassy told CNBC he thinks Bedrock “will change the game for people.”

    In his letter to shareholders, Jassy also touted AWS’s CodeWhisperer, another AI-powered tool which he said “revolutionizes developer productivity by generating code suggestions in real time.”

    “I could write an entire letter on LLMs and Generative AI as I think they will be that transformative, but I’ll leave that for a future letter,” Jassy wrote. “Let’s just say that LLMs and Generative AI are going to be a big deal for customers, our shareholders, and Amazon.”

    In the letter, Jassy also reflected on leading Amazon through “one of the harder macroeconomic years in recent memory,” as the e-commerce giant cut some 27,000 jobs as part of a major bid to rein in costs in recent months.

    “There were an unusual number of simultaneous challenges this past year,” Jassy said in the letter, before outlining steps Amazon took to rethink certain free shipping options, abandon some of its physical store concepts and significantly reduce overall headcount.

    Amazon disclosed in a securities filing Thursday that Jassy’s pay package last year was valued at some $1.3 million, and that the CEO did not receive any new stock awards in 2022. (When Jassy took over as CEO in 2021, he was awarded a pay package mostly comprised of stock awards that valued his total compensation package at some $212 million.)

    Despite the challenges at Amazon, however, Jassy said in his letter that he finds himself “optimistic and energized by what lies ahead.” Jassy added: “I strongly believe that our best days are in front of us.”


  • Google hit with lawsuit alleging it stole data from millions of users to train its AI tools | CNN Business




    CNN
     — 

    Google was hit with a wide-ranging lawsuit on Tuesday alleging the tech giant scraped data from millions of users without their consent and violated copyright laws in order to train and develop its artificial intelligence products.

    The proposed class action suit against Google, its parent company Alphabet, and Google’s AI subsidiary DeepMind was filed in a federal court in California on Tuesday, and was brought by Clarkson Law Firm. The firm previously filed a similar suit against ChatGPT-maker OpenAI last month. (OpenAI did not respond to an earlier request for comment on that suit.)

    The complaint alleges that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans” and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken “virtually the entirety of our digital footprint,” including “creative and copywritten works” to build its AI products.

    Halimah DeLaine Prado, Google’s general counsel, called the claims in the suit “baseless” in a statement to CNN. “We’ve been clear for years that we use data from public sources — like information published to the open web and public datasets — to train the AI models behind services like Google Translate, responsibly and in line with our AI Principles,” DeLaine Prado said.

    “American law supports using public information to create new beneficial uses, and we look forward to refuting these baseless claims,” the statement added.

    Alphabet and DeepMind did not immediately respond to a request for comment.

    The complaint points to a recent update to Google’s privacy policy that explicitly states the company may use publicly accessible information to train its AI models and tools such as Bard.

    In response to an earlier Verge report on the update, the company said its policy “has long been transparent” about this practice and “this latest update simply clarifies that newer services like Bard are also included.”

    The lawsuit comes as a new crop of AI tools have gained tremendous attention in recent months for their ability to generate written work and images in response to user prompts. The large language models underpinning this new technology are able to do this by training on vast troves of online data.

    In the process, however, companies are also drawing mounting legal scrutiny over copyright issues from works swept up in these data sets, as well as their apparent use of personal and possibly sensitive data from everyday users, including data from children, according to the Google lawsuit.

    “Google needs to understand that ‘publicly available’ has never meant free to use for any purpose,” Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. “Our personal information and our data is our property, and it’s valuable, and nobody has the right to just take it and use it for any purpose.”

    The suit is seeking injunctive relief in the form of a temporary freeze on commercial access to and commercial development of Google’s generative AI tools like Bard. It is also seeking unspecified damages and payments as financial compensation to people whose data was allegedly misappropriated by Google. The firm says it has lined up eight plaintiffs, including a minor.

    Giordano contrasted the benefits and alleged harms of how Google typically indexes online data to support its core search engine with the new allegations of it scraping data to train AI tools.

    With its search engine, he said, Google can “serve up an attributed link to your work that can actually drive somebody to purchase it or engage with it.” Data scraping to train AI tools, however, is creating “an alternative version of the work that radically alters the incentives for anybody to need to purchase the work,” Giordano added.

    While some internet users may have grown accustomed to their digital data being collected and used for search results or targeted advertising, the same may not be true for AI training. “People could not have imagined their information would be used this way,” Giordano said.

    Ryan Clarkson, a partner at the law firm, said Google needs to “create an opportunity for folks to opt out” of having their data used for training AI while still maintaining their ability to use the internet for their everyday needs.
