ReportWire

Tag: generative ai

  • System of intelligence — generative AI at the app layer | TechCrunch

    Generative AI is a paradigm shift in technology, and it will spur a massive shift in enterprise spend over the next decade and beyond. Transformations of this magnitude can feel rapid on the surface, especially when they make a huge splash like generative AI has in recent months, but it’s a steep and steady climb to permeate the layers of the enterprise technology stack.

    The infrastructure layer captures the initial spend as companies assemble the building blocks for power and performance — the capital pouring into Nvidia and GPU aggregators today indicates this is well underway. As adoption (and dollars) moves up the stack, development focus will shift toward the new experiences and products that will reshape each subsequent layer.

    We’re just getting a glimpse of how this transformation will unfold at the application layer, and early signals suggest the disruption will be profound.

    Long before generative AI, enterprise applications began to deliver more consumer-like experiences by improving UIs and introducing interactive elements that would engage everyday users and accelerate workflow. This spurred a shift from “system of record” applications like Salesforce and Workday to “system of engagement” applications like Slack and Notion.

    Collaboration was a defining characteristic of this new breed of enterprise tools, with features like multiplayer mode, annotation functionality, version history, and metadata. These apps also leveraged consumer-native viral components to drive adoption and enable seamless sharing of content within and between organizations. The core record retained its intrinsic value within these systems of engagement, and served as a bedrock for the growing volume of information created at the engagement layer.

    As generative AI shapes the next generation of application products, we can expect even more sweeping evolution. The first players look a lot like ChatGPT integrators, building lightweight tools directly on top of generative models that deliver immediate but fleeting value. We have already seen a variety of generative AI products emerge that have explosive initial growth, but also extremely high churn due to limited workflow or lack of additional functionality. These applications typically produce a generative output that is a single-use type of content or media (i.e., not embedded into a user’s everyday workflow), and their value relies on off-the-shelf generative models that are widely available to others in the market.

    The second wave of generative AI applications, which is just beginning to take shape, will leverage generative models to integrate the structured data that lies within the system-of-record applications and the unstructured data that lies within the system-of-engagement applications.

    Developers of these products will have more potential to create enduring companies than first-wave entrants, but only if they can find a way to “own” the layer above the system-of-engagement and system-of-record applications — no mean feat when incumbents like Salesforce are already scrambling to implement generative AI to create a protective moat around their underlying layers.

    This leads to the third wave, where entrants create their own, defensible “system of intelligence” layer. Startups will first introduce novel product offerings that deliver value by harnessing existing system-of-record and system-of-engagement capabilities. Once a strong use case is established, they will then build out workflows that can ultimately stand alone as a true enterprise application.

    This does not necessarily mean replacing the existing interactive or database layers; instead, these entrants will create new structured and unstructured data, and generative models will use those new datasets to enhance the product experience — essentially creating a new class of “super datasets.”

    A core focus for these products should be integrations with the ability to ingest, clean, and label the data. For example, to build a new customer support experience, it’s not enough to simply ingest the knowledge base of existing customer support tickets. A truly compelling product should also incorporate bug tracking, product documentation, internal team communications, and much more. It will know how to pull out the relevant information, tag it, and weigh it in order to create novel insights. It will have a feedback loop that allows it to get better with training and usage, not only within an organization but also across multiple organizations.
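
    To make the ingest-tag-weight-feedback idea above concrete, here is a minimal, hypothetical Python sketch. The source names, the tagging rule, and the weight values are invented for illustration; they do not describe any particular vendor's implementation.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Record:
        """One cleaned, tagged item of ingested data with a relevance weight."""
        source: str
        text: str
        tags: list[str] = field(default_factory=list)
        weight: float = 1.0

    # Hypothetical per-source base weights; a real system would learn these.
    SOURCE_WEIGHTS = {"support_tickets": 1.0, "bug_tracker": 0.8, "docs": 0.6}

    def ingest(source: str, raw_items: list[str]) -> list[Record]:
        """Clean, tag, and weight raw text pulled from one integration."""
        records = []
        for raw in raw_items:
            text = raw.strip()
            if not text:  # "cleaning": drop empty items
                continue
            tags = ["billing"] if "invoice" in text.lower() else ["general"]
            records.append(Record(source, text, tags, SOURCE_WEIGHTS.get(source, 0.5)))
        return records

    def feedback(record: Record, helpful: bool) -> None:
        """Usage feedback nudges a record's weight so retrieval improves over time."""
        record.weight *= 1.1 if helpful else 0.9

    corpus = ingest("support_tickets", ["Invoice 123 was charged twice", "  "])
    feedback(corpus[0], helpful=True)
    print(corpus[0].tags, round(corpus[0].weight, 2))  # ['billing'] 1.1
    ```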

    When a product accomplishes all of this, switching to a competitor becomes very difficult — the weighted, cleaned data is highly valuable and it would take too long to achieve the same quality with a new product.

    At this point, the intelligence lies not only in the product or model, but also in the associated hierarchy, labels, and weights. Insights will take minutes instead of days to deliver, with a focus on actions and decisions rather than just synthesis of information. These will be the true system-of-intelligence products that leverage generative AI, marked by these defining traits:

    • Have deep integration with company workflows and ability to capture newly created structured and unstructured data.
    • Be sophisticated around the characterization and digestion of data through hierarchy, labels, and weights.
    • Create data feedback loops within and between customers to enhance the product experience.

    One key question I love to ask customers is, “Where does a new product stack rank with the other tools you use?” Normally the system-of-record product is the most important, followed by the system-of-engagement product, with additional tooling at the bottom of the list.

    The least important product will be the first to get cut when the budget is tight, so emerging system-of-intelligence products must provide enduring value in order to survive. They’ll also face steep competition from incumbents who will build generative AI–enabled intelligence capabilities into their products. It will be up to the new wave of system-of-intelligence products to couple their offerings with high-value workflows, collaboration, and the introduction of super datasets to endure.

    Transformation of the AI space has accelerated over the last 12 months, and the industry is learning fast. Open source models are proliferating and closed proprietary models are also evolving at an atypically rapid pace. Now it’s up to founders to build enduring system-of-intelligence products atop this rapidly shifting landscape — and when it’s done right, the impact on enterprises will be extraordinary.

    [ad_2]

    Carrie Andrews

  • Europe reaches a deal on the world's first comprehensive AI rules

    LONDON — European Union negotiators clinched a deal Friday on the world’s first comprehensive artificial intelligence rules, paving the way for legal oversight of AI technology that has promised to transform everyday life and spurred warnings of existential dangers to humanity.

    Negotiators from the European Parliament and the bloc’s 27 member countries overcame big differences on controversial points including generative AI and police use of face recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.

    “Deal!” tweeted European Commissioner Thierry Breton just before midnight. “The EU becomes the very first continent to set clear rules for the use of AI.”

    The result came after marathon closed-door talks this week, with the initial session lasting 22 hours before a second round kicked off Friday morning.

    Officials were under the gun to secure a political victory for the flagship legislation. Civil society groups, however, gave it a cool reception as they wait for technical details that will need to be ironed out in the coming weeks. They said the deal didn’t go far enough in protecting people from harm caused by AI systems.

    “Today’s political deal marks the beginning of important and necessary technical work on crucial details of the AI Act, which are still missing,” said Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, a tech industry lobby group.

    The EU took an early lead in the global race to draw up AI guardrails when it unveiled the first draft of its rulebook in 2021. The recent boom in generative AI, however, sent European officials scrambling to update a proposal poised to serve as a blueprint for the world.

    The European Parliament will still need to vote on the act early next year, but with the deal done that’s a formality, Brando Benifei, an Italian lawmaker co-leading the body’s negotiating efforts, told The Associated Press late Friday.

    “It’s very very good,” he said by text message after being asked if it included everything he wanted. “Obviously we had to accept some compromises but overall very good.” The eventual law wouldn’t fully take effect until 2025 at the earliest, and threatens stiff financial penalties for violations of up to 35 million euros ($38 million) or 7% of a company’s global turnover.

    Generative AI systems like OpenAI’s ChatGPT have exploded into the world’s consciousness, dazzling users with the ability to produce human-like text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection and even human life itself.

    Now, the U.S., U.K., China and global coalitions like the Group of 7 major democracies have jumped in with their own proposals to regulate AI, though they’re still catching up to Europe.

    Strong and comprehensive rules from the EU “can set a powerful example for many governments considering regulation,” said Anu Bradford, a Columbia Law School professor who’s an expert on EU law and digital regulation. Other countries “may not copy every provision but will likely emulate many aspects of it.”

    AI companies subject to the EU’s rules will also likely extend some of those obligations outside the continent, she said. “After all, it is not efficient to re-train separate models for different markets,” she said.

    The AI Act was originally designed to mitigate the dangers from specific AI functions based on their level of risk, from low to unacceptable. But lawmakers pushed to expand it to foundation models, the advanced systems that underpin general purpose AI services like ChatGPT and Google’s Bard chatbot.

    Foundation models looked set to be one of the biggest sticking points for Europe. However, negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies competing with big U.S. rivals, including OpenAI’s backer Microsoft.

    Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet. They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

    The companies building foundation models will have to draw up technical documentation, comply with EU copyright law and detail the content used for training. The most advanced foundation models that pose “systemic risks” will face extra scrutiny, including assessing and mitigating those risks, reporting serious incidents, putting cybersecurity measures in place and reporting their energy efficiency.

    Researchers have warned that powerful foundation models, built by a handful of big tech companies, could be used to supercharge online disinformation and manipulation, cyberattacks or creation of bioweapons.

    Rights groups also caution that the lack of transparency about data used to train the models poses risks to daily life because they act as basic structures for software developers building AI-powered services.

    What became the thorniest topic was AI-powered face recognition surveillance systems, and negotiators found a compromise after intensive bargaining.

    European lawmakers wanted a full ban on public use of face scanning and other “remote biometric identification” systems because of privacy concerns. But governments of member countries succeeded in negotiating exemptions so law enforcement could use them to tackle serious crimes like child sexual exploitation or terrorist attacks.

    Rights groups said they were concerned about the exemptions and other big loopholes in the AI Act, including a lack of protection for AI systems used in migration and border control, and the option for developers to opt out of having their systems classified as high risk.

    “Whatever the victories may have been in these final negotiations, the fact remains that huge flaws will remain in this final text,” said Daniel Leufer, a senior policy analyst at the digital rights group Access Now.

    ___

    Tech reporter Matt O’Brien in Providence, Rhode Island, contributed to this report.

  • Backed by Cresta founders, Trove's AI wants to make surveys fun again | TechCrunch

    Surveys have become an integral part of many aspects of our lives, but most of them are tedious, leading to ineffective responses and actions. Dinghan Shen and Yuan Xue, two software engineers working in Silicon Valley, recognized an opportunity to leverage the breakthroughs brought by large language models to make surveys more empathetic and engaging.

    Around six months ago, Shen and Xue, who had been friends since high school, started Trove AI, a SaaS platform that lets users create conversational surveys powered by GPT-4 and its own fine-tuned models. The idea has received backing already. Zayd Enam and Tim Shi, co-founders of Shen’s former employer Cresta, an a16z-backed unicorn empowering contact center agents with AI, invested an undisclosed amount in the startup’s pre-seed funding.

    Launched six weeks ago, Trove’s first version has amassed over 1,000 users, mostly small and medium-sized businesses from around the world. “Dozens” of them have sent surveys at least twice since then. Still free to use, the platform has attracted a wide range of users, including a London-based spa, a K-12 school in Boston, and a travel agency focused on Latin America.

    Applying conversational AI to surveys appears to be a low-hanging fruit in this era of ChatGPT frenzy. Enterprise-focused survey giant Qualtrics has adopted AI across its line of customer and employee feedback products. SurveyMonkey, the go-to survey provider for SMBs, is using AI to automate the creation process.

    To differentiate, Trove aims to ultimately become a “customer and employee experience management platform” for companies of all sizes, Shen said. The product is essentially experience management around customers, employees, products, and more. Following the recent management saga of OpenAI that briefly disrupted its chatbot service, applications that build on top of ChatGPT, or “wrapper products,” are rethinking their heavy dependence on third-party APIs.

    “We are 80% SaaS and 20% AI,” Shen told TechCrunch in an interview. As such, he reckoned Trove offers ample value in addition to its features powered by OpenAI. “We aim to do everything from survey creation, response, analytics, ticket creation to CRM integration… It’s an AI-generated feedback loop.”

    CRM integration, specifically, would allow Trove to create highly customized surveys upon which the system can create a ticket automatically and send a personalized follow-up email to thank the customer for giving feedback.
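
    As a rough sketch of that survey-to-ticket-to-email loop, consider the hypothetical Python below. Every function and field name here is invented for illustration and is not Trove's actual API.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SurveyResponse:
        customer_email: str
        rating: int  # 1 (worst) to 5 (best)
        comment: str

    def create_crm_ticket(resp: SurveyResponse) -> None:
        # Stand-in for a real CRM integration that opens a support ticket.
        print(f"[CRM] ticket opened for {resp.customer_email}: {resp.comment!r}")

    def send_followup_email(resp: SurveyResponse) -> None:
        # Stand-in for a personalized thank-you email.
        print(f"[Email] thanks sent to {resp.customer_email}")

    def handle_response(resp: SurveyResponse) -> None:
        """Low ratings open a ticket automatically; every respondent gets a follow-up."""
        if resp.rating <= 2:
            create_crm_ticket(resp)
        send_followup_email(resp)

    handle_response(SurveyResponse("ana@example.com", 2, "Checkout kept failing"))
    ```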

    “Fundamentally, we’re rethinking the experience management workflows from scratch in the context of the powerful large language model capabilities today,” said the founder.

    Rita Liao

  • Google ups the stakes in AI race with Gemini, a technology trained to behave more like humans

    Google took its next leap in artificial intelligence Wednesday with the launch of project Gemini, an AI model trained to behave in human-like ways that’s likely to intensify the debate about the technology’s potential promise and perils.

    The rollout will unfold in phases, with less sophisticated versions of Gemini called “Nano” and “Pro” being immediately incorporated into Google’s AI-powered chatbot Bard and its Pixel 8 Pro smartphone.

    With Gemini providing a helping hand, Google promises Bard will become more intuitive and better at tasks that involve planning. On the Pixel 8 Pro, Gemini will be able to quickly summarize recordings made on the device and provide automatic replies on messaging services, starting with WhatsApp, according to Google.

    Gemini’s biggest advances won’t come until early next year when its Ultra model will be used to launch “Bard Advanced,” a juiced-up version of the chatbot that initially will only be offered to a test audience.

    At first, the AI will work only in English worldwide, although Google executives assured reporters during a briefing that the technology will have no problem eventually diversifying into other languages.

    Based on a demonstration of Gemini for a group of reporters, Google’s “Bard Advanced” might be capable of unprecedented AI multitasking by simultaneously recognizing and understanding presentations involving text, photos and video.

    Gemini will also eventually be infused into Google’s dominant search engine, although the timing of that transition hasn’t been spelled out yet.

    “This is a significant milestone in the development of AI, and the start of a new era for us at Google,” declared Demis Hassabis, CEO of Google DeepMind, the AI division behind Gemini. Google prevailed over other bidders, including Facebook parent Meta, to acquire London-based DeepMind nearly a decade ago, and has since melded it with its “Brain” division to focus on Gemini’s development.

    The technology’s problem-solving skills are being touted by Google as being especially adept in math and physics, fueling hopes among AI optimists that it may lead to scientific breakthroughs that improve life for humans.

    But an opposing side of the AI debate worries about the technology eventually eclipsing human intelligence, resulting in the loss of millions of jobs and perhaps even more destructive behavior, such as amplifying misinformation or triggering the deployment of nuclear weapons.

    “We’re approaching this work boldly and responsibly,” Google CEO Sundar Pichai wrote in a blog post. “That means being ambitious in our research and pursuing the capabilities that will bring enormous benefits to people and society, while building in safeguards and working collaboratively with governments and experts to address risks as AI becomes more capable.”

    Gemini’s arrival is likely to up the ante in an AI competition that has been escalating for the past year, with San Francisco startup OpenAI and long-time industry rival Microsoft.

    Backed by Microsoft’s financial muscle and computing power, OpenAI was already deep into developing its most advanced AI model, GPT-4, when it released the free ChatGPT tool late last year. That AI-fueled chatbot rocketed to global fame, bringing buzz to the commercial promise of generative AI and pressuring Google to push out Bard in response.

    Just as Bard was arriving on the scene, OpenAI released GPT-4 in March and has since been building in new capabilities aimed at consumers and business customers, including a feature unveiled in November that enables the chatbot to analyze images. It’s been competing for business against other rival AI startups such as Anthropic and even its partner, Microsoft, which has exclusive rights to OpenAI’s technology in exchange for the billions of dollars that it has poured into the startup.

    The alliance so far has been a boon for Microsoft, which has seen its market value climb by more than 50% so far this year, primarily because of investors’ belief that AI will turn into a gold mine for the tech industry. Google’s corporate parent, Alphabet, also has been riding the same wave with its market value rising more than $500 billion, or about 45%, so far this year.

    Microsoft’s deepening involvement in OpenAI during the past year, coupled with OpenAI’s more aggressive attempts to commercialize its products, has raised concerns that the non-profit has strayed from its original mission to protect humanity as the technology progresses.

    Those worries were magnified last month when OpenAI’s board abruptly fired CEO Sam Altman in a dispute revolving around undisclosed issues of trust. After backlash that threatened to destroy the company and result in a mass exodus of AI engineering talent to Microsoft, OpenAI brought Altman back as CEO and reshuffled its board.

    With Gemini coming out, OpenAI may find itself trying to prove its technology remains smarter than Google’s.

    In a virtual press conference, Google executives dodged questions about how Gemini compared to GPT-4 and declined to share specific technical details, such as its parameter count — one measure of a model’s complexity.

    “I am in awe of what it’s capable of,” Google DeepMind vice president of product Eli Collins said of Gemini.

  • Europe was set to lead the world on AI regulation. But can leaders reach a deal?

    LONDON — The generative AI boom has sent governments worldwide scrambling to regulate the emerging technology, but it also has raised the risk of upending a European Union push to approve the world’s first comprehensive artificial intelligence rules.

    The 27-nation bloc’s Artificial Intelligence Act has been hailed as a pioneering rulebook. But with time running out, it’s uncertain if the EU’s three branches of government can thrash out a deal Wednesday in what officials hope is a final round of talks.

    Europe’s yearslong efforts to draw up AI guardrails have been bogged down by the recent emergence of generative AI systems like OpenAI’s ChatGPT, which have dazzled the world with their ability to produce human-like work but raised fears about the risks they pose.

    Those concerns have driven the U.S., U.K., China and global coalitions like the Group of 7 major democracies into the race to regulate the rapidly developing technology, though they’re still catching up to Europe.

    Besides regulating generative AI, EU negotiators need to resolve a long list of other thorny issues, such as limits on AI-powered facial recognition and other surveillance systems that have stirred privacy concerns.

    Chances of clinching a political agreement between EU lawmakers, representatives from member states and executive commissioners “are pretty high partly because all the negotiators want a political win” on a flagship legislative effort, said Kris Shrishak, a senior fellow specializing in AI governance at the Irish Council for Civil Liberties.

    “But the issues on the table are significant and critical, so we can’t rule out the possibility of not finding a deal,” he said.

    Some 85% of the technical wording in the bill already has been agreed on, Carme Artigas, AI and digitalization minister for Spain, which holds the rotating EU presidency, said at a press briefing Tuesday in Brussels.

    If a deal isn’t reached in the latest round of talks, starting Wednesday afternoon and expected to run late into the night, negotiators will be forced to pick it up next year. That raises the odds the legislation could get delayed until after EU-wide elections in June — or go in a different direction as new leaders take office.

    One of the major sticking points is foundation models, the advanced systems that underpin general purpose AI services like OpenAI’s ChatGPT and Google’s Bard chatbot.

    Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet. They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

    The AI Act was intended as product safety legislation, like similar EU regulations for cosmetics, cars and toys. It would grade AI uses according to four levels of risk — from minimal or no risk posed by video games and spam filters to unacceptable risk from social scoring systems that judge people based on their behavior.

    The new wave of general purpose AI systems released since the legislation’s first draft in 2021 spurred European lawmakers to beef up the proposal to cover foundation models.

    Researchers have warned that powerful foundation models, built by a handful of big tech companies, could be used to supercharge online disinformation and manipulation, cyberattacks or creation of bioweapons. They act as basic structures for software developers building AI-powered services so that “if these models are rotten, whatever is built on top will also be rotten — and deployers won’t be able to fix it,” said Avaaz, a nonprofit advocacy group.

    France, Germany and Italy have resisted the update to the legislation and are calling instead for self-regulation — a change of heart seen as a bid to help homegrown generative AI players, such as French startup Mistral AI and Germany’s Aleph Alpha, compete with big U.S. tech companies like OpenAI.

    Brando Benifei, an Italian member of the European Parliament who is co-leading the body’s negotiating efforts, was optimistic about resolving differences with member states.

    There’s been “some movement” on foundation models, though there are “more issues on finding an agreement” on facial recognition systems, he said.

    ___

    AP Technology Writer Matt O’Brien contributed from Providence, Rhode Island.

  • 'Mega-deals' could be inflating overall AI funding figures | TechCrunch

    It’s safe to say that VCs struck while the iron was hot this year where it concerned generative AI.

    While venture capital investments overall fell compared to last year thanks to macroeconomic challenges and other related factors, startups in the generative AI space — and AI more broadly — did quite well.

    Funding for AI-related startups surpassed $68.7 billion in 2023, according to PitchBook, with generative AI vendors like OpenAI, Stability AI and Anthropic accounting for a substantial portion of that figure. And it appears that the sector will likely close the year with substantially higher investments than the past couple of years.

    But could the top-level numbers be misleading?

    A report on AI investment in Q3 by PitchBook, released this morning, found that “mega-deals” (i.e., multi-hundred-million-dollar investments from big-name backers) vastly inflated deal totals this year.

    For example, just a few months ago, Amazon pledged to invest up to $4 billion in Anthropic, the company developing the AI-powered chatbot Claude. OpenAI secured a $10 billion investment from Microsoft (albeit not all at once and partly in the form of cloud compute credits). Inflection AI, a firm creating what it describes as more “personal” AI assistants, raised $1.3 billion in a funding round led by Microsoft. The list goes on.

    In Q3, VC funding inclusive of mega-deals totaled around $22.1 billion. But after subtracting the tech-giant-led tranches secured by generative AI startups, the total is closer to $15.1 billion for the sector.

    Kyle Wiggers

  • Free ChatGPT may incorrectly answer drug questions, study says

    The free version of ChatGPT may provide inaccurate or incomplete responses — or no answer at all — to questions related to medications, which could potentially endanger patients who use OpenAI’s viral chatbot, a new study released Tuesday suggests.

    Pharmacists at Long Island University who posed 39 questions to the free ChatGPT in May deemed that only 10 of the chatbot’s responses were “satisfactory” based on criteria they established. ChatGPT’s responses to the 29 other drug-related questions did not directly address the question asked, or were inaccurate, incomplete or both, the study said. 

    The study indicates that patients and health-care professionals should be cautious about relying on ChatGPT for drug information and verify any of the responses from the chatbot with trusted sources, according to lead author Sara Grossman, an associate professor of pharmacy practice at LIU. 

    For patients, that can be their doctor or a government-based medication information website such as the National Institutes of Health’s MedlinePlus, she said.

    An OpenAI spokesperson said the company guides ChatGPT to inform users that they “should not rely on its responses as a substitute for professional medical advice or traditional care.”

    The spokesperson also shared a section of OpenAI’s usage policy, which states that the company’s “models are not fine-tuned to provide medical information.” People should never use ChatGPT to provide diagnostic or treatment services for serious medical conditions, the usage policy said.

    ChatGPT was widely seen as the fastest-growing consumer internet app of all time following its launch roughly a year ago, which ushered in a breakout year for artificial intelligence. But along the way, the chatbot has also raised concerns about issues including fraud, intellectual property, discrimination and misinformation. 

    Several studies have highlighted similar instances of erroneous responses from ChatGPT, and the Federal Trade Commission in July opened an investigation into the chatbot’s accuracy and consumer protections. 

    In October, ChatGPT drew around 1.7 billion visits worldwide, according to one analysis. There is no data on how many users ask medical questions of the chatbot.

    Notably, the free version of ChatGPT is limited to using data sets through September 2021 — meaning it could lack significant information in the rapidly changing medical landscape. It’s unclear how accurately the paid versions of ChatGPT, which began to use real-time internet browsing earlier this year, can now answer medication-related questions.  

    Grossman acknowledged there’s a chance that a paid version of ChatGPT would have produced better study results. But she said that the research focused on the free version of the chatbot to replicate what more of the general population uses and can access. 

    She added that the study provided only “one snapshot” of the chatbot’s performance from earlier this year. It’s possible that the free version of ChatGPT has improved and may produce better results if the researchers conducted a similar study now, she added.

    Grossman noted that the research, which was presented at the American Society of Health-System Pharmacists’ annual meeting on Tuesday, did not require any funding. ASHP represents pharmacists across the U.S. in a variety of health-care settings.

    ChatGPT study results

    The study used real questions posed to Long Island University’s College of Pharmacy drug information service from January 2022 to April of this year. 

    In May, pharmacists researched and answered 45 questions, which were then reviewed by a second researcher and used as the standard for accuracy against ChatGPT. Researchers excluded six questions because there was no literature available to provide a data-driven response. 

    ChatGPT did not directly address 11 questions, according to the study. The chatbot also gave inaccurate responses to 10 questions, and wrong or incomplete answers to another 12. 

    For each question, researchers asked ChatGPT to provide references in its response so that the information provided could be verified. However, the chatbot provided references in only eight responses, and each included sources that don’t exist.

    One question asked ChatGPT about whether a drug interaction — or when one medication interferes with the effect of another when taken together — exists between Pfizer‘s Covid antiviral pill Paxlovid and the blood-pressure-lowering medication verapamil.

    ChatGPT indicated that no interactions had been reported for that combination of drugs. In reality, those medications have the potential to excessively lower blood pressure when taken together.  

    “Without knowledge of this interaction, a patient may suffer from an unwanted and preventable side effect,” Grossman said. 

    Grossman noted that U.S. regulators first authorized Paxlovid in December 2021. That’s a few months before the September 2021 data cutoff for the free version of ChatGPT, which means the chatbot has access to limited information on the drug. 

    Still, Grossman called that a concern. Many Paxlovid users may not know the data is out of date, which leaves them vulnerable to receiving inaccurate information from ChatGPT. 

    Another question asked ChatGPT how to convert doses between two different forms of the drug baclofen, which can treat muscle spasms. The first form was intrathecal, or when medication is injected directly into the spine, and the second form was oral. 

    Grossman said her team found that there is no established conversion between the two forms of the drug and it differed in the various published cases they examined. She said it is “not a simple question.” 

    But ChatGPT provided only one method for the dose conversion in response, which was not supported by evidence, along with an example of how to do that conversion. Grossman said the example had a serious error: ChatGPT incorrectly displayed the intrathecal dose in milligrams instead of micrograms.

    Any health-care professional who follows that example to determine an appropriate dose conversion “would end up with a dose that’s 1,000 times less than it should be,” Grossman said. 

    She added that patients who receive a far smaller dose of the medicine than they should be getting could experience a withdrawal effect, which can involve hallucinations and seizures.
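
    To see why a milligram/microgram mix-up is so dangerous, here is a tiny Python illustration. The dose value is invented for the example, and none of this is medical guidance.

    ```python
    MCG_PER_MG = 1000  # 1 milligram = 1,000 micrograms

    def mcg_to_mg(dose_mcg: float) -> float:
        """Convert a dose from micrograms to milligrams."""
        return dose_mcg / MCG_PER_MG

    dose_mcg = 300.0                  # hypothetical dose, in micrograms
    correct_mg = mcg_to_mg(dose_mcg)  # 0.3 mg
    mislabeled = dose_mcg             # the same number presented as "mg": 300 mg
    print(mislabeled / correct_mg)    # 1000.0, a thousand-fold dosing error
    ```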

  • Wikipedia, wrapped. Here are 2023's most-viewed articles on the internet's encyclopedia

    NEW YORK — Remember what you searched for in 2023? Well, Wikipedia has the receipts.

    English Wikipedia raked in more than 84 billion views this year, according to numbers released Tuesday by the Wikimedia Foundation, the non-profit behind the free, publicly edited online encyclopedia. And the most popular article was about ChatGPT (yes, the AI chatbot that’s seemingly everywhere today).

    Since its launch just over a year ago, OpenAI’s ChatGPT has skyrocketed into the public’s consciousness as the technology makes its way into schools, health care, law and even religious sermons. The chatbot has also contributed to growing debates about the promise and potential dangers of generative AI, much of which is documented on its Wikipedia page.

    The second most-read article on Wikipedia in 2023 was the annual list of deaths, which sees high traffic year after year — taking the #4 and #1 spots in 2022 and 2021, respectively. Individual entries for notable figures who passed away also garnered significant interest this year, including pages for Matthew Perry and Lisa Marie Presley.

    Meanwhile, the highly anticipated 2023 Cricket World Cup took third place — alongside three other cricket-related entries in Wikipedia’s top 25 articles this year, including the Indian Premier League at #4, marking the first time cricket content has made the list since the Wikimedia Foundation started tracking in 2015.

    “Barbenheimer,” Taylor Swift and more also appeared to sway our 2023 internet-reading habits. Here are this year’s top 25 articles on English Wikipedia.

    1. ChatGPT: 49,490,406 pageviews

    2. Deaths in 2023: 42,666,860 pageviews

    3. 2023 Cricket World Cup: 38,171,653 pageviews

    4. Indian Premier League: 32,012,810 pageviews

    5. Oppenheimer (film): 28,348,248 pageviews

    6. Cricket World Cup: 25,961,417 pageviews

    7. J. Robert Oppenheimer: 25,672,469 pageviews

    8. Jawan (film): 21,791,126 pageviews

    9. 2023 Indian Premier League: 20,694,974 pageviews

    10. Pathaan (film): 19,932,509 pageviews

    11. The Last of Us (TV series): 19,791,789 pageviews

    12. Taylor Swift: 19,418,385 pageviews

    13. Barbie (film): 18,051,077 pageviews

    14. Cristiano Ronaldo: 17,492,537 pageviews

    15. Lionel Messi: 16,623,630 pageviews

    16. Premier League: 16,604,669 pageviews

    17. Matthew Perry: 16,454,666 pageviews

    18. United States: 16,240,461 pageviews

    19. Elon Musk: 14,370,395 pageviews

    20. Avatar: The Way of Water: 14,303,116 pageviews

    21. India: 13,850,178 pageviews

    22. Lisa Marie Presley: 13,764,007 pageviews

    23. Guardians of the Galaxy Vol. 3: 13,392,917 pageviews

    24. Russian invasion of Ukraine: 12,798,866 pageviews

    25. Andrew Tate: 12,728,616 pageviews

    According to the Wikimedia Foundation, this top 25 list was created using English Wikipedia data as of Nov. 28. Numbers for the full year are set to be updated by the nonprofit on Jan. 3, 2024.

    The top countries that accessed English Wikipedia overall to date in 2023 are the United States (33.2 billion) and the United Kingdom (9 billion) — followed by India (8.48 billion), Canada (3.95 billion) and Australia (2.56 billion), according to Wikimedia Foundation data shared with The Associated Press.

  • AI's future could be 'open-source' or closed. Tech giants are divided as they lobby regulators

    Tech leaders have been vocal proponents of the need to regulate artificial intelligence, but they’re also lobbying hard to make sure the new rules work in their favor.

    That’s not to say they all want the same thing.

    Facebook parent Meta and IBM on Tuesday launched a new group called the AI Alliance that’s advocating for an “open science” approach to AI development that puts them at odds with rivals Google, Microsoft and ChatGPT-maker OpenAI.

    These two diverging camps — the open and the closed — disagree about whether to build AI in a way that makes the underlying technology widely accessible. Safety is at the heart of the debate, but so is who gets to profit from AI’s advances.

    Open advocates favor an approach that is “not proprietary and closed,” said Darío Gil, a senior vice president at IBM who directs its research division. “So it’s not like a thing that is locked in a barrel and no one knows what they are.”

    The term “open-source” comes from a decades-old practice of building software in which the code is free or widely accessible for anyone to examine, modify and build upon.

    Open-source AI involves more than just code, and computer scientists differ on how to define it, depending on which components of the technology are publicly available and whether there are restrictions limiting its use. Some use open science to describe the broader philosophy.

    The AI Alliance — led by IBM and Meta and including Dell, Sony, chipmakers AMD and Intel and several universities and AI startups — is “coming together to articulate, simply put, that the future of AI is going to be built fundamentally on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies,” Gil said in an interview with The Associated Press ahead of its unveiling.

    Part of the confusion around open-source AI is that despite its name, OpenAI — the company behind ChatGPT and the image-generator DALL-E — builds AI systems that are decidedly closed.

    “To state the obvious, there are near-term and commercial incentives against open source,” said Ilya Sutskever, OpenAI’s chief scientist and co-founder, in a video interview hosted by Stanford University in April. But there’s also a longer-term worry involving the potential for an AI system with “mind-bendingly powerful” capabilities that would be too dangerous to make publicly accessible, he said.

    To make his case for open-source dangers, Sutskever posited an AI system that had learned how to start its own biological laboratory.

    Even current AI models pose risks and could be used, for instance, to ramp up disinformation campaigns to disrupt democratic elections, said University of California, Berkeley scholar David Evan Harris.

    “Open source is really great in so many dimensions of technology,” but AI is different, Harris said.

    “Anyone who watched the movie ‘Oppenheimer’ knows this, that when big scientific discoveries are being made, there are lots of reasons to think twice about how broadly to share the details of all of that information in ways that could get into the wrong hands,” he said.

    The Center for Humane Technology, a longtime critic of Meta’s social media practices, is among the groups drawing attention to the risks of open-source or leaked AI models.

    “As long as there are no guardrails in place right now, it’s just completely irresponsible to be deploying these models to the public,” said the group’s Camille Carlton.

    An increasingly public debate has emerged over the benefits or dangers of adopting an open-source approach to AI development.

    Meta’s chief AI scientist, Yann LeCun, this fall took aim on social media at OpenAI, Google and startup Anthropic for what he described as “massive corporate lobbying” to write the rules in a way that benefits their high-performing AI models and could concentrate their power over the technology’s development. The three companies, along with OpenAI’s key partner Microsoft, have formed their own industry group called the Frontier Model Forum.

    LeCun said on X, formerly Twitter, that he worried that fearmongering from fellow scientists about AI “doomsday scenarios” was giving ammunition to those who want to ban open-source research and development.

    “In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we need the platforms to be open source and freely available so that everyone can contribute to them,” LeCun wrote. “Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture.”

    For IBM, an early supporter of the open-source Linux operating system in the 1990s, the dispute feeds into a much longer competition that precedes the AI boom.

    “It’s sort of a classic regulatory capture approach of trying to raise fears about open-source innovation,” said Chris Padilla, who leads IBM’s global government affairs team. “I mean, this has been the Microsoft model for decades, right? They always opposed open-source programs that could compete with Windows or Office. They’re taking a similar approach here.”

    It was easy to miss the “open-source” debate in the discussion around U.S. President Joe Biden’s sweeping executive order on AI.

    That’s because Biden’s order described open models with the highly technical name of “dual-use foundation models with widely available weights” and said they needed further study. Weights are numerical parameters that influence how an AI model performs.
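
    For readers unfamiliar with the term, the toy Python example below shows what "weights" are. The numbers are arbitrary and bear no relation to any real foundation model.

    ```python
    import numpy as np

    # A model's "weights" are just numbers. Anyone who has this array (plus the
    # model architecture) can reproduce the model's behavior exactly, which is
    # why publicly posted weights sit at the center of the open-vs-closed debate.
    weights = np.array([[0.2, -1.3],
                        [0.7,  0.5]])  # arbitrary toy values
    bias = np.array([0.1, -0.2])

    def tiny_model(x: np.ndarray) -> np.ndarray:
        """A single linear layer: the output is fully determined by the weights."""
        return x @ weights + bias

    print(tiny_model(np.array([1.0, 2.0])))  # [ 1.7 -0.5]
    ```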

    “When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model,” Biden’s order said. He gave U.S. Commerce Secretary Gina Raimondo until July to talk to experts and come back with recommendations on how to manage the potential benefits and risks.

    The European Union has less time to figure it out. In negotiations coming to a head Wednesday, officials working to finalize passage of world-leading AI regulation are still debating a number of provisions, including one that could exempt certain “free and open-source AI components” from rules affecting commercial models.

  • Why Sam Altman is a no-brainer for Time’s ‘Person of the Year’

    Nothing has changed our lives more this year than the advances made in artificial intelligence — and they have the potential to alter our lives in even more dramatic ways down the road.

    So it’s a no-brainer that Sam Altman, co-founder and recently returned chief executive of the once-little-known OpenAI, should be named “Person of the Year” by Time Magazine when the selection is announced Wednesday.

    Altman has already cracked Time’s shortlist, joining candidates from varied backgrounds, including world leaders like Xi Jinping and entertainment phenomenon Taylor Swift. The selection ultimately comes down to an “individual or group who most shaped the previous 12 months, for better or for worse.”

    But Time has often given “agents of change” its yearly honor — just look at 2021 winner Elon Musk — and Altman certainly fits that bill.

    No other innovation in the past year has had an impact in such disparate realms. OpenAI publicly launched its ChatGPT chatbot late last year, and as the technology went viral in 2023, it upended the stock market, Silicon Valley and companies that wouldn’t normally be classified as technology businesses. The ensuing product development and surge in generative AI investment revitalized a tech industry that had sunk into the doldrums amid a pandemic hangover.

    Admittedly, it will take time for companies to realize the true financial benefits of AI: Nvidia Corp. (NVDA) is among the few to generate serious money from the frenzy so far. But market researcher IDC predicted that global spending on AI, including software, hardware and services for AI-centric systems, will reach $154 billion this year, up 27% from a year ago. That total could zoom above $300 billion by 2026.

    Also read: One year after its launch, ChatGPT has succeeded in igniting a new era in tech

    And AI isn’t only impacting the corporate world. The technology is already affecting our daily lives, and it will have even deeper effects going forward. Chatbots are getting smarter on websites, facilitating better customer service. They’re starting to alter the workplace as well, spitting out mostly coherent marketing copy, research and even, gasp, news articles — albeit with plenty of errors.

    At first, ChatGPT seemed like a fun way to kill time or get homework help, but the chatbot and its ilk will seriously alter the working world, helping to eliminate perhaps millions of jobs. Morgan Stanley recently predicted that more than 40% of occupations will be affected by generative AI in the next three years.

    Altman himself has been the face of OpenAI in the past year. He’s talked up the technology, but he also appeared at congressional hearings in May to discuss potential regulation of AI, testifying that “if this technology goes wrong, it can go quite wrong.” His recent firing and quick rehiring by OpenAI and its small, nonprofit board late last month fueled a veritable media storm before the Thanksgiving holiday in the U.S.

    Time chooses its persons of the year for their impact, not because they’re saints. And Altman’s own story is not without controversy. The recent brouhaha over his leadership of OpenAI is believed to have been caused by a deep schism over the ethics of AI development. The board seemingly wanted more guardrails and precautions, and feared that rushed development could irrevocably doom mankind.

    Read in the Wall Street Journal: How effective altruism split Silicon Valley and fueled the blowup at OpenAI

    Altman, who also wooed Microsoft Corp. (MSFT) to become an investor in OpenAI, emerged the victor in the upheaval involving his own company’s altruistic board. Had Altman truly been fired from OpenAI, Microsoft was planning to hire him, and nearly every employee at OpenAI was ready to quit and follow him there. While OpenAI faces plenty of competition, including from Alphabet Inc.’s (GOOG, GOOGL) Google, Altman should continue to be the face of AI development, for good and for bad, even as he has advocated industry regulation.

    The debut and influence of ChatGPT and follow-on AI products are having the biggest impact on tech development since the invention of the iPhone. Altman is at the center of it and leading the charge. Whether he can keep the lid on Pandora’s Box or not depends on many factors, but he and the company he leads are clearly driving a new tech movement that affects us all, whether we like it or not.

  • Europe's world-leading artificial intelligence rules are facing a do-or-die moment

    LONDON — Hailed as a world first, European Union artificial intelligence rules are facing a make-or-break moment as negotiators try to hammer out the final details this week — talks complicated by the sudden rise of generative AI that produces human-like work.

    First suggested in 2019, the EU’s AI Act was expected to be the world’s first comprehensive AI regulations, further cementing the 27-nation bloc’s position as a global trendsetter when it comes to reining in the tech industry.

    But the process has been bogged down by a last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI’s ChatGPT and Google’s Bard chatbot. Big tech companies are lobbying against what they see as overregulation that stifles innovation, while European lawmakers want added safeguards for the cutting-edge AI systems those companies are developing.

    Meanwhile, the U.S., U.K., China and global coalitions like the Group of 7 major democracies have joined the race to draw up guardrails for the rapidly developing technology, underscored by warnings from researchers and rights groups of the existential dangers that generative AI poses to humanity as well as the risks to everyday life.

    “Rather than the AI Act becoming the global gold standard for AI regulation, there’s a small but growing chance that it won’t be agreed before the European Parliament elections” next year, said Nick Reiners, a tech policy analyst at Eurasia Group, a political risk advisory firm.

    He said “there’s simply so much to nail down” at what officials are hoping is a final round of talks Wednesday. Even if they work late into the night as expected, they might have to scramble to finish in the new year, Reiners said.

    When the European Commission, the EU’s executive arm, unveiled the draft in 2021, it barely mentioned general purpose AI systems like chatbots. The proposal to classify AI systems by four levels of risk — from minimal to unacceptable — was essentially intended as product safety legislation.

    Brussels wanted to test and certify the information used by algorithms powering AI, much like consumer safety checks on cosmetics, cars and toys.

    That changed with the boom in generative AI, which sparked wonder by composing music, creating images and writing essays resembling human work. It also stoked fears that the technology could be used to launch massive cyberattacks or create new bioweapons.

    The risks led EU lawmakers to beef up the AI Act by extending it to foundation models. Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet.

    Foundation models give generative AI systems such as ChatGPT the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

    Chaos last month at Microsoft-backed OpenAI, which built one of the most famous foundation models, GPT-4, reinforced for some European leaders the dangers of allowing a few dominant AI companies to police themselves.

    While CEO Sam Altman was fired and swiftly rehired, some board members with deep reservations about the safety risks posed by AI left, signaling that AI corporate governance could fall prey to boardroom dynamics.

    “At least things are now clear” that companies like OpenAI defend their businesses and not the public interest, European Commissioner Thierry Breton told an AI conference in France days after the tumult.

    Resistance to government rules for these AI systems came from an unlikely place: France, Germany and Italy. The EU’s three largest economies pushed back with a position paper advocating for self-regulation.

    The change of heart was seen as a move to help homegrown generative AI players such as French startup Mistral AI and Germany’s Aleph Alpha.

    Behind it “is a determination not to let U.S. companies dominate the AI ecosystem like they have in previous waves of technologies such as cloud (computing), e-commerce and social media,” Reiners said.

    A group of influential computer scientists published an open letter warning that weakening the AI Act this way would be “a historic failure.” Executives at Mistral, meanwhile, squabbled online with a researcher from an Elon Musk-backed nonprofit that aims to prevent “existential risk” from AI.

    AI is “too important not to regulate, and too important not to regulate well,” Google’s top legal officer, Kent Walker, said in a Brussels speech last week. “The race should be for the best AI regulations, not the first AI regulations.”

    Foundation models, used for a wide range of tasks, are proving the thorniest issue for EU negotiators because regulating them “goes against the logic of the entire law,” which is based on risks posed by specific uses, said Iverna McGowan, director of the Europe office at the digital rights nonprofit Center for Democracy and Technology.

    The nature of general purpose AI systems means “you don’t know how they’re applied,” she said. At the same time, regulations are needed “because otherwise down the food chain there’s no accountability” when other companies build services with them, McGowan said.

    Altman has proposed a U.S. or global agency that would license the most powerful AI systems. He suggested this year that OpenAI could leave Europe if it couldn’t comply with EU rules but quickly walked back those comments.

    Aleph Alpha said a “balanced approach is needed” and supported the EU’s risk-based approach. But that approach is “not applicable” to foundation models, which need “more flexible and dynamic” regulations, the German AI company said.

    EU negotiators still have yet to resolve a few other controversial points, including a proposal to completely ban real-time public facial recognition. Countries want an exemption so law enforcement can use it to find missing children or terrorists, but rights groups worry that will effectively create a legal basis for surveillance.

    The EU’s three branches of government are facing one of their last chances to reach a deal Wednesday.

    Even if they do, the bloc’s 705 lawmakers still must sign off on the final version. That vote needs to happen by April, before they start campaigning for EU-wide elections in June. The law wouldn’t take force before a transition period, typically two years.

    If they can’t make it in time, the legislation would be put on hold until later next year — after new EU leaders, who might have different views on AI, take office.

    “There is a good chance that it is indeed the last one, but there is equally a chance that we would still need more time to negotiate,” Dragos Tudorache, a Romanian lawmaker co-leading the European Parliament’s AI Act negotiations, said in a panel discussion last week.

    His office said he wasn’t available for an interview.

    “It’s a very fluid conversation still,” he told the event in Brussels. “We’re going to keep you guessing until the very last moment.”

  • In Congress and in class: Rep. Don Beyer works toward master's degree in AI

    In Congress and in class: Rep. Don Beyer works toward master's degree in AI

    [Photo: Rep. Don Beyer, D-Va. (Bill Clark | CQ-Roll Call, Inc. | Getty Images)]

    WASHINGTON — Don Beyer isn’t the average student at George Mason University. He’s 73 years old. He prefers a notebook and pen to a laptop for note-taking. And he’s a top lawmaker on AI policy in Congress.

    The Virginia Democrat found AI fascinating, but the breakthrough came when he realized he could enroll in computer science classes at George Mason. So he did, starting with the prerequisite classes that will ultimately lead him to a master’s degree in machine learning.

    Beyer can only take about one class a semester, as he balances voting on the floor, working on legislation and fundraising with getting his coding homework done. But the classes are already providing benefits. 

    “With every additional course I take, I think I have a better understanding of how the actual coding works,” he recently told CNBC. “What it means to have big datasets, what it means to look for these linkages and also, perhaps, what it means to have unintended consequences.”

    Beyer is part of almost every group of House lawmakers working on AI. He’s vice chair of both the bipartisan Congressional Artificial Intelligence Caucus and a newer AI working group started by the New Democrat Coalition, the largest group of centrist Democrats in the House.

    He was also a member of former Speaker Kevin McCarthy’s working group on AI, which could be resurrected under Speaker Mike Johnson. On the legislative side, he’s a leader on a bill to expand access to high-powered computational tools needed to develop AI.

    Crash course

    As members of Congress raced to get themselves up to speed on AI this fall with hearings, forums and a dinner with OpenAI CEO Sam Altman, Beyer said his classroom time has given him a perspective on what goes on under the hood.

    He’s also learning how easy it can be for a small mistake to have a major impact on code. Beyer said one of his daughters, who is also a coder, sent him a big book about debugging programs that was “very, very long.”

    “You make big mistakes, then you make stupid little mistakes that take you hours to find. And you realize how imperfect any technology is,” he said. “That’s going to drive a lot of trying to defend against the downside risks of AI.”
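
    For readers who haven’t written code, the “stupid little mistakes” Beyer describes often look like the illustrative example below (not from his coursework); a single wrong index silently skews the result:

    ```python
    def average(values):
        total = 0
        for i in range(1, len(values)):  # bug: starts at 1, silently skips values[0]
            total += values[i]
        return total / len(values)

    print(average([10, 20, 30]))  # prints 16.66..., not the expected 20.0
    ```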

    Congress is grappling with how to move forward on AI.

    In the House, Rep. Jay Obernolte, R-Calif., who served on McCarthy’s AI working group with Beyer, told CNBC he’s spoken briefly with Johnson, R-La., and the speaker is interested in getting the AI group started again soon, after more pressing battles such as government funding are over.

    Obernolte said there were a few different directions the House could head in on AI, including enacting digital privacy protections for consumers or deciding whether a new federal agency should oversee AI or whether existing agencies should each handle the issue.

    Obernolte, who has a master’s degree in artificial intelligence, said there’s no shortage of smart lawmakers on AI, including Beyer.

     “Don is wonderful, very knowledgeable, you know, really has a passion for this particular issue,” he said. 

    ‘Time is of the essence’

    Another issue Congress has its eye on is the ease of spreading videos and photos that look real but are generated by AI — particularly ones showing events that never happened, or real people saying things they never actually said, which could ultimately impact elections.

    Rep. Derek Kilmer, D-Wash., who chairs the New Democrats’ AI working group, said the 2024 election lends fresh urgency to figuring out how to minimize the impact of misleading or false media. 

    “The implications for the spread of misinformation for the integrity of our public discourse or democracy are significant,” Kilmer told CNBC. “And that is driving this push.”

    Senate Majority Leader Chuck Schumer, D-N.Y., recently said “time is of the essence” when it comes to dealing with AI-generated videos and photos. “It may be the thing we have to do first, when it comes to legislation and creating guardrails in AI.”

    Still, Beyer is worried Congress won’t move quickly enough to keep up with the rapid pace of new AI models.

    “What we’re trying to do is not replicate our failures on social media, where for 20-plus years we’ve not regulated at all,” said Beyer. “Social media has had wonderful positive effects, but also some pretty scary downsides to misinformation, disinformation.”

    Beyer acknowledged that due to fights over spending and the House speaker’s gavel, it wasn’t likely Congress would be able to pass AI legislation this year. But he’s hopeful something can move next year, ahead of the 2024 election.

  • Amazon launches Q, a business chatbot powered by generative artificial intelligence

    Amazon launches Q, a business chatbot powered by generative artificial intelligence

    NEW YORK — Amazon finally has its answer to ChatGPT.

    The tech giant said Tuesday it will launch Q — a business chatbot powered by generative artificial intelligence.

    The announcement, made in Las Vegas at an annual conference the company hosts for its AWS cloud computing service, represents Amazon’s response to rivals who’ve rolled out chatbots that have captured the public’s attention.

    San Francisco startup OpenAI’s release of ChatGPT a year ago sparked a surge of public and business interest in generative AI tools that can spit out emails, marketing pitches, essays, and other passages of text that resemble the work of humans.

    That attention initially gave an advantage to OpenAI’s chief partner and financial backer, Microsoft, which has rights to the underlying technology behind ChatGPT and has used it to build its own generative AI tools known as Copilot. But it also spurred competitors like Google to launch their own versions.

    These chatbots are a new generation of AI systems that can converse, generate readable text on demand and even produce novel images and video based on what they’ve learned from a vast database of digital books, online writings and other media.

    Amazon said Tuesday that Q can do things like synthesize content, streamline day-to-day communications and help employees with tasks like generating blog posts. It said companies can also connect Q to their own data and systems to get a tailored experience that’s more relevant to their business.

    The technology is currently available for preview.

    While Amazon is ahead of rivals Microsoft and Google as the dominant cloud computing provider, it’s not perceived as the leader in the AI research that’s led to advancements in generative AI.

    A recent Stanford University index that measured the transparency of the top 10 foundation AI models, including Amazon’s Titan, ranked Amazon at the bottom. Stanford researchers said less transparency can make it harder for customers who want to use the technology to know if they can safely rely on it, among other problems.

    The company, meanwhile, has been forging ahead. In September, Amazon said it would invest up to $4 billion in the AI startup Anthropic, a San Francisco-based company founded by former staffers from OpenAI.

    The tech giant also has been rolling out new services, including an update for its popular assistant Alexa so users can have more human-like conversations, as well as AI-generated summaries of product reviews for consumers.

  • OpenAI rival HuggingFace says it’s seeing more client interest after Sam Altman fiasco

    OpenAI rival HuggingFace says it’s seeing more client interest after Sam Altman fiasco

    [Photo: Hugging Face CEO Clement Delangue (R) departs the closed-door “AI Insight Forum” outside the Kennedy Caucus Room in the Russell Senate Office Building on Capitol Hill on September 13, 2023 in Washington, DC. (Chip Somodevilla | Getty Images)]

    Artificial intelligence startup Hugging Face says it’s seeing increased interest from potential customers following the chaos at rival OpenAI.

    “I think a lot of companies, organizations now are kind of wondering about the risk of outsourcing their AI to just one AI provider,” Hugging Face CEO Clément Delangue said Tuesday on a call with reporters. “It creates, obviously, a single point of failure for them — a little bit for the field too — so they’re investigating different solutions.”
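
    The hedge Delangue describes is simple to picture in code: route requests across several vendors so no one outage takes a product down. A minimal sketch, with hypothetical provider callables standing in for real client libraries:

    ```python
    from typing import Callable

    def make_resilient_client(providers: list[tuple[str, Callable[[str], str]]]):
        """Return a completion function that falls back provider by provider."""
        def complete(prompt: str) -> str:
            errors = []
            for name, call in providers:
                try:
                    return call(prompt)  # first healthy provider wins
                except Exception as exc:  # outage, rate limit, policy change...
                    errors.append(f"{name}: {exc}")
            raise RuntimeError("all providers failed: " + "; ".join(errors))
        return complete

    # Hypothetical usage:
    # client = make_resilient_client([("provider_a", call_a), ("provider_b", call_b)])
    # answer = client("Summarize this contract")
    ```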

    Hugging Face, whose platform is used by developers to share AI model code, isn’t the first highly valued startup in the space to share that observation in the 11 days since Sam Altman was ousted as OpenAI CEO and then quickly reinstated following an uproar among employees and investors.

    Cohere, the Toronto-based large language model (LLM) startup valued at more than $2 billion, has also seen more inquiries from prospective customers, according to spokesperson Josh Gartner.

    Companies “want outstanding, reliable business solutions, not soap operas,” Gartner told CNBC on Tuesday.

    Multiple companies told CNBC they’d considered switching from OpenAI to competitors’ services as the confusion swelled inside the company.

    Cohere’s top executives, CEO Aidan Gomez and COO Martin Kon, sent a note to investors last week, prior to Altman’s return, informing them that the company’s mission and strategy are “shared fully” by its leadership team, company, enterprise customers and investors.

    The letter didn’t name OpenAI but referred to “the events with one of our competitors over the weekend.” OpenAI has a unique structure in that the parent entity is a nonprofit, with a so-called capped-profit company underneath that umbrella. The board represents the nonprofit and oversees the activities of Altman and the rest of the corporate team.

    Cohere’s executives laid out a clear contrast in how their company is run.

    “Cohere is fortunate to have a board of extremely experienced external board members who are professional investors with significant stakes in the company, in addition to founder control of the board,” they wrote.

    With the challenges at OpenAI, they added, “We believe enterprises feel more than ever that they need a stable and reliable LLM provider.”

    Hugging Face, which is based in New York, raised $235 million in August at a valuation of $4.5 billion. Investors include Google, Amazon, Nvidia, Salesforce, AMD, Intel, IBM and Qualcomm.

    Anthropic, founded by ex-OpenAI employees, declined to comment on whether it’s seen increased interest in the past couple weeks. The company is valued at $4.1 billion. A representative for Google’s Bard team, which competes with OpenAI’s ChatGPT, also declined to comment.

  • Meta turned a blind eye to kids on its platforms for years, unredacted lawsuit alleges | TechCrunch

    Meta turned a blind eye to kids on its platforms for years, unredacted lawsuit alleges | TechCrunch

    A newly unredacted version of the multi-state lawsuit against Meta alleges a troubling pattern of deception and minimization in how the company handles kids under 13 on its platforms. Internal documents appear to show that the company’s approach to this ostensibly forbidden demographic is far more laissez-faire than it has publicly claimed.

    The lawsuit, filed last month, alleges a wide spread of damaging practices at the company relating to the health and well-being of younger people using its platforms. From body image to bullying, privacy invasion to engagement maximization, all the purported evils of social media are laid at Meta’s door — perhaps rightly, but it also gives the appearance of a lack of focus.

    In one respect at least, however, the documentation obtained by the Attorneys General of 42 states is quite specific, “and it is damning,” as AG Rob Bonta of California put it. That is in paragraphs 642 through 835, which mostly document violations of the Children’s Online Privacy Protection Act, or COPPA. This law created very specific restrictions around young folks online, limiting data collection and requiring things like parental consent for various actions, but a lot of tech companies seem to consider it more suggestion than requirement.
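
    In operational terms, COPPA’s “actual knowledge” standard works roughly like the gate sketched below; the field names are hypothetical and this compresses a statute into a few lines, but it shows why ignoring signals of a user’s age matters legally:

    ```python
    from dataclasses import dataclass

    @dataclass
    class User:
        age: int | None         # self-reported; may be absent or false
        flagged_under_13: bool  # e.g., a parent's report or an interview disclosure
        parental_consent: bool  # verifiable consent on file

    def may_collect_data(user: User) -> bool:
        """Once a service has actual knowledge a user is under 13,
        collection is lawful only with verifiable parental consent."""
        knows_under_13 = user.flagged_under_13 or (user.age is not None and user.age < 13)
        return user.parental_consent if knows_under_13 else True
    ```

    Much of what follows is, on the lawsuit’s telling, Meta arranging its processes so that the equivalent of `flagged_under_13` never formally becomes true.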

    You know it is bad news for a company when it requests pages and pages of redactions:

    [Image: pages of redactions in the filing. Credits: TechCrunch / 42 AGs]

    This recently happened with Amazon as well, and it turned out they were trying to hide the existence of a price-hiking algorithm that skimmed billions from consumers. But it’s much worse when you’re redacting COPPA complaints.

    “We’re very bullish and confident in our COPPA allegations. Meta is knowingly taking steps that harm children, and lying about it,” AG Bonta told TechCrunch in an interview. “In the unredacted complaint we see that Meta knows that its social media platforms are used by millions of kids under 13, and they unlawfully collect their personal info. It shows that common practice where Meta says one thing in its public facing comments to Congress and other regulators, while internally it says something else.”

    The lawsuit argues that “Meta does not obtain—or even attempt to obtain—verifiable parental consent before collecting the personal information of children on Instagram and Facebook… But Meta’s own records reveal that it has actual knowledge that Instagram and Facebook target and successfully enroll children as users.”

    Essentially, while the problem of identifying kids’ accounts created in violation of platform rules is certainly a difficult one, Meta allegedly opted to turn a blind eye for years rather than enact more stringent rules that would necessarily impact user numbers.

    Here are a few of the most striking parts of the suit. While some of these allegations relate to practices from years ago, bear in mind that Meta (then Facebook) has spent a decade publicly saying it doesn’t allow kids on the platform and that it diligently works to detect and expel them.

    Meta has internally tracked and documented under-13s, or U13s, in its audience breakdowns for years, as charts in the filing show. In 2018, for instance, it noted that 20 percent of 12-year-olds on Instagram used it daily. And this was not in a presentation about how to remove them — it related to market penetration. The other chart shows Meta’s “knowledge that 20-60% of 11- to 13-year-old users in particular birth cohorts had actively used Instagram on at least a monthly basis.”

    The newly unredacted chart shows that Meta tracked under-13 users closely.

    It’s hard to square this with the public position that users this age are not welcome. And it isn’t because leadership wasn’t aware.

    That same year, 2018, CEO Mark Zuckerberg received a report estimating that there were approximately 4 million people under 13 on Instagram in 2015, amounting to about a third of all 10-to-12-year-olds in the U.S. Those numbers are obviously dated, but even so they are surprising. Meta has never, to our knowledge, admitted to having such enormous numbers and proportions of under-13 users on its platforms.

    Not externally, at least. Internally, the numbers appear to be well documented. For instance, as the lawsuit alleges:

    Meta possesses data from 2020 indicating that, out of 3,989 children surveyed, 31% of child respondents aged 6-9 and 44% of child respondents aged 10 to 12-years-old had used Facebook.

    It’s difficult to extrapolate from the 2015 and 2020 numbers to today’s (which, as we have seen from the evidence presented here, will almost certainly not be the whole story), but Bonta noted that the large figures are presented for impact, not as legal justification.

    “The basic premise remains that their social media platforms are used by millions of children under 13. Whether it’s 30 percent, or 20 or 10 percent… any child, it’s illegal,” he said. “If they were doing it at any time, it violated the law at that time. And we are not confident that they have changed their ways.”

    An internal presentation called “2017 Teens Strategic Focus” appears to specifically target kids under 13, noting that children use tablets as early as 3 or 4, and “Social identity is an Unmet need Ages 5-11.” One stated goal, according to the lawsuit, was specifically to “grow [Monthly Active People], [Daily Active People] and time spent among U13 kids.”

    It’s important to note here that while Meta does not permit accounts to be run by people under 13, there are plenty of ways it can lawfully and safely engage with that demographic. Some kids just want to watch videos from Spongebob Official, and that’s fine. However, Meta must verify parental consent, and the ways it can collect and use their data are limited.

    But the redactions suggest these under-13 users are not of the lawfully and safely engaged type. Reports of underage accounts are said to be automatically ignored, and Meta “continues collecting the child’s personal information if there are no photos associated with the account.” Of 402,000 reports of accounts owned by users under 13 in 2021, fewer than 164,000 were disabled. And these actions reportedly don’t cross between platforms, meaning an Instagram account being disabled doesn’t flag associated or linked Facebook or other accounts.

    Zuckerberg testified to Congress in March of 2021 that “if we detect someone might be under the age of 13, even if they lied, we kick them off.” (And “they lie about it a TON,” one research director said in another quote.) But documents from the next month cited by the lawsuit indicate that “Age verification (for under 13) has a big backlog and demand is outpacing supply” due to a “lack of [staffing] capacity.” How big a backlog? At times, the lawsuit alleges, on the order of millions of accounts.

    A potential smoking gun is found in a series of anecdotes from Meta researchers delicately avoiding the possibility of inadvertently confirming an under-13 cohort in their work.

    One wrote in 2018: “We just want to make sure to be sensitive about a couple of Instagram-specific items. For example, will the survey go to under 13 year olds? Since everyone needs to be at least 13 years old before they create an account, we want to be careful about sharing findings that come back and point to under 13 year olds being bullied on the platform.”

    In 2021, another, studying “child-adult sexual-related content/behavior/interactions” (!) said she was “not includ[ing] younger kids (10-12 yos) in this research” even though there “are definitely kids this age on IG,” because she was “concerned about risks of disclosure since they aren’t supposed to be on IG at all.”

    Also in 2021, Meta instructed a third party research company conducting a survey of preteens to remove any information indicating a survey subject was on Instagram, so the “company won’t be made aware of under 13.”

    Later that year, external researchers provided Meta with information that “of children ages 9-12, 45% used Facebook and 40% used Instagram daily.”

    During an internal 2021 study on youth in social media, they first asked parents if their kids are on Meta platforms and removed them from the study if so. But one researcher asked, “What happens to kids who slip through the screener and then say they are on IG during the interviews?” Instagram Head of Public Policy Karina Newton responded, “we’re not collecting user names right?” In other words, what happens is nothing.

    As the lawsuit puts it:

    Even when Meta learns of specific children on Instagram through interviews with the children, Meta takes the position that it still lacks actual knowledge that it is collecting personal information from an under-13 user because it does not collect user names while conducting these interviews. In this way, Meta goes to great lengths to avoid meaningfully complying with COPPA, looking for loopholes to excuse its knowledge of users under the age of 13 and maintain their presence on the Platform.

    The other complaints in the lengthy lawsuit have softer edges, such as the argument that use of the platforms contributes to poor body image and that Meta has failed to take appropriate measures. That’s arguably not as actionable. But the COPPA allegations are far more cut and dried.

    “We have evidence that parents are sending notes to them about their kids being on their platform, and they’re not getting any action. I mean, what more should you need? It shouldn’t even have to get to that point,” Bonta said.

    “These social media platforms can do anything they want,” he continued. “They can be operated by a different algorithm, they can have plastic surgery filters or not have them, they can give you alerts in the middle of the night or during school, or not. They choose to do things that maximize the frequency of use of that platform by children, and the duration of that use. They could end all this today if they wanted, they could easily keep those under 13 from accessing their platform. But they’re not.”

    You can read the mostly unredacted complaint here.

    TechCrunch has contacted Meta for comment on the lawsuit and some of these specific allegations, and will update this post if we hear back.

    Devin Coldewey

  • Nvidia shares close down on report it delays China AI chip designed to comply with U.S. export rules

    Nvidia shares close down on report it delays China AI chip designed to comply with U.S. export rules

    Nvidia is delaying a new artificial intelligence chip for China that has been designed to comply with U.S. export restrictions, according to Reuters.

    Nvidia shares closed down about 1.9% after a shortened trading day in the U.S.

    Reuters, citing two sources familiar with the matter, reported that Nvidia told Chinese customers that it is delaying the launch of an AI chip that is designed to comply with U.S. export rules until the first quarter of next year.

    The new chip, called the H20, was being delayed due to issues server manufacturers were having while integrating the semiconductor into their products, Reuters reported.

    Nvidia was not immediately available for comment when contacted by CNBC.

    In October, the U.S. government further tightened export curbs on AI chips to China. Those rules restricted the export of Nvidia’s A800 and H800 chips. These semiconductors were also specifically designed for China.

    Alongside the H20, Nvidia is also gearing up to launch two other export-compliant chips, called the L20 and L2, Reuters reported.

    The delay to the H20 could be a setback for Nvidia, which makes around a fifth of its revenue from China and is facing competition from local players such as Huawei.

    Even as Nvidia reported this week that it tripled its revenue in the September quarter, the company warned sales in regions affected by export restrictions will “decline significantly” in the current quarter.

  • China’s Alibaba shakes up cloud unit management after scrapping the division’s IPO

    China’s Alibaba shakes up cloud unit management after scrapping the division’s IPO

    [Photo: The World Artificial Intelligence Conference in Shanghai in July 2023. (Aly Song | Reuters)]

    Alibaba has begun an overhaul of its cloud computing unit, bringing in veterans to new leadership positions, as it looks to revive the division after canceling its public listing.

    The move underscores the Chinese technology giant’s desire to tap the boom in artificial intelligence that relies on the infrastructure built up by cloud players.

    Alibaba will put a bigger emphasis on three business units within the cloud space — public cloud, hybrid cloud and cloud infrastructure. While these groups existed previously, Alibaba has newly installed executives who oversee the divisions and report to the company’s top leadership.

    Weiguang Liu will lead the public cloud division, a person familiar with the matter who was not authorized to speak publicly on it told CNBC, while Jin Li will lead the hybrid cloud unit. Both executives will report to Alibaba’s group CEO Eddie Wu, the source said.

    Jiangwei Jiang will lead the cloud infrastructure unit, reporting to the cloud division’s Chief Technology Officer Jingren Zhou, the person added.

    All three executives of the cloud business units are Alibaba veterans. The company was not immediately available for comment.

    Chinese publication Leifeng first reported the news.

    The shakeup comes after Alibaba’s surprise move last week to cancel the highly anticipated initial public offering of its cloud unit, a decision that wiped more than $20 billion off the company’s market value.

    Alibaba underwent the biggest restructuring in its history this year, splitting the company into six business units. Daniel Zhang stepped down from the CEO role in September, then quit as the head of the cloud business weeks later.

    Alibaba has faced intensifying competition in China’s cloud market, particularly when pursuing customers in the state-owned enterprise and government sector.

    Last week, Wu said the company would put more emphasis on the so-called public cloud, which targets enterprises in China, rather than on government customers.

    The company will focus on AI in the cloud, as AI applications require vast amounts of computing power that cloud computing firms can offer. Alibaba hopes to capitalize on this prospect.

    “The cloud intelligence group will resolutely implement a strategy of driving growth with AI and of prioritizing public cloud. It will scale up its technology investments in AI-related software and hardware,” Wu said.

    “In the future, incremental demand for cloud computing will be driven by demand for AI, and most AI computing will run in the cloud.”

  • California examines benefits, risks of using artificial intelligence in state government

    California examines benefits, risks of using artificial intelligence in state government

    Artificial intelligence that can generate text, images and other content could help improve state programs but also poses risks, according to a report released by the governor’s office on Tuesday.

    Generative AI could help quickly translate government materials into multiple languages, analyze tax claims to detect fraud, summarize public comments and answer questions about state services. Still, deploying the technology, the analysis warned, also comes with concerns around data privacy, misinformation, equity and bias.

    “When used ethically and transparently, GenAI has the potential to dramatically improve service delivery outcomes and increase access to and utilization of government programs,” the report stated.

    The 34-page report, ordered by Gov. Gavin Newsom, provides a glimpse into how California could apply the technology to state programs even as lawmakers grapple with how to protect people without hindering innovation.

    Concerns about AI safety have divided tech executives. Leaders such as billionaire Elon Musk have sounded the alarm that the technology could lead to the destruction of civilization, noting that if humans become too dependent on automation they could eventually forget how machines work. Other tech executives have a more optimistic view about AI’s potential to help save humanity by making it easier to fight climate change and diseases.

    At the same time, major tech firms including Google, Facebook and Microsoft-backed OpenAI are competing with one another to develop and release new AI tools that can produce content.

    The report also comes as generative AI is reaching another major turning point. Last week, the board of ChatGPT maker OpenAI fired CEO Sam Altman for not being “consistently candid in his communications with the board,” thrusting the company and AI sector into chaos.

    On Tuesday night, OpenAI said it reached “an agreement in principle” for Altman to return as CEO and the company named members of a new board. The company faced pressure to reinstate Altman from investors, tech executives and employees, who threatened to quit. OpenAI hasn’t provided details publicly about what led to the surprise ousting of Altman, but the company reportedly had disagreements over keeping AI safe while also making money. A nonprofit board controls OpenAI, an unusual governance structure that made it possible to push out the CEO.

    Newsom called the AI report an “important first step” as the state weighs some of the safety concerns that come with AI.

    “We’re taking a nuanced, measured approach — understanding the risks this transformative technology poses while examining how to leverage its benefits,” he said in a statement.

    AI advancements could benefit California’s economy. The state is home to 35 of the world’s 50 top AI companies, and data from PitchBook suggests the GenAI market could reach $42.6 billion in 2023, the report said.

    Some of the risks outlined in the report include spreading false information, giving consumers dangerous medical advice and enabling the creation of harmful chemicals and nuclear weapons. Data breaches, privacy and bias are also top concerns along with whether AI will take away jobs.

    “Given these risks, the use of GenAI technology should always be evaluated to determine if this tool is necessary and beneficial to solve a problem compared to the status quo,” the report said.

    As the state works on guidelines for the use of generative AI, the report said that in the interim state employees should abide by certain principles to safeguard the data of Californians. For example, state employees shouldn’t provide Californians’ data to generative AI tools such as ChatGPT or Google Bard or use unapproved tools on state devices, the report said.

    AI’s potential uses go beyond state government. Law enforcement agencies such as the Los Angeles Police Department are planning to use AI to analyze the tone and word choice of officers in body camera videos.

    California’s efforts to regulate some of the safety concerns surrounding AI, such as bias, didn’t gain much traction during the last legislative session. But lawmakers have introduced new bills to tackle some of AI’s risks, such as protecting entertainment workers from being replaced by digital clones, that they will take up when they return in January.

    Meanwhile, regulators around the world are still figuring out how to protect people from AI’s potential risks. In October, President Biden issued an executive order that outlined standards around safety and security as developers create new AI tools. AI regulation was a major issue of discussion at the Asia-Pacific Economic Cooperation meeting in San Francisco last week.

    During a panel discussion with executives from Google and Facebook’s parent company, Meta, Altman said he thought that Biden’s executive order was a “good start” even though there were areas for improvement. Current AI models, he said, are “fine” and “heavy regulation” isn’t needed but he expressed concern about the future.

    “At some point when the model can do the equivalent output of a whole company and then a whole country and then the whole world, like maybe we do want some sort of collective global supervision of that,” he said, a day before he was fired as OpenAI’s CEO.

    Queenie Wong
