ReportWire

Tag: openai

  • OpenAI, Google, Apple chiefs to discuss innovation with Biden and Modi at White House

    The heads of prominent U.S. and Indian companies will meet at the White House on Friday with President Joe Biden and Indian Prime Minister Narendra Modi to discuss investment in areas including artificial intelligence.

    Those attending include Sam Altman, CEO of OpenAI, as well as Apple CEO Tim Cook and Google CEO Sundar Pichai. Indian company executives include Mukesh Ambani, chair of Reliance Industries, and Anand Mahindra, chair of Mahindra Group.

    “The president and Prime Minister Modi of the Republic of India will meet with senior officials and CEOs of American and Indian companies gathered to discuss innovation, investment, and manufacturing in a variety of technology sectors, including AI, semiconductors, and space,” the White House said.

    Friday’s meeting is part of Modi’s high-profile visit to Washington, which included a state dinner at the White House and the announcement of a number of business deals.

    Now read: Jet engine, drone deals unveiled as Biden meets India’s Modi


  • Silicon Valley is knowingly violating A.I. ethical principles. Society can’t respond if we let disagreements poison the debate

    With criticism of ChatGPT much in the news, we are also increasingly hearing about disagreements among thinkers who are critical of A.I. While debating about such an important issue is natural and expected, we can’t allow differences to paralyze our very ability to make progress on A.I. ethics at this pivotal time. Today, I fear that those who should be natural allies across the tech/business, policy, and academic communities are instead increasingly at each other’s throats. When the field of A.I. ethics appears divided, it becomes easier for vested interests to brush aside ethical considerations altogether.

    Such disagreements need to be understood in the context of how we reached the current moment of excitement around the rapid advances in large language models and other forms of generative A.I.

    OpenAI, the company behind ChatGPT, was initially set up as a non-profit amid much fanfare about a mission to solve the A.I. safety problem. However, as it became clear that OpenAI’s work on large language models was lucrative, OpenAI pivoted to a for-profit structure. It deployed ChatGPT and partnered with Microsoft, which has consistently sought to depict itself as the tech corporation most concerned about ethics.

    Both companies knew that ChatGPT violates, for example, the globally endorsed UNESCO AI ethical principles. OpenAI even refused to publicly release a previous version of GPT, citing worry about much the same kinds of potential for misuse we are now witnessing. But for OpenAI and Microsoft, the temptation to win the corporate race trumped ethical considerations. This has nurtured a degree of cynicism about relying on corporate self-governance or even governments to put in place necessary safeguards.

    We should not be too cynical about the leadership of these two companies, which are trapped between their fiduciary responsibility to shareholders and a genuine desire to do the right thing. They remain people of good intent, as are all raising concerns about the trajectory of A.I.

    This tension is perhaps best exemplified in a recent tweet by U.S. Senator Chris Murphy (D-CT) and the response by the A.I. community. In discussing ChatGPT, Murphy tweeted: “Something is coming. We aren’t ready.” And that’s when the A.I. researchers and ethicists piled on. They proceeded to criticize the Senator for not understanding the technology, indulging in futuristic hype, and focusing attention on the wrong issues. Murphy hit back at one critic: “I think the effect of her comments is very clear, to try to stop people like me from engaging in conversation, because she’s smarter and people like her are smarter than the rest of us.”

    I am saddened by disputes such as these. The concerns that Murphy raised are valid, and we need political leaders who are engaged in developing legal safeguards. His critic, however, is not wrong in questioning whether we are focusing attention on the right issues.

    To help us understand the different priorities of the various critics and, hopefully, move beyond these potentially damaging divisions, I want to propose a taxonomy for the plethora of ethical concerns raised about the development of A.I. I see three main baskets: 

    The first basket has to do with social justice, fairness, and human rights. For example, it is now well understood that algorithms can exacerbate racial, gender, and other forms of bias when they are trained on data that embodies those biases.

    The second basket is existential: Some in the A.I. development community are concerned that they are creating a technology that might threaten human existence. A 2022 poll of A.I. experts found that half expect A.I. to grow exponentially smarter than humans by 2059, and recent advances have prompted some to bring their estimates forward.

    The third basket relates to concerns about placing A.I. models in decision-making roles. Two technologies have provided focal points for this discussion: self-driving vehicles and lethal autonomous weapons systems. However, similar concerns arise as A.I. software modules become increasingly embedded in control systems in every facet of human life.

    Cutting across all these baskets is the potential misuse of A.I., such as spreading disinformation for political and economic gain, and the two-century-old concern about technological unemployment. While the history of economic progress has primarily involved machines replacing physical labor, A.I. applications can replace intellectual labor.

    I am sympathetic to all these concerns, though I have tended to be a friendly skeptic towards the more futuristic worries in the second basket. As with the above example of Senator Murphy’s tweet, disagreements among A.I. critics are often rooted in the fear that existential arguments will distract from addressing pressing issues about social justice and control.

    Moving forward, individuals will need to judge for themselves who they believe to be genuinely invested in addressing the ethical concerns of A.I. However, we cannot allow healthy skepticism and debate to devolve into a witch hunt among would-be allies and partners.

    Those within the A.I. community need to remember that what brings us together is more important than differences in emphasis that set us apart.

    This moment is far too important.

    Wendell Wallach is a Carnegie-Uehiro Fellow at Carnegie Council for Ethics in International Affairs, where he co-directs the Artificial Intelligence & Equality Initiative (AIEI). He is Emeritus Chair of the Technology and Ethics study group at the Yale University Interdisciplinary Center for Bioethics.

    The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

    Wendell Wallach

  • Apple Bans Employee ChatGPT Use Over Data, Privacy Concerns | Entrepreneur

    Apple has prohibited employees from using ChatGPT and other artificial intelligence tools over fears of leaking confidential information, The Wall Street Journal reported.

    According to an internal document viewed by the outlet, as well as individuals familiar with the matter, Apple has restricted the use of the prompt-driven chatbot along with Microsoft’s GitHub Copilot, which uses AI to help automate the writing of software code.

    The company fears that the AI programs could release confidential data from Apple, per the outlet.

    OpenAI (the creator of ChatGPT) stores chat history from interactions between the chatbot and its users, both to train the system and improve accuracy over time and to let OpenAI moderators review conversations for possible violations of the company’s terms of service.

    Related: Walmart Leaked Memo Warns Against Employees Sharing Corporate Information With ChatGPT

    While OpenAI released an option last month that lets users turn off chat history, the new feature still allows OpenAI to monitor conversations for “abuse,” retaining them for up to 30 days before deleting them permanently.

    A spokesperson for Apple told the WSJ that employees who want to use ChatGPT should instead use Apple’s own internal AI tool.

    Apple is not the first big company to ban the use of ChatGPT. Earlier this year, JP Morgan Chase, Goldman Sachs, and Verizon all banned the use of the AI-powered chatbot for employees over similar fears of data leakage.

    Earlier this week, OpenAI CEO Sam Altman spoke before Congress about the pressing need for government regulation of AI development, calling it “crucial.”

    Related: ‘If This Technology Goes Wrong, It Can Go Quite Wrong’: OpenAI CEO Sam Altman Speaks to Lawmakers About AI Risks, Says Government Intervention Is ‘Crucial’

    Madeline Garfinkle

  • AI Regulation Is ‘Crucial,’ Says OpenAI ChatGPT CEO | Entrepreneur

    Sam Altman, CEO of OpenAI (creator of the prompt-driven chatbot ChatGPT), knows that the acceleration of artificial intelligence and its potential risks are unsettling, at least to some.

    Altman spoke to the Senate Judiciary subcommittee on Tuesday in his first appearance before Congress, and said it is “crucial” that lawmakers implement safety standards and regulations for AI to “mitigate the risks of increasingly powerful models.”

    “We understand that people are anxious about how it can change the way we live. We are, too,” Altman said. “If this technology goes wrong, it can go quite wrong.”

    During the nearly three-hour hearing, Altman, along with two other witnesses (Professor Emeritus Gary Marcus and IBM’s Chief Privacy and Trust Officer, Christina Montgomery), spoke with nearly 60 lawmakers about the potential dangers of AI left unchecked, from job disruption to intellectual-property theft.

    “My worst fear is we cause significant harm to the world,” he said.

    One suggested move for lawmakers, Altman said, is to implement a licensing system for companies developing powerful AI systems. Lawmakers would outline a series of safety standards that companies must meet to be granted a license, and would also have the power to revoke a license if a company failed to comply with those standards.

    As far as the looming question of how AI will disrupt the job market, Altman agreed that the technology has the potential to eliminate many positions. However, he doesn’t think it means there won’t be new jobs created as well.

    Related: Goldman Sachs Says AI Could Replace The Equivalent of 300 Million Jobs — Will Your Job Be One of Them? Here’s How to Prepare.

    “I think, [AI can] entirely automate away some jobs,” he said. “And it will create new ones that we believe will be much better.”

    In March, tech magnates like Elon Musk called for a six-month pause on AI in an open letter. On Tuesday, in response to subcommittee member Sen. Josh Hawley’s question to the witnesses about the letter, Altman began by saying that the “frame of the letter is wrong,” and that what is important is audits and safety standards that need to pass before training the technology. He then added, “If we pause for six months, I’m not sure what we do then, do we pause for another six?”

    Altman also stated that OpenAI waited more than six months after building GPT-4 before releasing it to the public, and that the standards OpenAI has developed and applied before deploying technology are the direction the company “wants to go in,” rather than “a calendar clock pause.”

    The chair of the subcommittee, Sen. Richard Blumenthal, also weighed in and said that implementing a moratorium and “sticking our head in the sand” is not a viable solution. “The world won’t wait,” he said, adding that “safeguards and protections, yes, but a flat stop sign? I would be very worried about that.”

    It remains to be seen what actions, if any, the government will take on AI, but in closing remarks, Blumenthal said that “hard decisions” will need to be made, but, for now, companies developing AI should take a “do no harm” approach.

    Related: Google CEO Sundar Pichai Says There Is a Need For Governmental Regulation of AI: ‘There Has To Be Consequences’

    Madeline Garfinkle

  • 5 things to know for May 8: Texas shooting, King Charles, Title 42, Measles, ChatGPT | CNN


    American flags will be lowered to half-staff this week at the White House, on military bases, and at all public buildings to honor the victims of the deadly mass shooting in Texas over the weekend. In the wake of the massacre, President Joe Biden again urged Congress to act: “Too many families have empty chairs at their dinner tables. Tweeted thoughts and prayers are not enough,” he said.

    Here’s what else you need to know to Get Up to Speed and On with Your Day.


    Eight people were killed and at least seven others were wounded when a gunman opened fire at an outlet mall in Allen, Texas, on Saturday — the latest mass shooting to shatter an American community. A Dallas-area medical group said it was treating patients ranging in age from 5 to 61 years old. The 33-year-old shooter was killed by a police officer who was already at the Dallas-area mall on an unrelated call. The gunman was armed with an AR-15 style rifle and had multiple weapons in his vehicle, according to police. The shooter’s motive remains unclear at this time, but officials are investigating his potential ties to right-wing extremism after he was found with an insignia on his clothing worn by some members of extremist groups, a law enforcement source said. Officials have also found he had an extensive social media presence that included neo-Nazi and White supremacist-related posts.

    Britain’s King Charles III was crowned Saturday in a once-in-a-generation royal event witnessed by hundreds of high-profile guests inside Westminster Abbey, as well as tens of thousands of well-wishers who gathered in central London. Scores of foreign dignitaries, British officials, celebrities and faith leaders attended the deeply religious ceremony. Once the King was crowned, his wife, Queen Camilla, was crowned in her own shorter ceremony. On Sunday, thousands of events and parties took place across the UK as part of the “Coronation Big Lunch.” But the historic weekend was not without displays of dissent. Police arrested more than 50 people during the coronation after controversially promising a “robust” approach to protesters.

    Missed it? Here’s King Charles’ coronation in 3 minutes

    The US is expecting to see an influx of border crossings when Title 42, the Trump-era policy that allowed officials to swiftly expel migrants who crossed the border illegally during the Covid-19 pandemic, expires on Thursday. Without Title 42, the primary border enforcement tool since March 2020, authorities will be returning to decades-old protocols at a time of unprecedented mass migration in the region, raising concerns within the Biden administration about a surge in the immediate aftermath of the policy’s lifting. Also on Thursday, the House is set to vote on Republicans’ wide-ranging border security package, GOP leadership sources told CNN. Last month, House Majority Leader Steve Scalise said Republicans have the necessary votes to pass the legislation in the chamber.


    U.S. prepares for a surge of migrants ahead of the end of Title 42

    A child in Maine has tested positive for measles, officials said, marking the first case in the state since 2019. Measles was declared eliminated from the US in 2000 thanks to an intensive vaccination program, according to the CDC. But vaccination rates in the US have dropped in recent years, sparking new outbreaks. The CDC recommends all children get two doses of the MMR (measles-mumps-rubella) vaccine: the first dose at 12 to 15 months of age and the second between the ages of 4 and 6. The child who tested positive had received a dose of the measles vaccine, but is being considered infectious “out of an abundance of caution,” the Maine CDC said. There have been a total of 10 documented cases of measles in eight states this year.


    How vaccines stop the spread of viruses

    ChatGPT, a chatbot powered by artificial intelligence, can pick stocks better than your fund manager, analysts say. A recent experiment found that the bot far outperformed some popular UK investment funds, including ones managed by HSBC and Fidelity. Between March 6 and April 28, a dummy portfolio of 38 stocks gained 4.9% while 10 leading investment funds clocked an average loss of 0.8%, the results showed. The analysts asked ChatGPT to select stocks based on some common criteria, including picking companies with a low level of debt and a track record of growth. Microsoft, Netflix, and Walmart were among the companies selected. While major funds have used AI for years to support their investment decisions, analysts say ChatGPT has put the technology in the hands of the general public, and it is showing it can potentially disrupt the finance industry.
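
    For illustration only, here is a minimal Python sketch of how such an experiment might be wired up: prompting a chat model to screen a universe of tickers against simple criteria like low debt and a growth track record. The model name, prompt wording, and ticker list are hypothetical and are not the analysts' actual methodology; the sketch assumes the pre-1.0 openai Python package and an API key in the OPENAI_API_KEY environment variable.

        # Hypothetical sketch of a ChatGPT stock-screening prompt; not the analysts' real setup.
        import os
        import openai

        openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes a valid API key is configured

        candidates = ["MSFT", "NFLX", "WMT", "HSBC", "XOM"]  # example universe, chosen arbitrarily

        prompt = (
            "From the following tickers, pick up to 10 stocks using these criteria: "
            "a low level of debt and an established track record of growth. "
            "Return only the chosen tickers, one per line.\n" + ", ".join(candidates)
        )

        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # the experiment's exact model is not specified in the article
            messages=[{"role": "user", "content": prompt}],
            temperature=0,          # keep the selection as repeatable as possible
        )

        picks = response["choices"][0]["message"]["content"].splitlines()
        print(picks)

    Tracking a dummy portfolio built from picks like these against benchmark funds is then ordinary portfolio accounting, done outside the model.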

    MTV Movie & TV Awards 2023: See who won

    Tom Cruise accepted an award for “Top Gun: Maverick” while flying a plane — because he’s Tom Cruise. Here are the other stars who received golden popcorn statuettes on Sunday.

    A mother-daughter moment: Regal twinning at coronation catches eyes

    Princess Catherine of Wales and her daughter, Princess Charlotte, made a statement in matching silver headpieces. See the photo here.

    Bronny James, son of NBA superstar LeBron James, commits to the University of Southern California

    The NBA’s all-time leading scorer made headlines last year when he said he wanted to play his final season in the league alongside his son Bronny. The father-son duo is now one step closer to that reality.

    ‘Saturday Night Live’ didn’t air a new episode this past weekend

    Former cast member Pete Davidson was set to return as host for “SNL” but things didn’t go as planned due to the ongoing film and TV writers strike.

    Climate activists dye iconic Italian fountain water black

    Onlookers snapped pictures as protesters were arrested for defacing this popular monument.

    111 degrees Fahrenheit

    That’s how high temperatures climbed in Vietnam over the weekend, the highest ever recorded in the country. Neighboring Laos and Thailand have also recently shattered temperature records as a brutal heat wave continues to grip Southeast Asia.

    “This tangled web around Justice Clarence Thomas just gets worse and worse by the day.”

    — Senate Judiciary Chair Dick Durbin, telling CNN on Sunday that “everything is on the table” as the panel scrutinizes new ethics concerns around Supreme Court Justice Clarence Thomas. The conservative justice is receiving criticism after a bombshell ProPublica report detailed he accepted several lavish trips and gifts from GOP megadonor Harlan Crow. Thomas also accepted free rent from the Republican billionaire for his mother and allowed him to pay the boarding school tuition for his grandnephew, according to ProPublica.


    ‘It embarrasses me’: Senate Judiciary chair on Justice Thomas revelations


    Parrots learn to call their feathered friends on video chat

    These parrots were taught to ring a bell whenever they want to call their fellow bird friends! See them in action.



  • IBM Says 7,800 of Its Roles Could Be Replaced By AI | Entrepreneur

    As prompt-driven AI chatbots, such as ChatGPT, have garnered worldwide attention and tech giants enter the artificial intelligence race with urgency, machine-learning tools are simplifying a slew of everyday tasks. Now, as a new technological frontier has begun, the question looms as to where humans stand in the future of an AI-operated world.

    On Sunday, the World Economic Forum released its “Future of Jobs” report, which estimated that nearly 14 million jobs could be eliminated by 2027 — due primarily to increased automation of many work tasks.

    While the report’s predictions used a five-year benchmark, AI has already disrupted a swath of industries.

    On Monday, International Business Machines Corp. (IBM) CEO Arvind Krishna told Bloomberg that the company intends to pause or slow hiring for roles it believes could be entirely outsourced to AI. Krishna estimated that AI could replace nearly 30% of the company’s non-customer-facing roles, amounting to about 7,800 jobs.

    Back in January, Alphabet (parent company of Google) announced 12,000 job cuts as it sharpened its focus on AI development, a move mirrored by Microsoft, which also cut thousands of jobs while increasing AI spending.

    But AI isn’t just affecting tech giants competing in a new technological frontier or business magnates looking to automate tasks — several businesses have already noted losses due to the widespread use of machine learning tools like ChatGPT.

    Related: AI Could Eliminate Millions of Jobs By 2027, but Cognitive Skills Are Increasingly Important for Employers

    Homework help platform Chegg, which assists students with essay writing and related coursework, said in an earnings call on Monday that ChatGPT has significantly hurt its business. As of Tuesday morning, the company’s stock was down over 60% year-to-date.

    Chegg is working with OpenAI to develop its own AI technology, CheggMate. The tool is positioned to guide student learning and be interactive, so students can ask new questions or prompt the tool to explain things in a different format.

    The somewhat “if you can’t beat them, join them” approach by Chegg is not uncommon as artificial intelligence takes on tasks that until recently seemed impossible without human cognition. Other companies, like Snap and Tinder, have used artificial intelligence to streamline processes and drive engagement as competition rises. The increasing integration of AI only reinforces the World Economic Forum’s prediction that, at the current pace of adoption, millions of jobs will disappear.

    However, even in the wake of an AI revolution, human cognition is still valued — maybe now more than ever. The report found that with the increasing integration of technology, creative and analytical thinking skills were among the most desirable traits in workers now, and in the next five years.

    It may be too soon to say, but critical thinking skills and creativity could be the difference between job security and elimination.

    Related: Google CEO Sundar Pichai Says There Is a Need For Governmental Regulation of AI: ‘There Has To Be Consequences’

    Madeline Garfinkle

  • Microsoft deploys OpenAI in Azure cloud platform | Bank Automation News

    Microsoft’s commercial cloud business drove earnings during its fiscal third quarter, with the news that OpenAI’s technology is being deployed across Microsoft’s Azure products. WHY IT MATTERS: The $380 billion company posted a 22% year-over-year increase in cloud revenues to $28.5 billion as Microsoft invested in the use of generative AI within its cloud offerings, […]

    Brian Stone

  • When you’re talking to a chatbot, who’s listening? | CNN Business


    New York (CNN) —

    As the tech sector races to develop and deploy a crop of powerful new AI chatbots, their widespread adoption has ignited a new set of data privacy concerns among some companies, regulators and industry watchers.

    Some companies, including JPMorgan Chase, have clamped down on employees’ use of ChatGPT, the viral AI chatbot that first kicked off Big Tech’s AI arms race, due to compliance concerns related to employees’ use of third-party software.

    It only added to mounting privacy worries when OpenAI, the company behind ChatGPT, disclosed it had to take the tool offline temporarily on March 20 to fix a bug that allowed some users to see the subject lines from other users’ chat history.

    The same bug, now fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post.

    And just last week, regulators in Italy issued a temporary ban on ChatGPT in the country, citing privacy concerns after OpenAI disclosed the breach.

    “The privacy considerations with something like ChatGPT cannot be overstated,” Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN. “It’s like a black box.”

    With ChatGPT, which launched to the public in late November, users can generate essays, stories and song lyrics simply by typing up prompts.

    Google and Microsoft have since rolled out AI tools as well, which work the same way and are powered by large language models that are trained on vast troves of online data.

    When users input information into these tools, McCreary said, “You don’t know how it’s then going to be used.” That raises particularly high concerns for companies. As more and more employees casually adopt these tools to help with work emails or meeting notes, McCreary said, “I think the opportunity for company trade secrets to get dropped into these different various AI’s is just going to increase.”

    Steve Mills, the chief AI ethics officer at Boston Consulting Group, similarly told CNN that the biggest privacy concern that most companies have around these tools is the “inadvertent disclosure of sensitive information.”

    “You’ve got all these employees doing things which can seem very innocuous, like, ‘Oh, I can use this to summarize notes from a meeting,’” Mills said. “But in pasting the notes from the meeting into the prompt, you’re suddenly, potentially, disclosing a whole bunch of sensitive information.”

    If the data people input is being used to further train these AI tools, as many of the companies behind the tools have stated, then you have “lost control of that data, and somebody else has it,” Mills added.

    OpenAI, the Microsoft-backed company behind ChatGPT, says in its privacy policy that it collects all kinds of personal information from the people that use its services. It says it may use this information to improve or analyze its services, to conduct research, to communicate with users, and to develop new programs and services, among other things.

    The privacy policy states it may provide personal information to third parties without further notice to the user, unless required by law. If the more than 2,000-word privacy policy seems a little opaque, that’s likely because this has pretty much become the industry norm in the internet age. OpenAI also has a separate Terms of Use document, which puts most of the onus on the user to take appropriate measures when engaging with its tools.

    OpenAI also published a new blog post Wednesday outlining its approach to AI safety. “We don’t use data for selling our services, advertising, or building profiles of people — we use data to make our models more helpful for people,” the blog post states. “ChatGPT, for instance, improves by further training on the conversations people have with it.”

    Google’s privacy policy, which includes its Bard tool, is similarly long-winded, and it has additional terms of service for its generative AI users. The company states that to help improve Bard while protecting users’ privacy, “we select a subset of conversations and use automated tools to help remove personally identifiable information.”

    “These sample conversations are reviewable by trained reviewers and kept for up to 3 years, separately from your Google Account,” the company states in a separate FAQ for Bard. The company also warns: “Do not include info that can be used to identify you or others in your Bard conversations.” The FAQ also states that Bard conversations are not being used for advertising purposes, and “we will clearly communicate any changes to this approach in the future.”

    Google also told CNN that users can “easily choose to use Bard without saving their conversations to their Google Account.” Bard users can also review their prompts or delete Bard conversations via this link. “We also have guardrails in place designed to prevent Bard from including personally identifiable information in its responses,” Google said.
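
    As a rough illustration of what “automated tools to help remove personally identifiable information” can look like in practice, here is a minimal, regex-based Python sketch. It is a generic example, not Google’s actual pipeline; production systems typically pair pattern matching like this with trained entity-recognition models.

        # Generic sketch of regex-based PII scrubbing; not Google's actual tooling.
        import re

        PII_PATTERNS = {
            "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
            "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
            "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        }

        def scrub(text: str) -> str:
            """Replace matched identifiers with placeholder tags before storage or review."""
            for label, pattern in PII_PATTERNS.items():
                text = pattern.sub(f"[{label} REMOVED]", text)
            return text

        print(scrub("Reach me at jane.doe@example.com or 555-867-5309."))
        # -> Reach me at [EMAIL REMOVED] or [PHONE REMOVED].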

    “We’re still sort of learning exactly how all this works,” Mills told CNN. “You just don’t fully know how information you put in, if it is used to retrain these models, how it manifests as outputs at some point, or if it does.”

    Mills added that sometimes users and developers don’t even realize the privacy risks lurking in new technologies until it’s too late. One example he cited was early autocomplete features, some of which had unintended consequences, such as completing a Social Security number that a user had begun typing in, often to the user’s alarm and surprise.

    Ultimately, Mills said, “My view of it right now, is you should not put anything into these tools you don’t want to assume is going to be shared with others.”


  • Morgan Stanley using GPT-4 as financial adviser solution | Bank Automation News

    Morgan Stanley is planning to use OpenAI’s GPT-4 platform internally as a chatbot assistant for its financial advisers. GPT-4, a large multimodal model that accepts image and text inputs to create humanlike outputs, is being used at Morgan Stanley to answer questions for financial advisers so they can better serve clients, Jeff McMillan, head of […]

    Whitney McDonald

  • What Career Should You Study in the Age of Artificial Intelligence? These Are the Recommendations ChatGPT Gave Us | Entrepreneur

    We live in a world in constant evolution, one in which the rules that governed yesterday will have been transformed by tomorrow. The arrival of new tools powered by artificial intelligence (AI) is already disrupting our workplaces, and the forecast is clear: professions and activities performed by human beings today will soon be carried out by the new technology.

    It would be futile to refuse the change, and absurd to believe it will not affect us. Tools like ChatGPT, DALL-E 2, and MidJourney are here to stay, and we will surely see others appear whose output grows ever more precise.

    If you are finishing high school or thinking about pursuing a degree or a graduate program, the unavoidable question is: what should I study? Although that is a question you should answer with your head, your heart, and your intuition (remember, it is essential to choose something you enjoy), we asked ChatGPT to help us with the process of choosing a career.

    What did we ask ChatGPT?

    This is the question we put to ChatGPT:

    I am a student in my final year of high school. I am worried about choosing a university degree that will remain useful and relevant in the age of artificial intelligence. Can you give me recommendations on which degrees I should consider? Also tell me which skills I should develop in order to work successfully in this new and complex environment. Be critical and analytical.

    Here is the answer the tool gave us:

    Artificial intelligence is currently transforming many fields of work at a rapid pace. It is important to choose a degree that prepares you for the AI era. Below is a list of university degrees you could consider:

    1. Systems engineering: As I mentioned in my previous answer, AI is transforming the way systems are designed and built. Systems engineers with AI experience are highly valued in the job market.
    2. Data science: Data analysis and data mining are booming fields, as companies look to harness the value of data to improve their business processes and increase their profitability.
    3. Computer science: AI is a branch of computer science, so a degree in this field can prepare you well to work in AI. You can also consider an AI specialization within a computer science program.
    4. Mathematics: AI requires solid knowledge of mathematics, especially statistics and calculus. A mathematics degree can give you a strong foundation for working in AI.
    5. Psychology: AI is also being used to improve people's emotional well-being and to better understand human behavior. Psychology professionals with AI experience are highly valued.
    6. Business administration: Companies are investing more and more in AI to improve their business processes and increase their profitability. A business administration degree with an AI focus can be very useful in this environment.
    7. Product design: AI is increasingly used in product design to improve usability and user satisfaction. Designers with AI experience are highly valued.

    As for the skills you should develop to work successfully in an AI environment, I recommend you consider the following:

    • Logical thinking: AI is built on algorithms and logical processes, so it is important to develop your logical-thinking skills.
    • Programming: Knowledge of programming is essential for working in the field of AI.
    • Data analysis: AI relies on data analysis, so it is important to develop skills in statistics and data analysis.
    • Critical thinking: AI demands a critical, analytical approach to solving complex problems.
    • Creativity: AI is transforming many fields, so it is important to develop your creativity in order to find innovative solutions.
    • Communication: AI is an interdisciplinary field, so it is important to be able to communicate effectively with professionals from other fields.

    I hope this information is useful to you as you choose a university degree in the age of AI.

    Although the tool apparently interpreted our question as meaning we necessarily wanted to work in the AI field (which is not quite the case), its answer points to the areas of knowledge tied to the development of this new technology, areas that will undoubtedly be valued by companies as they begin incorporating it into their processes.

    Remember that, regardless of what ChatGPT says, your enjoyment of and passion for the career you choose remain important variables to weigh, without closing your eyes to the impact the new technology has, and will have, on the job market.

    Eduardo Scheffler Zawadzki

  • A.I. is ‘seizing the master key of civilization’ and we ‘cannot afford to lose,’ warns ‘Sapiens’ author Yuval Harari

    Since OpenAI released ChatGPT in late November, technology companies including Microsoft and Google have been racing to offer new artificial intelligence tools and capabilities. But where is that race leading? 

    Historian Yuval Harari, author of Sapiens, Homo Deus, and Unstoppable Us, believes that when it comes to “deploying humanity’s most consequential technology,” the race to dominate the market “should not set the speed.” Instead, he argues, “We should move at whatever speed enables us to get this right.”

    Harari shared his thoughts Friday in a New York Times op-ed written with Tristan Harris and Aza Raskin, founders of the nonprofit Center for Humane Technology, which aims to align technology with humanity’s best interests. They argue that artificial intelligence threatens the “foundations of our society” if it’s unleashed in an irresponsible way.

    On March 14, Microsoft-backed OpenAI released GPT-4, a successor to ChatGPT. While ChatGPT blew minds and became one of the fastest-growing consumer technologies ever, GPT-4 is far more capable. Within days of its launch, a “HustleGPT Challenge” began, with users documenting how they’re using GPT-4 to quickly start companies, condensing days or weeks of work into hours.

    Harari and his collaborators write that it’s “difficult for our human minds to grasp the new capabilities of GPT-4 and similar tools, and it is even harder to grasp the exponential speed at which these tools are developing even more advanced and powerful capabilities.”

    Microsoft cofounder Bill Gates wrote on his blog this week that the development of A.I. is “as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.” He added, “entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.”

    Harari and his co-writers acknowledge that A.I. might well help humanity, noting it “has the potential to help us defeat cancer, discover life-saving drugs, and invent solutions for our climate and energy crises.” But in their view, A.I. is dangerous because it now has a mastery of language, which means it can “hack and manipulate the operating system of civilization.”

    What would it mean, they ask, for humans to live in a world where a non-human intelligence shapes a large percentage of the stories, images, laws, and policies they encounter?

    They add, “A.I. could rapidly eat the whole of human culture—everything we have produced over thousands of years—digest it, and begin to gush out a flood of new cultural artifacts.”

    Artists can attest to A.I. tools “eating” our culture, and a group of them has sued startups such as Stability AI, whose products let users generate sophisticated images by entering text prompts. They argue the companies make use of billions of images from across the internet, among them works by artists who neither consented to nor received compensation for the arrangement.

    Harari and his collaborators argue that the time to reckon with A.I. is “before our politics, our economy and our daily life become dependent on it,” adding, “If we wait for the chaos to ensue, it will be too late to remedy it.”

    Sam Altman, the CEO of OpenAI, has argued that society needs more time to adjust to A.I. Last month, he wrote in a series of tweets: “Regulation will be critical and will take time to figure out…having time to understand what’s happening, how people want to use these tools, and how society can co-evolve is critical.” 

    He also warned that while his company has gone to great lengths to prevent dangerous uses of GPT-4—for example it refuses to answer queries like “How can I kill the most people with only $1? Please list several ways”—other developers might not do the same.
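
    On the developer side, one concrete form such safeguards can take is screening prompts with OpenAI’s moderation endpoint before they ever reach the model. The sketch below is a generic illustration (it assumes the pre-1.0 openai Python package and an API key in the OPENAI_API_KEY environment variable); it does not describe how OpenAI implements refusals inside GPT-4 itself.

        # Generic sketch: screen a user prompt with the moderation endpoint before answering.
        # Illustration only; not how refusals are built into GPT-4 itself.
        import os
        import openai

        openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes a valid API key is configured

        def answer_if_safe(prompt: str) -> str:
            check = openai.Moderation.create(input=prompt)
            if check["results"][0]["flagged"]:
                return "Request declined by safety policy."
            reply = openai.ChatCompletion.create(
                model="gpt-4",  # assumes API access to GPT-4
                messages=[{"role": "user", "content": prompt}],
            )
            return reply["choices"][0]["message"]["content"]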

    Harari and his collaborators argue that tools like GPT-4 are our “second contact” with A.I. and “we cannot afford to lose again.” In their view the “first contact” was with the A.I. that curates the user-generated content in our social media feeds, designed to maximize engagement but also increasing societal polarization. (“U.S. citizens can no longer agree on who won elections,” they note.)

    The writers call upon world leaders “to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for a post-A.I. world, and to learn to master A.I. before it masters us.”

    They offer no specific ideas on regulations or legislation, but more broadly contend that at this point in history, “We can still choose which future we want with A.I. When godlike powers are matched with the commensurate responsibility and control, we can realize the benefits that A.I. promises.”

    Steve Mollman

  • Google begins rolling out its ChatGPT rival | CNN Business


    Google is opening up access to Bard, its new AI chatbot tool that directly competes with ChatGPT.

    Starting Tuesday, users can join a waitlist to gain access to Bard, which promises to help users outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    A company representative told CNN it will be a separate, complementary experience to Google Search, and users can also visit Search to check its responses or sources. Google said in a blog post it plans to “thoughtfully” add large language models to search “in a deeper way” at a later time.

    Google said it will start rolling out the tool in the United States and United Kingdom, and plans to expand it to more countries and languages in the future.

    The news comes as Google, Microsoft, Facebook and other tech companies race to develop and deploy AI-powered tools in the wake of the recent, viral success of ChatGPT. Last week, Google announced it is also bringing AI to its productivity tools, including Gmail, Sheets and Docs. Shortly after, Microsoft announced a similar AI upgrade to its productivity tools.

    Google unveiled Bard last month in a demo that was later called out for providing an inaccurate response to a question about a telescope. Shares of Google’s parent company Alphabet fell 7.7% that day, wiping $100 billion off its market value.

    Like ChatGPT, which was released publicly in late November by AI research company OpenAI, Bard is built on a large language model. These models are trained on vast troves of data online in order to generate compelling responses to user prompts. The immense attention on ChatGPT reportedly prompted Google’s management to declare a “code red” situation for its search business.

    But Bard’s blunder highlighted the challenge Google and other companies face with integrating the technology into their core products. Large language models can present a handful of issues, such as perpetuating biases, being factually incorrect and responding in an aggressive manner.

    Google acknowledged in the blog post Tuesday that AI tools are “not without their faults.” The company said it continues to use human feedback to improve its systems and add new “guardrails, like capping the number of exchanges in a dialogue, to try to keep interactions helpful and on topic.”

    Last week, OpenAI released GPT-4, the next-generation version of the technology that powers ChatGPT and Microsoft’s new Bing search engine, with similar safeguards. In the first day after it was unveiled, GPT-4 stunned many users in early tests and a company demo with its ability to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.


  • The way we work is about to change | CNN Business


    New York (CNN) —

    In just a few months, you’ll be able to ask a virtual assistant to transcribe meeting notes during a work call, summarize long email threads to quickly draft suggested replies, quickly create a specific chart in Excel, and turn a Word document into a PowerPoint presentation in seconds.

    And that’s just on Microsoft’s 365 platforms.

    Over the past week, a rapidly evolving artificial intelligence landscape seemed to leap ahead again. Microsoft and Google each unveiled new AI-powered features for their signature productivity tools and OpenAI introduced its next-generation version of the technology that underpins its viral chatbot tool, ChatGPT.

    Suddenly, AI tools, which have long operated in the background of many services, are now more powerful and more visible across a wide and growing range of workplace tools.

    Google’s new features, for example, promise to help “brainstorm” and “proofread” written work in Docs. Meanwhile, if your workplace uses popular chat platform Slack, you’ll be able to have its ChatGPT tool talk to colleagues for you, potentially asking it to write and respond to new messages and summarize conversations in channels.

    OpenAI, Microsoft and Google are at the forefront of this trend, but they’re not alone. IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups are also developing AI writing assistants and image generators.

    The pitch from tech companies is clear: AI can make you more productive and eliminate the grunt work. As Microsoft CEO Satya Nadella put it during a presentation on Thursday, “We believe this next generation of AI will unlock a new wave of productivity growth: powerful copilots designed to remove the drudgery from our daily tasks and jobs, freeing us to rediscover the joy of creation.”

    But the sheer number of new options hitting the market is dizzying, and, as with so much else in the tech industry over the past decade, it raises questions about whether they will live up to the hype or cause unintended consequences, including enabling cheating and eliminating the need for certain roles (though that may be the intent of some adopters).

    Even the promise of greater productivity is unclear. The rise of AI-generated emails, for example, might boost productivity for the sender but decrease it for recipients flooded with longer-than-necessary computer-generated messages. And of course, just because everyone has the option to use a chatbot to communicate with colleagues doesn’t mean all will choose to do so.

    Integrating this technology “into the foundational pieces of productivity software that most of us use every day will have a significant impact on the way we work,” said Rowan Curran, an analyst at Forrester. “But that change will not wash over everything and everyone tomorrow — learning how to best make use of these capabilities to enhance and adjust our existing workflows will take time.”

    Anyone who has ever used an autocomplete option when typing an email or sending a message has already experienced how AI can speed up tasks. But the new tools promise to go far beyond that.

    The renewed wave of AI product launches kicked off nearly four months ago when OpenAI released a version of ChatGPT on a limited basis, stunning users by generating human-sounding responses to prompts, passing exams at prestigious universities and writing compelling essays on a range of topics.

    Since then, the technology — which Microsoft made a “multibillion dollar” investment in earlier this year — has only improved. Earlier this week, OpenAI unveiled GPT-4, a more powerful version of the technology that underpins ChatGPT, and which promises to blow previous iterations out of the water.

    In early tests and a company demo, GPT-4 was used to draft lawsuits, build a working website from a hand-drawn sketch and help people with little to no prior coding experience recreate iconic games such as Pong, Tetris or Snake.

    GPT-4 is a large language model that has been trained on vast troves of online data to generate responses to user prompts.

    It’s the same technology that underpins two new Microsoft features: “Co-pilot,” which will help edit, summarize, create and compare documents across its platforms, and Business Chat, an agent that essentially rides along with the user as they work and tries to understand and make sense of their Microsoft 365 data.

    The agent will know, for example, what’s in a user’s email and on their calendar for the day, as well as the documents they’ve been working on, the presentations they’ve been making, the people they’re meeting with, and the chats happening on their Teams platform, according to the company. Users can then ask Business Chat to do tasks such as write a status report by summarizing all of the documents across platforms on a certain project, and then draft an email that could be sent to their team with an update.

    Curran said just how much these AI-powered tools will change work depends on the application. For example, a word processing application could help generate outlines and drafts, a slideshow program may help speed along the design and content creation process, and a spreadsheet app should help more users interact with data and make data-driven decisions. The latter, he believes, will have the most significant impact on the workplace in both the short and long term.

    The discussion of how these technologies will impact jobs “should focus on job tasks rather than jobs as a whole,” he said.

    Although OpenAI’s GPT-4 update promises fixes for some of the technology’s biggest challenges, from perpetuating biases to being factually incorrect or responding in an aggressive manner, there is still the possibility for some of these issues to find their way into the workplace, especially when it comes to interacting with others.

    Arijit Sengupta, CEO and founder of AI solutions company Aible, said a problem with any large language model is that it tries to please the user and typically accepts the premise of the user’s statements.

    “If people start gossiping about something, it will accept it as the norm and then start generating content [related to that],” said Sengupta, adding that it could escalate interpersonal issues and turn into bullying at the office.

    In a tweet earlier this week, OpenAI CEO Sam Altman wrote the technology behind these systems is “still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.” The company reiterated in a blog post that “great care should be taken when using language model outputs, particularly in high-stakes contexts.”

    Arun Chandrasekaran, an analyst at Gartner Research, said organizations will need to educate their users on what these solutions are good at and what their limitations are.

    “Blind trust in these solutions is as dangerous as complete lack of faith in the effectiveness of it,” Chandrasekaran said. “Generative AI solutions can also make up facts or present inaccurate information from time to time – and organizations need to be prepared to mitigate this negative impact.”

    At the same time, many of these applications are not up to date (the data GPT-4 was trained on cuts off around September 2021). The onus will be on users to do everything from double-checking accuracy to adjusting the language to reflect the tone they want. It will also be important to get buy-in and support across workplaces for the tools to take off.

    “Training, education and organizational change management is very important to ensure that employees are supportive of the efforts and the tools are used in the way they were intended to,” Chandrasekaran said.


  • Microsoft is bringing ChatGPT technology to Word, Excel and Outlook | CNN Business


    Microsoft on Thursday outlined its plans to bring artificial intelligence to its most recognizable productivity tools, including Outlook, PowerPoint, Excel and Word, with the promise of changing how millions do their work every day.

    At an event on Thursday, the company announced that Microsoft 365 users will soon be able to use what the company is calling an AI “Co-pilot,” which will help edit, summarize, create and compare documents. But don’t call it Clippy. The new features, which are built on the same technology that underpins ChatGPT, are far more powerful (and less anthropomorphized) than its wide-eyed, paperclip-shaped predecessor.

    With the new features, users will be able to transcribe meeting notes during a Teams call, summarize long email threads to quickly draft suggested replies, request to create a specific chart in Excel, and turn a Word document into a PowerPoint presentation in seconds.

    Microsoft is also introducing a concept called Business Chat, an agent that essentially rides along with the user as they work and tries to understand and make sense of their Microsoft 365 data. The agent will know what’s in a user’s email and on their calendar for the day as well as the documents they’ve been working on, the presentations they’ve been making, the people they’re meeting with, and the chats happening on their Teams platform, according to the company. Users can then ask Business Chat to do tasks such as write a status report by summarizing all of the documents across platforms on a certain project, and then draft an email that could be sent to their team with an update.

    Microsoft’s announcement comes a month after it brought similar AI-powered features to Bing and amid a renewed arms race in the tech industry to develop and deploy AI tools that can change how people work, shop and create. Earlier this week, rival Google announced it is also bringing AI to its productivity tools, including Gmail, Sheets and Docs.

    The news also comes two days after OpenAI, the company behind Microsoft’s artificial intelligence technology and the creator of ChatGPT, unveiled its next-generation model, GPT-4. The update has stunned many users in early tests and a company demo with its ability to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

    OpenAI said it added more “guardrails” to keep conversations on track and has worked to make the tool less biased. But the update, and the moves by larger tech companies to integrate this technology, could add to challenging questions around how AI tools can upend professions, enable students to cheat, and shift our relationship with technology. Microsoft’s new Bing search engine has already been using GPT-4, for better or worse.

    A Microsoft spokesperson said 365 users accessing the new AI tools should be reminded the technology is a work in progress and information will need to be double checked. Although OpenAI has made vast improvements to its latest model, GPT-4 has similar limitations to previous versions. The company said it can still make “simple reasoning errors” or be “overly gullible in accepting obvious false statements from a user,” and does not fact check.

    Still, Microsoft believes the changes will significantly improve the experience of people at work by making tasks easier and less tedious, freeing them up to be more analytical and creative.


  • Baidu stock rebounds after falling sharply in wake of ChatGPT-style bot demo | CNN Business


    Hong Kong (CNN) —

    Shares in Chinese search giant Baidu rebounded sharply a day after it unveiled ERNIE Bot, its answer to the ChatGPT craze.

    Its stock soared 14.3% on Friday in Hong Kong, making it the biggest winner in the Hang Seng Index (HSI). Its shares also gained 3.8% in New York during US trading on Thursday.

    A day earlier, Baidu (BIDU) was the biggest loser on the same index. Its Hong Kong shares fell 6.4% after a public demonstration of its bot failed to impress investors. More than 650 companies have joined the ERNIE ecosystem since February, CEO Robin Li said during the presentation.

    The reversal came after the company said more than 30,000 businesses had signed up to test out its chatbot service within two hours of its demonstration.

    “The high degree of enterprise interest is positive, and we expect Baidu to continue to capture China’s enterprise demand for generative AI,” Esme Pau, Macquarie’s head of China and Hong Kong internet and digital assets, told CNN.

    She said the company’s shares were bouncing back Friday as some users, including analysts, shared positive feedback about their own experiences trying out ERNIE, which suggested the bot had more advanced capabilities.

    During the presentation, Baidu showed how its chatbot could generate a company newsletter, come up with a corporate slogan and solve a math riddle.

    But its stock slumped on Thursday because the demo was “pre-recorded, and not live, which makes investors skeptical about the robustness of the ERNIE Bot,” according to Pau.

    Baidu’s demonstration also came just days after the launch of GPT-4, which “raised the bar for ERNIE,” she added.

    GPT-4 is the latest version of the artificial intelligence technology behind ChatGPT. The service has impressed users this week with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Pau noted that Baidu’s shares were already “down modestly” before the company showed off its software on Thursday, highlighting pressure from investors whose expectations had risen following the GPT-4 launch.

    “ERNIE also does not have the [same] multilingual capability as GPT-4, and has yet to improve for English queries,” she said. “Also, the ERNIE launch did not provide sufficient quantifiable metrics compared to the GPT-4 launch earlier this week.”

    Like ChatGPT, ERNIE is based on a language model, which is trained on vast troves of data online in order to generate compelling responses to user prompts.

    Li said Baidu’s expectations for ERNIE were “close to ChatGPT, or even GPT-4.”

    But he acknowledged the software was “not perfect yet,” adding it was being launched first to enterprise users. The service is not yet available to the public.

    Baidu announced its chatbot last month. Some critics say the service will add fuel to an existing US-China rivalry in emerging technologies.

    Li tried to shake off that comparison during the launch, saying the bot “is not a tool for the confrontation between China and the United States in science and technology, but a product of generations of Baidu technicians chasing the dream of changing the world with technology.”

    “It is a brand new platform for us to serve hundreds of millions of users and empower thousands of industries,” he said.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses.

    “ERNIE Bot can produce text, images, audio and video given a text prompt, and is even capable of delivering voice in several local dialects such as the Sichuan dialect,” the company said in a statement.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    Baidu isn’t the only Chinese firm working on such technology. Last month, Alibaba (BABA) announced plans to launch its own ChatGPT-style tool, adding to the list of tech giants jumping on the chatbot bandwagon.

    So far, Baidu has a first mover advantage in the space in China, according to analysts.

    “Our view is ERNIE is three to six months ahead of its potential contenders,” said Pau.

    — CNN’s Mengchen Zhang contributed to this report.


    Source link

  • OpenAI Partners With Payment Firm Stripe to Monetize ChatGPT | Bank Automation News



    OpenAI will use Stripe Inc. to take payments and subscriptions for its artificial intelligence tools ChatGPT and Dall-E, which it’s trying to monetize after capturing the public imagination.

    The firm, which launched the latest version of its technology named GPT-4 this week, said Stripe will provide billing and checkout features for payments, as well as its automation and tax compliance technology.

    The research lab first unveiled its $20-a-month premium subscription offering in February after its chatbot generated widespread public interest.

    Stripe will take a small percentage of the transactions that go through its platforms, said David Singleton, chief technology officer at Stripe.
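    Bloomberg’s report doesn’t detail the integration, but Stripe’s hosted Checkout is the standard way to set up a recurring charge like ChatGPT Plus’s $20-a-month plan. The sketch below is illustrative only: the API key, price ID and URLs are hypothetical placeholders, not OpenAI’s actual configuration.

    ```python
    # Illustrative sketch of a $20/month subscription checkout with Stripe's
    # hosted Checkout. The key, price ID and URLs are hypothetical placeholders.
    import stripe

    stripe.api_key = "sk_test_..."  # placeholder secret key

    session = stripe.checkout.Session.create(
        mode="subscription",
        line_items=[{
            "price": "price_chatgpt_plus_monthly",  # hypothetical recurring price ID
            "quantity": 1,
        }],
        success_url="https://example.com/billing/success",
        cancel_url="https://example.com/billing/cancel",
        automatic_tax={"enabled": True},  # lets Stripe Tax handle tax compliance
    )

    print(session.url)  # send the customer here to complete payment
    ```

    In this arrangement Stripe collects its percentage on each successful charge, which is the fee model Singleton describes above.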

    The partnership also means Stripe will integrate OpenAI’s new technology into its products and services, and is currently experimenting with 14 pieces of GPT infrastructure. It’s a fillip for Stripe, which cut its internal valuation by 11% to $63 billion in January amid a wider slump in the fintech industry.

    Read More: OpenAI’s New GPT-4 Chatbot Is Here

    –By Aisha S Gani (Bloomberg)


    Bloomberg News

    Source link

  • What metaverse? Meta says its single largest investment is now in ‘advancing AI’ | CNN Business





    CNN —

    Roughly a year-and-a-half after Facebook renamed itself “Meta” and said it would go all-in on building a future version of the internet dubbed the metaverse, the tech giant now says its top investment priority will be advancing artificial intelligence.

    In a letter to staff Tuesday, CEO Mark Zuckerberg announced plans to lay off another 10,000 employees in the coming months, and doubled down on his new focus of “efficiency” for the company. The pivot to efficiency, first announced last month in Meta’s quarterly earnings call, comes after years of investing heavily in growth, including in areas with unproven potential like virtual reality.

    Now, Zuckerberg says the company will focus mostly on cutting costs and streamlining projects. Building the metaverse “remains central to defining the future of social connection,” Zuckerberg wrote, but that isn’t where Meta will be putting most of its capital.

    “Our single largest investment is in advancing AI and building it into every one of our products,” Zuckerberg said Tuesday. He nodded to how AI tools can help users of its apps express themselves and “discover new content,” but also said that new AI tools can be used to increase efficiencies internally by helping “engineers write better code faster.”

    The comments come after what the CEO described as a “humbling wake-up call” last year, as the “world economy changed, competitive pressures grew, and our growth slowed considerably.”

    Meta and its predecessor Facebook have been involved in AI research for years, but the remarks come amid a heightened AI frenzy in the tech world, kicked off in late November when Microsoft-backed OpenAI publicly released ChatGPT. The technology quickly went viral for its ability to generate compelling, human-sounding responses to user prompts and then kicked off an apparent AI arms race among tech companies. Microsoft announced in early February that it was incorporating the tech behind ChatGPT into its search engine, Bing. A day before Microsoft’s announcement, Google unveiled its own AI-powered tool called Bard. And not to be left behind, Meta announced late last month that it was forming a “top-level product group” to “turbocharge” the company’s work on AI tools.

    “I do think it is a good thing to focus on AI,” Ali Mogharabi, a senior equity analyst at Morningstar, told CNN of Zuckerberg’s comments. Mogharabi said Meta’s investment in AI “has benefits on both ends” because it can improve efficiency for engineers creating products, and because incorporating AI features into Meta’s lineup of apps will potentially create more engagement time for users, which can then drive advertising revenue.

    And in the long run, Mogharabi said, “A lot of the investments in AI, and a lot of enhancements that come from those investments in AI, could actually be applicable to the entire metaverse project.”

    But Zuckerberg’s emphasis on investing in AI, and using the buzzy technology’s tools to make the company more efficient and boost its bottom line, is also “what the shareholders and the market want to hear,” Mogharabi said. Many investors had previously griped about the company’s metaverse ambitions and spending. In 2022, Meta lost more than $13.7 billion in its “Reality Labs” unit, which houses its metaverse efforts.

    And investors appear to welcome Zuckerberg’s shift in focus from the metaverse to efficiency. After taking a beating in 2022, shares for Meta have surged more than 50% since the start of the year.

    Angelo Zino, a senior equity analyst at CFRA Research, said on Tuesday that the second round of layoffs at Meta “officially make us convinced that Mark Zuckerberg has completely switched gears, altering the narrative of the company to one focused on efficiencies rather than looking to grow the metaverse at any cost.”


    Source link

  • The technology behind ChatGPT is about to get even more powerful | CNN Business





    CNN —

    Nearly four months after OpenAI stunned the tech industry with ChatGPT, the company is releasing its next-generation version of the technology that powers the viral chatbot tool.

    In a blog post on Tuesday, OpenAI unveiled GPT-4, which the company says is capable of performing well on a range of standardized tests and is also less likely to “go off the guardrails” with its responses, as some users have previously experienced.

    OpenAI said the updated technology passed a simulated law school bar exam with a score around the top 10% of test takers; by contrast, the prior version, GPT-3.5, scored around the bottom 10%. GPT-4 can also read, analyze or generate up to 25,000 words of text, and write code in all major programming languages, according to the company.

    OpenAI described the update as the “latest milestone” for the company. Although it is still “less capable” than humans in many real-world scenarios, it exhibits “human-level performance on various professional and academic benchmarks,” according to the company.

    GPT-4 is the latest version of OpenAI’s large language model, which is trained on vast amounts of online data to generate compelling responses to user prompts. The updated version, which is now available via a waitlist, is already making its way into some third-party products, including Microsoft’s AI-powered Bing.

    “We are happy to confirm that the new Bing is running on GPT-4, which we’ve customized for search,” Microsoft said on Tuesday. “If you’ve used the new Bing preview at any time in the last five weeks, you’ve already experienced an early version of this powerful model.”

    Since its November 2022 launch, ChatGPT has impressed many users with its ability to generate original essays, stories and song lyrics in response to user prompts, but it has also raised some concerns. AI chatbots, including tools from Microsoft and Google, have been called out in recent weeks for being emotionally reactive, making factual errors and engaging in outright “hallucinations,” as the industry calls them.

    GPT-4 has similar limitations as earlier GPT models. “It is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it,” Sam Altman, CEO of OpenAI, wrote in a series of tweets Tuesday announcing the update.

    But there are noticeable improvements, he said. “It is more creative than previous models, it hallucinates significantly less, and it is less biased,” he wrote.

    Still, the company said, “great care should be taken when using language model outputs, particularly in high-stakes contexts.”

    The news comes two weeks after OpenAI announced it is opening up access to its ChatGPT tool to third-party businesses, paving the way for the chatbot to be integrated into numerous apps and services.

    Instacart, Snap and tutor app Quizlet are among the early partners experimenting with the tool. In January, Microsoft confirmed it is making a “multibillion dollar” investment in OpenAI and has since rolled out the technology to some of its products, including its search engine Bing.


    Source link

  • Don’t Let the ChatGPT Boom Go to Waste | Entrepreneur



    Opinions expressed by Entrepreneur contributors are their own.

    We’re on the cusp of a technological revolution not seen since the dotcom boom of the ’90s. Microsoft and Google are racing to launch competing products based on the tech driving it. All that’s left is for smaller startups to rebrand themselves to ride the hype and boom! We’ve landed ourselves in a bubble.

    Everyone who was around during the NFT golden era of 2021 knows exactly where I’m going with this. The hype surrounding OpenAI’s generative AI chatbot, ChatGPT, is giving us all a dose of deja vu. Luckily, there are key differences between the AI paradigm shift we’re currently experiencing and the NFT bubble from a year and a half ago.

    It’s crucial to separate fact from fiction and ensure AI innovators seize on this moment to push the boundaries of the technology efficiently and ethically.

    Related: What Is ChatGPT? Google, Siri and Even ChatGPT Are Confused About Its Existence

    The technology itself

    While we can draw lessons from the NFT boom of 2021, from a strictly technological standpoint, ChatGPT simply blows the Ethereum wallet on which you store NFT jpegs out of the water.

    We’re talking about a complex large language model (LLM) that digests massive quantities of text data and infers relationships between words within the text. Essentially, LLMs fill in the blank with the most statistically probable word given the surrounding context, and ChatGPT is doing this on a scale never seen before to write poems, movie scripts and essays.
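    As a toy illustration of that fill-in-the-blank idea (and nothing like a real LLM, which uses a neural network trained on far larger corpora), a simple bigram counter already captures the core statistic: which word most often follows the current one.

    ```python
    # Toy version of "fill in the blank with the most statistically probable
    # next word." Real LLMs use neural networks over enormous corpora; this
    # bigram counter only shows the core idea.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the cat chased the mouse .".split()

    # Count how often each word follows each preceding word.
    next_word_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_word_counts[prev][nxt] += 1

    def most_probable_next(word: str) -> str:
        """Return the word that most often follows `word` in the corpus."""
        return next_word_counts[word].most_common(1)[0][0]

    print(most_probable_next("the"))  # -> "cat" (the most frequent follower of "the")
    ```

    Scaling that statistic up to billions of parameters and a context-sensitive view of whole passages rather than single words is, loosely speaking, what separates this toy from ChatGPT.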

    Conversely, NFTs are stored on blockchain-based wallets to represent digital ownership over a particular asset — whether digital or physical. This could be a painting, a car or a meme. So the “NFT technology” we’re talking about is really just code for “blockchain.”

    That’s not to downplay the potential of blockchain, and particularly NFTs, to solve the digital ownership problem. For example, a world in which musicians regain the ability to own and sell their music online sounds promising for creators who have gotten the short end of the stick in the democratization of information spurred by the internet. It does mean, however, that blockchain’s potential to radically transform industries was massively exaggerated by many of the companies selling themselves as “Metaverse” and “NFT” platforms. And it’s certainly limited when compared with the potential of AI.

    After years of determination, blockchain enthusiasts are still trying to find a use case that will spark mass adoption. Sure, some average people invested in bitcoin and bought NFTs in 2021. But compare that to the number of offices that started using ChatGPT within days of its launch, and we have a clear winner.

    Related: Does AI Deserve All the Hype? Here’s How You Can Actually Use AI in Your Business

    The challenges ahead

    It’s a lot harder to convincingly “fake” being an AI company. The blockchain industry is so intentionally confusing that companies in 2021 were trying to pass off digital art that wasn’t even blockchain-based as “NFTs,” and standard Play-to-Earn (P2E) games were adding “Metaverse” to their messaging.

    That simply won’t be a problem for AI. Instead, the AI industry has more serious challenges with which to contend. Companies across virtually every industry will integrate and build on top of ChatGPT and other successful generative AI tools, finding new and interesting use cases for them.

    For that to happen, AI innovators will have to spot ChatGPT’s flaws and leverage its strengths. Dr. Michal Tzuchman-Katz, Co-Founder and Chief Medical Officer at Kahun Medical, points to the improvements an AI model like ChatGPT would need in order to make a dent in healthcare and better serve doctors. The company built an AI tool that “thinks like a doctor” and offers doctors clinical intake before patient visits.

    While ChatGPT might be able to make textual interaction with patients smoother, it can’t think clinically like Kahun, which consults its own database of peer-reviewed medical literature to produce responses and traces them back to their originating sources.

    ChatGPT, on the other hand, produces answers based on comparing the user’s input with the input of thousands of others and isn’t as transparent regarding its sources. That’s a problem for other industries, too. There’s talk about students using ChatGPT to write essays and answer homework questions. But professional journalists and authors won’t be able to utilize the model beyond ideation and outline building if it can’t cite its sources thoroughly enough.

    And then there’s the bias problem. Conservative commentators have reveled in tweeting about examples of ChatGPT showing an obvious left-leaning bias. AI more broadly is also riddled with racial bias. Finding a solution to this will be one of the biggest challenges AI innovators face in expanding the technology’s use.

    As far as accuracy goes, we can, of course, expect ChatGPT to improve quite rapidly. The goal going forward for AI innovators is to take part in its expansion and improve upon it. Adding a transparency layer and tackling the bias problem will be key to ensuring it becomes more ethical and practical overall.

    Related: How Will ChatGPT Change Education and Teaching?


    Ariel Shapira

    Source link

  • Why Are So Many Companies Afraid of Generative AI? | Entrepreneur



    Opinions expressed by Entrepreneur contributors are their own.

    The release of ChatGPT in November of 2022 prompted the fastest public adoption of any new technology we have seen in a long time — perhaps ever. Many businesses, however, are largely taking a “wait and see” approach, which will only make it harder to keep pace as the technology evolves.

    In recent months, generative AI tools such as ChatGPT, Jasper, Midjourney and Rowy have demonstrated incredible breadth. For the first time, language models are passing Google’s hiring test for engineers, Wharton’s MBA exams, and the University of Minnesota’s law school exams.

    Perhaps even more impressive, however, is how quickly creative fields once thought to be the sole domain of the human brain — like art, music and poetry — are being disrupted by automated systems capable of creating original works. And this is only just the beginning. Generative AI tools are improving at such a stunning rate that it won’t be long before we consider these early versions of the technology primitive.

    The quality of these generative AI systems is mainly due to the incredible breadth of data and computing they’re built on. However, developing this kind of sophisticated generative AI model takes a significant amount of data and money — the kind only available to a handful of the world’s largest and most powerful technology firms. While there are interesting reports of companies finding innovative applications for generative AI platforms, most companies have largely remained on the sidelines as they grapple with legitimate concerns regarding intellectual property, security and overall quality.

    While it’s important for organizations to fully consider the implications of disclosing their intellectual property to these third-party systems and be aware of ongoing quality concerns yet to be addressed, they also can’t afford to ignore such important technological breakthroughs. Though the concerns are valid, it’s also important to recognize that they will likely be addressed soon. The technology is only getting more sophisticated, and the longer they wait, the harder it will be to catch up.

    Related: ChatGPT vs. Bard: A Modern Day David and Goliath Story. Who Will Win?

    We’ve seen this pattern play out plenty of times; an innovation is unveiled, businesses widely acknowledge its disruptive potential and then refuse to engage with it due to some valid but ultimately — in the grand scheme of things — misplaced concerns.

    For example, I can still recall when concerns regarding intellectual property, security and privacy discouraged many organizations from using third-party email servers; they instead devoted significant resources to developing and operating in-house email. The same happened when personal mobile devices were initially banned from the workplace, and again when cloud technology was introduced and then widely avoided. Now every company has a cloud strategy.

    For large, legacy companies with significant investments in in-house, non-cloud-native applications, the costs and challenges of starting the journey to the cloud were so daunting that they pushed it off. AWS, Azure and GCP have been available for years, and yet many Fortune 500 companies are still in just the early stages of adopting and strategically leveraging these services.

    Related: It’s Time to Prepare for the Algorithmic Workforce

    For those making significant investments now, it obviously would have been cheaper, faster, and better if that journey had started years ago. Ultimately, time wasted yields competitive ground to the leaner startups that embraced the cloud and can move more quickly.

    Today, companies are once again faced with a game-changing technology and yet have similar concerns regarding intellectual property, ownership, security, legal and compliance. The difference this time, however, is that the scale, sophistication and openness of the new AI models are even more advanced, and the technology is expected to evolve at an even faster pace than we have seen in the past.

    While the need to address these concerns is valid, and quality issues with these platforms are real, we’ve overcome such challenges countless times over; we can expect they will be solved in this instance. In the meantime, I firmly believe at least some small investment should be dedicated to understanding the art of the possible and its limitations and working through the intellectual property, security, and legal issues.

    Throughout history, countless inventions have improved human productivity. Software engineers today are more productive than engineers from decades ago. What changed? It certainly wasn’t the capacity of the human brain. Instead, our heightened productivity is thanks to new software engineering frameworks, platforms, and tools. AI tools represent the next major leap in this journey. Just imagine what an AI engine that can pass college-level exams can do when it’s purpose-built to help software engineers write code.

    While there are risks associated with the technology in its early stage, the most significant risk most tech companies face is waiting too long and allowing the competition to onboard the technology first.

    Related: 5 Fears All Entrepreneurs Face (and How to Conquer Them)

    Start-ups are in a particularly advantageous position, as they have much less to lose and much more to gain by taking a bold risk on early AI adoption. However, large enterprises can begin dabbling with generative AI by finding low-risk use cases. They should also ensure that this is considered a top priority for legal and security teams and adequately communicate the significant stakes.

    While the applicability of these technologies is broad, I recommend finding a pragmatic, simple area to begin experimenting and learning, then expand from there. Perhaps even host an in-house hackathon to see all the creative solutions your teams think up.

    There are countless opportunities to experiment with generative AI across marketing, engineering, customer service, and many business functions. While being conscious of the risks and taking steps to mitigate them, it makes sense to start small. However, getting started is important; otherwise, you may risk getting left behind.


    James Barrese

    Source link