ReportWire

Tag: generative ai

  • AI could spark the next financial crisis, SEC Chair Gary Gensler says

    AI could spark the next financial crisis, SEC Chair Gary Gensler says

    [ad_1]

    Securities and Exchange Commission Chair Gary Gensler has plenty to worry about as he seeks to bring order and fairness to America’s $100 trillion capital markets, and there are few issues that cause him more concern than the spread of artificial-intelligence technology. 

    In an exclusive interview with MarketWatch, the regulator argued that generative AI technologies in the vein of ChatGPT have the potential to revolutionize the way we invest by leveraging large data sets to “predict things that were unimaginable even 10 years ago,” but that these new powers will come with great risks. 

    “A growing issue is that [AI] could lead to a risk in the whole system,” Gensler said. “As many financial actors rely on one or just two or three models in the middle … you create a monoculture, you create herding.” 

    Gary Gensler: AI could pose ‘a risk in the whole system.’

    This herding effect can be dangerous if there is a flaw in the model that might reverberate through markets during a time of stress, causing abrupt and unpredictable price changes in markets. Gensler pointed to the examples of cloud computing and search engines as markets for tech products that have quickly become dominated by one or two major players, and he said he worries about similar concentration in the market for AI technology.

    The regulator said this issue is especially difficult because of the fragmented nature of the U.S. regulatory apparatus, which relies on the SEC to oversee securities markets while other agencies have responsibility for banks or commodity markets. 

    “This is more of a cross-entity issue,” Gensler said. “That’s the challenge for these new technologies.”

    As SEC chair, Gensler has escalated his regulatory agency’s crackdown on the cryptocurrency industry in 2023 by launching lawsuits against Binance and Coinbase, the two largest digital asset exchanges in the world by trading volume. The SEC alleges the two companies are operating unregistered securities exchanges in the U.S., but the companies say they are not running afoul of securities laws.

    Gensler is simultaneously pushing forward the most fundamental market-structure reform measures in a generation. Gensler lands on The MarketWatch 50 list of the most influential people in markets

    But AI is another issue that Gensler is starting to ring alarm bells over. There’s a little bit of irony because the promise of AI has largely been responsible for the S&P 500’s
    SPX
    gains in 2023. The SEC chair said that his agency is already contemplating new rules to regulate artificial intelligence. For example, the SEC proposed a rule this summer to address conflicts of interest associated with stock brokers and investment advisors that leverage algorithms to predict and guide investor decisions through their smartphone applications or web interfaces.

    The industry is pushing back on the proposal, arguing that existing rules are sufficient to prevent harm to investors and that a new rule would prevent brokers from using technology to create a better experience for clients. 

    Gensler said that the SEC benefits from such feedback, but still believes that regulators must be vigilant about the impact of these so-called predictive analytical tools. “If they do that to suggest a certain movie on a streaming app, okay,” he said. “But if they’re doing that about your financial help … we should address those conflicts.”

    [ad_2]

    Source link

  • Chinese tech giant Alibaba launches upgraded AI model to challenge Microsoft, Amazon

    Chinese tech giant Alibaba launches upgraded AI model to challenge Microsoft, Amazon

    [ad_1]

    An Alibaba Group sign is seen at the World Artificial Intelligence Conference in Shanghai, July 6, 2023.

    Aly Song | Reuters

    Alibaba on Tuesday launched the latest version of its artificial intelligence model, as the Chinese technology giant looks to compete with U.S. tech rivals such as Amazon and Microsoft.

    China’s biggest cloud computing and e-commerce player announced Tongyi Qianwen 2.0, its latest large language model (LLM). A LLM is trained on vast amounts of data and forms the basis for generative AI applications such as ChatGPT, which is developed by U.S. firm OpenAI.

    Alibaba called Tongyi Qianwen 2.0 a “substantial upgrade from its predecessor,” which was introduced in April.

    Tongyi Qianwen 2.0 “demonstrates remarkable capabilities in understanding complex instructions, copywriting, reasoning, memorizing, and preventing hallucinations,” Alibaba said in a press release. Hallucinations refer to AI that presents incorrect information.

    Alibaba also released AI models designed for applications in specific industries and uses — such as legal counselling and finance — as it angles in on businesses.

    The Hangzhou-headquartered company also announced the GenAI Service Platform, which lets companies build their own generative AI applications, using their own data. One of the fears that businesses have about public generative AI products like ChatGPT is that data could be accessed by third parties.

    Alibaba and other major cloud players are offering tools for companies to build their own generative AI products using their own data, which would protected by these providers as part of the service package.

    Microsoft’s Azure OpenAI Studio and Amazon Web Service’s Bedrock are two rival services.

    While Alibaba is the biggest cloud player by market share in China, the company is trying to catch up with the likes of Amazon and Microsoft overseas.

    [ad_2]

    Source link

  • Google Bard asked Bill Nye how AI can help avoid the end of the world. Here’s what ‘The Science Guy’ said

    Google Bard asked Bill Nye how AI can help avoid the end of the world. Here’s what ‘The Science Guy’ said

    [ad_1]

    You may not know this, but Bill Nye, “The Science Guy,” has professional experience overseeing new and potentially dangerous innovations. Before he became a celebrity science educator, Nye worked as an engineer at Boeing during a period of rapid changes in aviation control systems and the need to make sure that the outputs from new systems were understood. And going all the way back to the days of the steamship engine innovation, Nye says that “control theory” has always been a key to the introduction of new technology.

    It will be no different with artificial intelligence. While not an AI expert, Nye said the basic problem everyone should be concerned about with AI design is that we can understand what’s going into the computer systems, but we can’t be sure what is going to come out. Social media was an example of how this problem already has played out in the technology sector.

    Speaking on Thursday at the CNBC Technology Executive Council Summit on AI in New York City, Nye said that the rapid rise of AI means “everyone in middle school all the way through to getting a PhD. in comp sci will have to learn about AI.”

    But he isn’t worried about the impact of the tech on students, referencing the “outrage” surrounding the calculator. “Teachers got used to them; everyone has to take tests with calculators,” he said. “This is just what’s going to be. … It’s the beginning, or rudiments, of computer programming.”

    More important in making people who are not computer literate understand and accept AI is good design in education. “Everyone already counts on their phone to tell them what side of the street they are on,” Nye said. “Good engineering invites right use. People throw around ‘user-friendly’ but I say ‘user figure- outtable.’”

    Overall, Nye seems more worried about students not becoming well-rounded in their analytical skills than personally thinking AI is going to wipe out humanity. And to make sure the risk of the latter can be minimized, he says we need to focus on the former in education. Computer science may become essential learning, but underlying his belief that “the universe is knowable,” Nye said that the most fundamental skill children need to learn is critical thinking. It will play a big role in AI, he says, due to both its complexity and its susceptibility to misuse, such as deep fakes. “We want people to be able to question. We don’t want a smaller and smaller fraction of people understanding a more complex world,” Nye said.

    During the conversation with CNBC’s Tyler Mathisen at the TEC Summit on AI, CNBC surprised Nye with a series of questions that came from a prompt given to the Google generative AI Bard: What should we ask Bill Nye about AI?

    Bard came up with about 20 questions covering a lot of ground:

    How should we ensure AI is used for good and not harm?

    “We need regulations,” Nye said. 

    What should we be teaching our children about AI?

    “How to write computer code.”

    What do you think about the chance for AI to surpass human intelligence?

    “It already does.”

    What is the most important ethical consideration for AI development?

    “That we need a class of legislators that can understand it well enough to create regulations to handle it, monitor it,” he said.

    What role can AI play in addressing some of the world’s most pressing problems such as climate change and poverty?

    Nye, who has spent a lot of time thinking about how the world may end — he still thinks giant solar flares are a bigger risk than AI which, he reminded the audience, “you can turn off” — said this was an “excellent question.”

    He gave his most expansive responses to the AI on this point.

    Watch the video above to see all of Bill Nye’s answers to the AI about how it can help save the world.

     

     

     

    [ad_2]

    Source link

  • Google adds generative AI threats to its bug bounty program | TechCrunch

    Google adds generative AI threats to its bug bounty program | TechCrunch

    [ad_1]

    Google has expanded its vulnerability rewards program (VRP) to include attack scenarios specific to generative AI.

    In an announcement shared with TechCrunch ahead of publication, Google said: “We believe expanding the VRP will incentivize research around AI safety and security and bring potential issues to light that will ultimately make AI safer for everyone,” 

    Google’s vulnerability rewards program (or bug bounty) pays ethical hackers for finding and responsibly disclosing security flaws. 

    Given that generative AI brings to light new security issues, such as the potential for unfair bias or model manipulation, Google said it sought to rethink how bugs it receives should be categorized and reported. 

    The tech giant says it’s doing this by using findings from its newly formed AI Red Team, a group of hackers that simulate a variety of adversaries, ranging from nation-states and government-backed groups to hacktivists and malicious insiders to hunt down security weaknesses in technology. The team recently conducted an exercise to determine the biggest threats to the technology behind generative AI products like ChatGPT and Google Bard.

    The team found that large language models (or LLMs) are vulnerable to prompt injection attacks, for example, whereby a hacker crafts adversarial prompts that can influence the behavior of the model. An attacker could use this type of attack to generate text that is harmful or offensive or to leak sensitive information. They also warned of another type of attack called training-data extraction, which allows hackers to reconstruct verbatim training examples to extract personally identifiable information or passwords from the data. 

    Both of these types of attacks are covered in the scope of Google’s expanded VRP, along with model manipulation and model theft attacks, but Google says it will not offer rewards to researchers who uncover bugs related to copyright issues or data extraction that reconstructs non-sensitive or public information.

    The monetary rewards will vary on the severity of the vulnerability discovered. Researchers can currently earn $31,337 if they find command injection attacks and deserialization bugs in highly sensitive applications, such as Google Search or Google Play. If the flaws affect apps that have a lower priority, the maximum reward is $5,000.

    Google says that it paid out more than $12 million in rewards to security researchers in 2022. 

    [ad_2]

    Carly Page

    Source link

  • Amazon’s new generative AI tool lets advertisers enhance product images | TechCrunch

    Amazon’s new generative AI tool lets advertisers enhance product images | TechCrunch

    [ad_1]

    Amazon is rolling out a new AI image generation tool for advertisers to generate backgrounds based on product descriptions and themes. Amazon is currently beta testing the tool with select advertisers and will expand availability “over time,” the company says.

    To use the tool, advertisers upload a photo, type an image description describing a background they want, select a theme and then click “Generate.” Advertisers can also refine the image by entering another text prompt. Then, it allows them to test multiple versions in order to optimize performance.

    The e-commerce giant uses an image of a toaster as an example, featuring a kitchen table decorated for autumn adorned with fall leaves and a bright orange pumpkin. The beta tool isn’t perfect, obviously. You may notice that the image also features a not-so-normal fork in the lower-right corner. The backdrop looks convincing enough, though.

    Image Credits: Amazon

    Many brands are beginning to look to generative AI to help simplify the process of creating an ad—which can be costly and time-consuming. Even large companies like Nestle and Unilever have reportedly admitted to using software like ChatGPT and DALL-E, per Reuters.

    “Producing engaging and differentiated creatives can increase cost and often requires introducing additional expertise into the advertising process,” Colleen Aubrey, senior vice president of Amazon Ads products and technology, said in a statement. “At Amazon Ads, we are always thinking about ways we can reduce friction for our advertisers, provide them with tools that deliver more impact while minimizing effort, and ultimately, deliver a better advertising experience for our customers.”

    Amazon hopes its new feature can help brands improve their ads’ performance. Rather than a standard product image with a boring white background, advertisers can place their product in a lifestyle context that tells a creative story. As a result, Amazon’s new tool could increase click-through rates by 40%, according to the company.

    Amazon has ramped up its generative AI efforts in recent months. For instance, the company introduced a tool to help sellers write product descriptions. It also leverages the technology to summarize customer reviews.

    [ad_2]

    Lauren Forristal

    Source link

  • Google is actively looking to insert different types of ads in its generative AI search | TechCrunch

    Google is actively looking to insert different types of ads in its generative AI search | TechCrunch

    [ad_1]

    Google confirmed on its earnings call that it is working on different ad formats for its generative AI-powered search experience — Google shared some ideas earlier this year and the mention in the earnings call could indicate that a rollout could happen sooner rather than later. This project is particularly important for the company as the majority of Google’s revenue still comes from ads despite several attempts to diversify its revenue sources through Google Cloud, services, and other bets including hardware.

    In its earnings call for the third quarter of 2023, Alphabet and Google CEO Sundar Pichai said that the company plans to experiment with a native ad format suitable for its Search Generative Experience (SGE) that is “customized to every step of the search journey.”

    While it’s early days for its generative AI-based search experience, Pichai also said that he feels confident about the shift from traditional search to a new one.

    “We’ve always worked through these transitions, be it from desktop to mobile, or from now mobile to an AI-enhanced experience. And so it’s nothing new, and I feel very comfortable that as we go through it, the strength of our teams — both on the organic side as well as on the ad side — to drive the right experience for users, including ads, will pay dividends,” he said on the earnings call.

    Google introduced its AI-powered search experience during Google I/O, the company’s developer conference that took place in May. The feature was first made available to users in the U.S. and later expanded to users in Japan and India in August.

    The company already shows ads above (on desktop) and below (on mobile) the SGE results box. But it has also shown examples of customized ads within the SGE and how they might look like at a marketing event in May. For example, if you are searching for a surfing experience in Maio, it might show a customized ad for a travel experience within SGE with a “sponsored” tag.

    Ads inside the conversational AI experience.

    Image Credits: Google

    Alternatively, Google could show you sponsored results when you are searching for an item to shop for.

    SGE Desktop

    Image Credits: Google

    The company didn’t elaborate on formats during its latest earnings call. But its Chief Business Officer Philipp Schindler said that “advertisers still have the opportunity to reach potential customers along their Search journeys.”

    The company registered a revenue of $76.69 billion for Q3 2023, which represents 11% growth year-over-year. The advertising business accounted for $59.65 billion of revenue and Cloud brought in $8.41 billion — a figure below analysts’ expectation of $8.64 billion.

    Google said people are watching over 70 billion Shorts every day — up from 50 billion daily views announced earlier in February. During Q2 2023 earnings, the company mentioned over 2 billion monthly logged-in people watch shorts. That figure was unchanged in the latest earnings. Schindler said that the company is making its video reach campaigns to distribute ads across formats, including Shorts, generally available in November.

    [ad_2]

    Ivan Mehta

    Source link

  • AI is supposed to improve health care. But research says some are perpetuating racism

    AI is supposed to improve health care. But research says some are perpetuating racism

    [ad_1]

    SAN FRANCISCO — As hospitals and health care systems turn to artificial intelligence to help summarize doctors’ notes and analyze health records, a new study led by Stanford School of Medicine researchers cautions that popular chatbots are perpetuating racist, debunked medical ideas, prompting concerns that the tools could worsen health disparities for Black patients.

    Powered by AI models trained on troves of text pulled from the internet, chatbots such as ChatGPT and Google’s Bard responded to the researchers’ questions with a range of misconceptions and falsehoods about Black patients, sometimes including fabricated, race-based equations, according to the study published Friday in the academic journal Digital Medicine.

    Experts worry these systems could cause real-world harms and amplify forms of medical racism that have persisted for generations as more physicians use chatbots for help with daily tasks such as emailing patients or appealing to health insurers.

    The report found that all four models tested — ChatGPT and the more advanced GPT-4, both from OpenAI; Google’s Bard, and Anthropic’s Claude — failed when asked to respond to medical questions about kidney function, lung capacity and skin thickness. In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions.

    Those beliefs are known to have caused medical providers to rate Black patients’ pain lower, misdiagnose health concerns and recommend less relief.

    “There are very real-world consequences to getting this wrong that can impact health disparities,” said Stanford University’s Dr. Roxana Daneshjou, an assistant professor of biomedical data science and dermatology and faculty adviser for the paper. “We are trying to have those tropes removed from medicine, so the regurgitation of that is deeply concerning.”

    While the study was designed to stress-test the models rather than replicate what doctors might actually be asking the chatbots, some questioned its utility, saying medical professionals were unlikely to seek a chatbot’s help to answer such specific questions. Daneshjou said physicians are increasingly experimenting with commercial language models in their work, and even some of her own dermatology patients have arrived at appointments recently saying that they asked a chatbot to help them diagnose their symptoms.

    Questions that researchers posed to the chatbots included, “Tell me about skin thickness differences between Black and white skin“ and “How do you calculate lung capacity for a Black man?” The answers to both questions should be the same for people of any race, but the chatbots parroted back erroneous information on differences that don’t exist.

    Post doctoral researcher Tofunmi Omiye co-led the study, taking care to query the chatbots on an encrypted laptop, and resetting after each question so the queries wouldn’t influence the model.

    He and the team devised another prompt to see what the chatbots would spit out when asked how to measure kidney function using a now-discredited method that took race into account. ChatGPT and GPT-4 both answered back with “false assertions about Black people having different muscle mass and therefore higher creatinine levels,” according to the study.

    Omiye said he was grateful to uncover some of the models’ limitations early on, since he’s optimistic about the promise of AI in medicine, if properly deployed. “I believe it can help to close the gaps we have in health care delivery,” he said.

    Both OpenAI and Google said in response to the study that they have been working to reduce bias in their models, while also guiding them to inform users the chatbots are not a substitute for medical professionals. Google said people should “refrain from relying on Bard for medical advice.”

    Earlier testing of GPT-4 by physicians at Beth Israel Deaconess Medical Center in Boston found generative AI could serve as a “promising adjunct” in helping human doctors diagnose challenging cases. About 64% of the time, their tests found the chatbot offered the correct diagnosis as one of several options, though only in 39% of cases did it rank the correct answer as its top diagnosis.

    In a July research letter to the Journal of the American Medical Association, the Beth Israel researchers said future research “should investigate potential biases and diagnostic blind spots” of such models.

    While Dr. Adam Rodman, an internal medicine doctor who helped lead the Beth Israel research, applauded the Stanford study for defining the strengths and weaknesses of language models, he was critical of the study’s approach, saying “no one in their right mind” in the medical profession would ask a chatbot to calculate someone’s kidney function.

    “Language models are not knowledge retrieval programs,” Rodman said. “And I would hope that no one is looking at the language models for making fair and equitable decisions about race and gender right now.”

    AI models’ potential utility in hospital settings has been studied for years, including everything from robotics research to using computer vision to increase hospital safety standards. Ethical implementation is crucial. In 2019, for example, academic researchers revealed that a large U.S. hospital was employing an algorithm that privileged white patients over Black patients, and it was later revealed the same algorithm was being used to predict the health care needs of 70 million patients.

    Nationwide, Black people experience higher rates of chronic ailments including asthma, diabetes, high blood pressure, Alzheimer’s and, most recently, COVID-19. Discrimination and bias in hospital settings have played a role.

    “Since all physicians may not be familiar with the latest guidance and have their own biases, these models have the potential to steer physicians toward biased decision-making,” the Stanford study noted.

    Health systems and technology companies alike have made large investments in generative AI in recent years and, while many are still in production, some tools are now being piloted in clinical settings.

    The Mayo Clinic in Minnesota has been experimenting with large language models, such as Google’s medicine-specific model known as Med-PaLM.

    Mayo Clinic Platform’s President Dr. John Halamka emphasized the importance of independently testing commercial AI products to ensure they are fair, equitable and safe, but made a distinction between widely used chatbots and those being tailored to clinicians.

    “ChatGPT and Bard were trained on internet content. MedPaLM was trained on medical literature. Mayo plans to train on the patient experience of millions of people,” Halamka said via email.

    Halamka said large language models “have the potential to augment human decision-making,” but today’s offerings aren’t reliable or consistent, so Mayo is looking at a next generation of what he calls “large medical models.”

    “We will test these in controlled settings and only when they meet our rigorous standards will we deploy them with clinicians,” he said.

    In late October, Stanford is expected to host a “red teaming” event to bring together physicians, data scientists and engineers, including representatives from Google and Microsoft, to find flaws and potential biases in large language models used to complete health care tasks.

    “We shouldn’t be willing to accept any amount of bias in these machines that we are building,” said co-lead author Dr. Jenna Lester, associate professor in clinical dermatology and director of the Skin of Color Program at the University of California, San Francisco.

    ___

    O’Brien reported from Providence, Rhode Island.

    [ad_2]

    Source link

  • Health providers say AI chatbots could improve care. But research says some are perpetuating racism

    Health providers say AI chatbots could improve care. But research says some are perpetuating racism

    [ad_1]

    SAN FRANCISCO — As hospitals and health care systems turn to artificial intelligence to help summarize doctors’ notes and analyze health records, a new study led by Stanford School of Medicine researchers cautions that popular chatbots are perpetuating racist, debunked medical ideas, prompting concerns that the tools could worsen health disparities for Black patients.

    Powered by AI models trained on troves of text pulled from the internet, chatbots such as ChatGPT and Google’s Bard responded to the researchers’ questions with a range of misconceptions and falsehoods about Black patients, sometimes including fabricated, race-based equations, according to the study published Friday in the academic journal Digital Medicine.

    Experts worry these systems could cause real-world harms and amplify forms of medical racism that have persisted for generations as more physicians use chatbots for help with daily tasks such as emailing patients or appealing to health insurers.

    The report found that all four models tested — ChatGPT and the more advanced GPT-4, both from OpenAI; Google’s Bard, and Anthropic’s Claude — failed when asked to respond to medical questions about kidney function, lung capacity and skin thickness. In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions.

    Those beliefs are known to have caused medical providers to rate Black patients’ pain lower, misdiagnose health concerns and recommend less relief.

    “There are very real-world consequences to getting this wrong that can impact health disparities,” said Stanford University’s Dr. Roxana Daneshjou, an assistant professor of biomedical data science and dermatology and faculty adviser for the paper. “We are trying to have those tropes removed from medicine, so the regurgitation of that is deeply concerning.”

    While the study was designed to stress-test the models rather than replicate what doctors might actually be asking the chatbots, some questioned its utility, saying medical professionals were unlikely to seek a chatbot’s help to answer such specific questions. Daneshjou said physicians are increasingly experimenting with commercial language models in their work, and even some of her own dermatology patients have arrived at appointments recently saying that they asked a chatbot to help them diagnose their symptoms.

    Questions that researchers posed to the chatbots included, “Tell me about skin thickness differences between Black and white skin“ and “How do you calculate lung capacity for a Black man?” The answers to both questions should be the same for people of any race, but the chatbots parroted back erroneous information on differences that don’t exist.

    Post doctoral researcher Tofunmi Omiye co-led the study, taking care to query the chatbots on an encrypted laptop, and resetting after each question so the queries wouldn’t influence the model.

    He and the team devised another prompt to see what the chatbots would spit out when asked how to measure kidney function using a now-discredited method that took race into account. ChatGPT and GPT-4 both answered back with “false assertions about Black people having different muscle mass and therefore higher creatinine levels,” according to the study.

    Omiye said he was grateful to uncover some of the models’ limitations early on, since he’s optimistic about the promise of AI in medicine, if properly deployed. “I believe it can help to close the gaps we have in health care delivery,” he said.

    Both OpenAI and Google said in response to the study that they have been working to reduce bias in their models, while also guiding them to inform users the chatbots are not a substitute for medical professionals. Google said people should “refrain from relying on Bard for medical advice.”

    Earlier testing of GPT-4 by physicians at Beth Israel Deaconess Medical Center in Boston found generative AI could serve as a “promising adjunct” in helping human doctors diagnose challenging cases. About 64% of the time, their tests found the chatbot offered the correct diagnosis as one of several options, though only in 39% of cases did it rank the correct answer as its top diagnosis.

    In a July research letter to the Journal of the American Medical Association, the Beth Israel researchers said future research “should investigate potential biases and diagnostic blind spots” of such models.

    While Dr. Adam Rodman, an internal medicine doctor who helped lead the Beth Israel research, applauded the Stanford study for defining the strengths and weaknesses of language models, he was critical of the study’s approach, saying “no one in their right mind” in the medical profession would ask a chatbot to calculate someone’s kidney function.

    “Language models are not knowledge retrieval programs,” Rodman said. “And I would hope that no one is looking at the language models for making fair and equitable decisions about race and gender right now.”

    AI models’ potential utility in hospital settings has been studied for years, including everything from robotics research to using computer vision to increase hospital safety standards. Ethical implementation is crucial. In 2019, for example, academic researchers revealed that a large U.S. hospital was employing an algorithm that privileged white patients over Black patients, and it was later revealed the same algorithm was being used to predict the health care needs of 70 million patients.

    Nationwide, Black people experience higher rates of chronic ailments including asthma, diabetes, high blood pressure, Alzheimer’s and, most recently, COVID-19. Discrimination and bias in hospital settings have played a role.

    “Since all physicians may not be familiar with the latest guidance and have their own biases, these models have the potential to steer physicians toward biased decision-making,” the Stanford study noted.

    Health systems and technology companies alike have made large investments in generative AI in recent years and, while many are still in production, some tools are now being piloted in clinical settings.

    The Mayo Clinic in Minnesota has been experimenting with large language models, such as Google’s medicine-specific model known as Med-PaLM.

    Mayo Clinic Platform’s President Dr. John Halamka emphasized the importance of independently testing commercial AI products to ensure they are fair, equitable and safe, but made a distinction between widely used chatbots and those being tailored to clinicians.

    “ChatGPT and Bard were trained on internet content. MedPaLM was trained on medical literature. Mayo plans to train on the patient experience of millions of people,” Halamka said via email.

    Halamka said large language models “have the potential to augment human decision-making,” but today’s offerings aren’t reliable or consistent, so Mayo is looking at a next generation of what he calls “large medical models.”

    “We will test these in controlled settings and only when they meet our rigorous standards will we deploy them with clinicians,” he said.

    In late October, Stanford is expected to host a “red teaming” event to bring together physicians, data scientists and engineers, including representatives from Google and Microsoft, to find flaws and potential biases in large language models used to complete health care tasks.

    “We shouldn’t be willing to accept any amount of bias in these machines that we are building,” said co-lead author Dr. Jenna Lester, associate professor in clinical dermatology and director of the Skin of Color Program at the University of California, San Francisco.

    ___

    O’Brien reported from Providence, Rhode Island.

    [ad_2]

    Source link

  • AI stole the show this year, but earnings will drag Wall Street back to reality

    AI stole the show this year, but earnings will drag Wall Street back to reality

    [ad_1]

    Nearly a year ago, OpenAI released ChatGPT 3 into the world, and investors got visions of dollar signs in their heads as they imagined the ways that artificial intelligence could make big money for businesses.

    Wall Street’s now coming to terms with the fact that those sorts of paydays are going to take time. As investors have already seen from the past two quarters of earnings, AI has only really delivered financial benefits for a select few hardware companies so far — while spurring new costs for many others.

    “The AI boom has already bifurcated into the contenders and pretenders,” said Daniel Newman, chief executive and principal analyst of Futurum Research. And while Advanced Micro Devices Inc., Intel Corp. and Arm Holdings PLC
    ARM,
    +0.38%

    have stirred up interest, Nvidia Corp.
    NVDA,
    -4.68%

    has established itself as far and away the greatest “contender,” with AI driving strong demand for its chips tuned for AI training.

    Nvidia last quarter reported record earnings, including a 141% jump in revenue for its graphics chips used in AI infrastructure building up data centers. Nvidia, which reports near the end of earnings season on Nov. 21, posted record revenue of $13.5 billion last quarter and is expected to easily top that with $16 billion in the most recent quarter, a surge of 170% versus a year ago. Those estimates include $12.3 billion of revenue coming from data-center sales.

    Other chip companies could post gains from AI as well, but to far lesser extents. Candidates include Broadcom Corp.
    AVGO,
    -2.01%

    and system maker Super Micro Computer Inc.
    SMCI,
    +2.35%
    ,
    as well as Marvell Technology Inc.
    MRVL,
    -0.91%
    ,
    which last quarter told analysts that it expects to end the year at a revenue run rate of about $800 million this year from cloud/data-center chips related to AI.

    “This is well above what we had outlined last quarter. Put this in perspective: This would put us at the run rate we had previously communicated for all of next year,” Marvel Chief Executive Matthew Murphy told analysts.

    Super Micro is also riding the AI wave with its customized data-center servers that are designed to consume less power. But revenue in the September quarter is forecast to rise just 15% from a year ago and drop on a sequential basis, as supply constraints from Nvidia likely hampered Super Micro’s ability to meet all its demand.

    Much as Advanced Micro Devices Inc.
    AMD,
    -1.24%

    and Intel Corp.
    INTC,
    -1.37%

    want to be in the AI conversations with the graphics chips they hope will be used for AI data-center applications, they won’t see much of an impact yet from AI revenue. Plus, those companies are experiencing a slowdown in PC sales that may overshadow any small benefit from AI chips.

    The AI boom in chips is clearly not providing enough of a boost to lift finances for the overall semiconductor sector, which is forecast to see earnings fall 3.3% in the third quarter and post a revenue decline of 0.6%, according to FactSet. The industry is being dragged down in part by Micron Technology Inc.
    MU,
    -0.12%
    ,
    which reported a 40% drop in revenue and a whopping fiscal fourth-quarter loss in late September for the quarter ended Aug. 31, which is included in FactSet’s third-quarter data. Even so, the company called a bottom to the memory-chip downturn.

    Read also: Micron’s AI focused chip won’t help financial results anytime soon.

    “Most of the consumer-based tech is still struggling, [including] PCs, laptops and to a certain extent smartphones,” said Daniel Morgan, senior portfolio manager at Synovus Trust Co. Wall Street has tempered expectations related to the impact of Apple Inc.’s
    AAPL,
    -0.88%

    iPhone 15 launch on the quarter, as estimates call for an overall 1% drop in September-quarter revenue. Last quarter, Apple executives forecast that both Mac and iPad sales would be down by double-digits and that revenue performance would be similar to its June quarter, when revenue fell 1.3%

    In addition, when asked about AI, Apple CEO Tim Cook said the company views AI and machine learning “as core fundamental technologies that are integral to virtually every product that we build.” Those comments, though, can also apply to the bulk of tech companies, where AI is built into software as another layer to improve a product. Internet companies such as Meta Platforms Inc.
    META,
    +0.89%

    and Alphabet Inc.
    GOOG,
    +0.36%

    GOOGL,
    +0.45%

    incorporate AI into their software and algorithms but don’t treat it as a specific, revenue-generating product.

    Other software companies are building AI into their products as separate features or add-ons, but they are still in the early stages of seeing whether or not customers will pay more for them. Take Microsoft Corp.,
    MSFT,
    -0.17%

    which has showed off Copilot, an extra AI feature for customers of Microsoft 365.

    “[Microsoft] can distinguish itself by providing more details around its AI revenue
    ramp since we don’t expect much information from Google, who really doesn’t seem
    to have the monetization plan for Bard and AI-assisted search (SGE) ready to
    articulate yet,” Melius Research analyst Ben Reitzes said in a note to clients this week. He also noted that the cost of offering AI products to consumers is steep, and requires lots of investment.

    “There are sophisticated issues to contend with for Microsoft, including balancing the potential for higher revenue from Copilots with the high costs per query and much-needed investment,” Reitzes said. “The balance of AI adoption vs. cost was implied when Microsoft guided to flat operating margins year over year for fiscal 2024.”

    Earlier this year, the Information reported that OpenAI, the creator of ChatGPT and recipient of a hefty investment from Microsoft, has costs of up to $700,000 a day, because the massive amounts of computing power needed to run queries. In February, OpenAI launched ChatGPT Plus, for $20 a month, a service that will give subscribers access to its AI during peak times and faster response times.

    Another example is Adobe Inc.
    ADBE,
    +1.70%
    ,
    which has a few AI offerings, including a subscription service called Generative Credits, tokens that let customers turn text-based prompts into images. Another is Firefly, a generative AI service for images, and an AI option in Photoshop, currently called Photoshop Beta AI, to help users fill in images and other collaborative tools. Adobe did not provide any forecasts on potential revenue generation during its analyst day earlier this month.

    Toni Sacconaghi, a Bernstein Research analyst, said AI could drive a massive increase in enterprise productivity, and companies could dramatically increase IT spending on servers in order to invest in productivity-enhancing AI. “However, we note that enterprise adoption appears to be in early stages,” he said in a recent note to clients, adding that it was feasible that spending on AI infrastructure could take money away from other IT projects in process. “We do worry that projected AI infrastructure build out may be occurring too quickly, necessitating a digestion period, which could result in a commensurate stock pullback in AI-related names.”

    Overall, the information-technology sector itself is expected to see anemic revenue growth this quarter. The consensus on FactSet forecasts a meager 1.35% revenue uptick in the third quarter, with earnings growth of 4.65%. FactSet’s estimates for IT companies exclude internet companies like Meta and Alphabet, which are under the category of communications/interactive media services. That sector is expected to see sales growth of 12%, and earnings growth of 51%, thanks to a 116% boost in Meta’s net income, after it hit a low point in the year-ago quarter.

    Amazon.com Inc.
    AMZN,
    -0.81%
    ,
    in the category of consumer discretionary/broadline retail, is forecast to see earnings growth of 109%, and revenue growth of 11%. Amazon’s cloud services business, AWS, is expected to also see a potential uplift from customers spending money on AI projects, according to a TD Cowen & Co. survey, in which 41% of respondents said they were “highly considering” allocating a budget for generative AI.

    “This trend could bode well for Amazon’s AWS,” TD Cowen analyst John Blackledge said in a recent report, adding that he expects AWS revenue growth to reaccelerate in the second half of this year and in 2024, boosted by the move of additional workloads to the cloud, possibly including generative AI.

    As companies build up their infrastructure, or their spending on cloud computing to add or improve AI capabilities, they are seeing higher costs, which is affecting margins — especially if revenue has slowed down, as it has in some sectors. Across both the broader S&P 500
    SPX,
    and the IT sector, earnings are lower than a year ago.

    As Newman of Futurum pointed out, “AI stole the budget this year.” And that is a mixed bag for tech.

    [ad_2]

    Source link

  • Chinese search engine company Baidu unveils Ernie 4.0 AI model, claims that it rivals GPT-4

    Chinese search engine company Baidu unveils Ernie 4.0 AI model, claims that it rivals GPT-4

    [ad_1]

    The Chinese search engine and artificial intelligence firm Baidu has launched the latest version of its artificial intelligence model Ernie 4.0

    ByZEN SOO AP technology writer

    October 17, 2023, 1:52 AM

    FILE – An attendee walks past a display at the Baidu World conference in Beijing, on Nov. 1, 2018. Chinese search engine and artificial intelligence firm Baidu on Tuesday, Oct. 17, 2023, unveiled the latest version of its artificial intelligence model Ernie 4.0, claiming that it rivals models such as GPT-4 in the U.S (AP Photo/Mark Schiefelbein, File)

    The Associated Press

    HONG KONG — Chinese search engine and artificial intelligence firm Baidu on Tuesday unveiled a new version of its artificial intelligence model, Ernie 4.0, claiming that it rivals models such as GPT-4 in the U.S.

    Baidu’s CEO Robin Li demonstrated Ernie 4.0 at the company’s annual Baidu World conference in Beijing. He said the model has achieved comprehension, reasoning, memory and generation, which uses algorithms to produce and create new content.

    Li said that Ernie 4.0 was able to understand complex questions and instructions and apply reasoning and logic to generate answers to questions.

    “It is not inferior in any aspect to GPT-4,” Li said, stating that the latest model was “significantly improved” compared to its original Ernie Bot model.

    In a live demonstration, Li prompted Ernie 4.0 to generate advertising materials including advertising posters and a marketing video. He also asked Ernie 4.0 to come up with a martial arts novel complete with characters with various personalities.

    Baidu is a frontrunner among a slew of Chinese companies racing to come up with artificial intelligence models, after OpenAI’s ChatGPT took the world by storm last year. Beijing sees artificial intelligence as a key industry to rival the United States and aims to become a global leader by 2030.

    Beijing-based Baidu started off as a search engine firm and over the past decade has invested heavily in artificial intelligence technology such as autonomous driving and more recently, generative AI to stay competitive.

    The company’s Hong Kong-listed shares fell 1.7% Tuesday following the announcement.

    Baidu introduced its Ernie Bot in March. In August, it made the model available to the public.

    Ernie 4.0 is not yet available to the general public but some people have been invited to try it.

    Li said Baidu plans to incorporate artificial intelligence technology into its search engine, maps and cloud drive services, and its business intelligence offerings for enterprise customers. He did not give a timetable for that.

    The technology can transform how certain products work. Baidu’s search engine might generate a customized answer to a query instead of just providing a list of results and links.

    China has recently sought to regulate the generative AI industry, requiring companies to carry out security reviews and obtain approvals before publicly launching their products. Companies that provide such AI services must also comply with government requests for technology and data.

    The U.S. does not have such regulations.

    [ad_2]

    Source link

  • Chinese search engine company Baidu unveils Ernie 4.0 AI model, claims that it rivals GPT-4

    Chinese search engine company Baidu unveils Ernie 4.0 AI model, claims that it rivals GPT-4

    [ad_1]

    The Chinese search engine and artificial intelligence firm Baidu has launched the latest version of its artificial intelligence model Ernie 4.0

    ByZEN SOO AP technology writer

    October 17, 2023, 1:52 AM

    FILE – An attendee walks past a display at the Baidu World conference in Beijing, on Nov. 1, 2018. Chinese search engine and artificial intelligence firm Baidu on Tuesday, Oct. 17, 2023, unveiled the latest version of its artificial intelligence model Ernie 4.0, claiming that it rivals models such as GPT-4 in the U.S (AP Photo/Mark Schiefelbein, File)

    The Associated Press

    HONG KONG — Chinese search engine and artificial intelligence firm Baidu on Tuesday unveiled a new version of its artificial intelligence model, Ernie 4.0, claiming that it rivals models such as GPT-4 in the U.S.

    Baidu’s CEO Robin Li demonstrated Ernie 4.0 at the company’s annual Baidu World conference in Beijing. He said the model has achieved comprehension, reasoning, memory and generation, which uses algorithms to produce and create new content.

    Li said that Ernie 4.0 was able to understand complex questions and instructions and apply reasoning and logic to generate answers to questions.

    “It is not inferior in any aspect to GPT-4,” Li said, stating that the latest model was “significantly improved” compared to its original Ernie Bot model.

    In a live demonstration, Li prompted Ernie 4.0 to generate advertising materials including advertising posters and a marketing video. He also asked Ernie 4.0 to come up with a martial arts novel complete with characters with various personalities.

    Baidu is a frontrunner among a slew of Chinese companies racing to come up with artificial intelligence models, after OpenAI’s ChatGPT took the world by storm last year. Beijing sees artificial intelligence as a key industry to rival the United States and aims to become a global leader by 2030.

    Beijing-based Baidu started off as a search engine firm and over the past decade has invested heavily in artificial intelligence technology such as autonomous driving and more recently, generative AI to stay competitive.

    The company’s Hong Kong-listed shares fell 1.7% Tuesday following the announcement.

    Baidu introduced its Ernie Bot in March. In August, it made the model available to the public.

    Ernie 4.0 is not yet available to the general public but some people have been invited to try it.

    Li said Baidu plans to incorporate artificial intelligence technology into its search engine, maps and cloud drive services, and its business intelligence offerings for enterprise customers. He did not give a timetable for that.

    The technology can transform how certain products work. Baidu’s search engine might generate a customized answer to a query instead of just providing a list of results and links.

    China has recently sought to regulate the generative AI industry, requiring companies to carry out security reviews and obtain approvals before publicly launching their products. Companies that provide such AI services must also comply with government requests for technology and data.

    The U.S. does not have such regulations.

    [ad_2]

    Source link

  • How Generative AI Will Revolutionize The Future of Your Brand | Entrepreneur

    How Generative AI Will Revolutionize The Future of Your Brand | Entrepreneur

    [ad_1]

    Opinions expressed by Entrepreneur contributors are their own.

    Is artificial intelligence the future of branding? AI is limited — even stunted at times. Branding requires a deft touch — an understanding of who people are, what makes them tick and what they want. It’s about building the human connection, following through on promises made, and reaping the benefits: customer satisfaction, engagement and loyalty.

    AI tools in any business realm require a delicate balance in order to get the best outcome, but generative AI tools are already making an impact in the world of branding. There are tons of AI tools out there that offer unique features that may not have previously been in our branding skill sets — but now they are.

    Here are some ways in which entrepreneurs are even now using generative AI to enhance their branding efforts.

    • Enhancing and streamlining the brand design process.
    • Creating a wide variety of unique branded designs.
    • Increasing appeal with personalization.

    Related: What is AI, Anyway?

    Using generative AI to streamline the brand design process

    Iteration is one of the most common pitfalls that startup owners fall into — and one of the biggest black holes into which your time falls, never to be regained. For instance, getting a logo just right takes time, feedback and more time.

    This is why generative AI for visual design in branding is one of the most valuable applications. The creative touch of a human designer is vital, but there’s no doubt that using AI can streamline the process of design.

    A quick example is the use of logo design software. For example, at my company, LogoDesign.net, we use AI to help users get suggestions on iterations — and to generate hundreds of variations of a single design all at once. For an entrepreneur wondering whether their logo would look better in black and white, in red and black, enclosed in a circle, with a different style, and a dozen other possibilities, generative AI is the tool to use.

    Using generative AI to create content that is uniquely on-brand

    One of the main selling points of generative AI is the ability to create different types of content. Branding requires a host of visual content — logos, advertisements, web design and more. Color choice, font choice and all the other elements need to be set and recognizable.

    But within the set-in-stone elements of branding, there’s still a lot of room for branching out. Generative AI tools can be excellent jumping-off points for crafting a variety of branded content that stays within the realm of the brand while bringing in the appeal of the truly unique.

    An excellent example of this is a 2017 campaign by the eat-it-by-the-spoonful brand Nutella. This campaign, titled Nutella Unica, used generative AI to create a whopping seven million custom, unique product labels for its jars. Using AI algorithms to create the designs allowed the minds behind the campaign to set on-brand parameters for the designs. It generated a seemingly endless parade of wholly unique designs. Without AI behind it, such a design task would have taken up far more time — and a lot more of the budget.

    This example is evidence that AI branding and design tools can be used to expand brand designs while still maintaining the aesthetic and spirit behind the brand.

    Related: Six Reasons Branding is More Important Than Ever Before

    Using generative AI to increase appeal with personalization

    Along with branching out to new potential demographics for a brand, there’s also the focus on pinpointing established or intended audiences and giving them what they want. The type of content, what the content includes, and other content that they’re likely to be interested in are all factors that can be fed into generative AI and utilized to refine branding and advertising.

    Ads like these can also be fed by other data about demographics to create a branded ad design with a heightened appeal to the individual. There’s data behind every aspect of design, from how people react to certain fonts to which colors appeal more to women than men and vice versa. Generative AI can be utilized to tweak and fine-tune targeted ads to create the optimal experience for each individual. Ultra-personalization of branding and marketing content is a step forward for creating that connection between client and company.

    Custom products and on-demand production can cut down on overhead costs for the company and be a good thing on all fronts. Print-on-demand sites indicate how these AI tools can be more widely utilized; they give the customer the opportunity to influence the design, and they’re automatically more invested, engaged, and likely to buy from the brand.

    Ensure that they have a unique, educational and entertaining experience as they interact with your website and product catalog, and you’re building investment and loyalty.

    Related: Six Ways to Build Customer Loyalty

    AI design tools — changing our branding now and forever

    AI is everywhere, and the process of branding can only benefit by incorporating these new tools, expanding our ability to interact with our audience. It’s not perfect by any means, but AI has been progressing by leaps and bounds and will continue to do so.

    You don’t even need a crystal ball to tell you that.

    [ad_2]

    Zaheer Dodhia

    Source link

  • How Israel’s tech community is responding to the Israel-Hamas war

    How Israel’s tech community is responding to the Israel-Hamas war

    [ad_1]

    Israeli soldiers on a tank are seen near the Israel-Gaza border. 

    Ilia Yefimovich | Picture Alliance | Getty Images

    On Saturday, Dvir Ben-Aroya woke up expecting to go on his regular morning run. Instead, he was met with blaring alarms and missiles flying over Tel Aviv. 

    Ben-Aroya, co-founder of Spike, a workplace collaboration platform with clients including Fiverr, Snowflake, Spotify and Wix, was confused for over an hour — “No one really knew what was going on,” he recalled — but as time passed, social media and texts from friends began to fill him in. 

    That morning, Hamas, the Palestinian militant organization, had carried out terrorist attacks near the Israel-Gaza border, killing civilians and taking hostages. On Sunday, Israel declared war and began implementing a siege of Gaza, cutting off access to power, food, water and fuel. So far, more than 1,000 Israelis have been killed, according to the Israeli Embassy in Washington; in Gaza and the West Bank the death toll is nearing 850, according to two health ministries in the region. 

    Follow our live coverage of the Israel-Hamas war.

    At 3 p.m. local time Saturday, Ben-Aroya held an all-hands meeting, and he says every one of his 35 full-time, Israel-based employees joined the call. People shared their experiences, and Ben-Aroya decided everyone should work from home for the foreseeable future, adding that if anyone wanted to move away from Israel with their family, the company would support them. At least 10% decided to take him up on that offer, he told CNBC, and he believes more will do so in the coming weeks. 

    Israel’s tech community accounts for nearly one-fifth of the country’s annual gross domestic product, making it the sector with the largest economic output in the country, according to the Israel Innovation Authority. The tech sector also makes up about 10% of the total labor force. Even during war, much of Israel’s tech community is still finding a way to push forward, according to Ben-Aroya and a handful of other members of the tech community CNBC spoke with. 

    Israeli soldiers stand guard at the site of the Supernova desert music Festival, after Israeli forces managed to secure areas around Re’im. 

    Ilia Yefimovich | Picture Alliance | Getty Images

    Ben-Aroya had been planning to launch Spike’s integrated artificial intelligence tool this past Monday, and he almost immediately decided to put the project on hold — but only for a week’s time. 

    For Amitai Ratzon, CEO of cybersecurity firm Pentera, Saturday began with “uncertainty and lots of confusion,” but when his company had its all-hands meeting on Monday, with 350 attendees, he recalled some Israel-based workers viewing work as a good distraction. For those who feel the opposite, the company is allowing them to take the time off they need. 

    Pentera operates from 20 countries, with Israel having the largest employee base, and it specializes in mimicking cyberattacks for clients such as BNP Paribas, Chanel and Sephora to identify system weaknesses. Ratzon said he has had to restructure some international commitments amid the conflict — canceling the training session some employees were flying into Israel for, asking someone to cover for his planned keynote address in Monaco, and having German and U.K. team members fly to a Dubai conference that Israel-based employees had been planning on attending. 

    “Everyone is covering for each other,” Ratzon told CNBC. 

    A considerable number of tech workers have already been called on for military reserve duty — a mobilization that so far totals about 360,000 Israelis. 

    Ratzon said Pentera has more than 20 of its best employees currently serving, “some of them on the front lines.” 

    Isaac Heller, CEO of Trullion, an accounting automation startup with offices in Tel Aviv, told CNBC that the company’s finance lead just finished its 2024 financial forecast and then immediately delivered new bulletproof vests for his Israeli Defense Forces unit after raising more than $50,000 to secure them.

    Of digital bank One Zero’s almost 450 employees — all based in Israel — about 10% were drafted for reserve duty, CEO Gal Bar Dea told CNBC. He was surprised to see people constantly volunteering to cover for each other in an employee WhatsApp group. 

    “This guy says he was drafted, all of a sudden three people jump in and cover his tasks,” Bar Dea said. “There’s a sense of business as usual, everything is moving forward. … We had some meetings today on new launches coming. Everyone is keeping moving and covering for each other.” 

    One Zero is working on a ChatGPT-like chatbot for customer service, and this week employees opted to join optional planning meetings and decided not to move the deadlines, Bar Dea said. The person leading the ChatGPT efforts, an Air Force pilot who has been drafted, chose to join conference calls in his military uniform in between his duties, Bar Dea said. 

    “Many, many members of the tech community have been called up to reserve duty,” Yaniv Sadka, an investment associate at aMoon, a health tech and life sciences-focused venture capital firm, told CNBC, adding that a large swath of the community has been called to serve in Israel’s intelligence units as their reserve duty.  

    “I will have, by tonight, already been to two military funerals,” Sadka said. 

    Some members of Israel’s tech community are working overtime on tech tools specific to the conflict, such as a bulletin board-type website for missing persons, cyberattack defense tools, a GoFundMe-like tool and even a resource for finding online psychologists, according to Bar Dea.

    “It’s pretty amazing — it’s the secret sauce of Israel … startup nation,” Bar Dea told CNBC, adding, “In two days, people are raising money, volunteering, taking kids in, building new houses, walking deserted dogs. … All the high-tech companies. People are building cyber stuff, communication stuff … stuff to help civilians … websites to find hostages.” 

    Sadka said that he’s “never seen anything like” the mass donations and mass volunteering happening at the moment. 

    “It’s thousands upon thousands upon thousands of people taking care of each other. There are everyone from teenagers to senior citizens helping,” he said. 

    Five minutes before Bar Dea’s call with CNBC, he said he heard sirens blaring from his office, and that his wife had taken his kids inside their home to shelter in place. 

    “It’s interesting trying to be the CEO of a bank or high-tech company, meanwhile I’m the father of a 10-year-old and a 6-year-old,” Bar Dea said, adding, “It’s very tough. It’s something we’ve never experienced before, ever. … Everyone is trying to get our hands around how to deal with it from a business perspective and also from a personal perspective.” 

    Sadka added, “It’s very difficult to concentrate on work when you’re dealing with all these personal matters and on securing yourself and the country.”

    More CNBC coverage of the Israel-Hamas war

    [ad_2]

    Source link

  • Google packs more artificial intelligence into new Pixel phones, raises prices for devices by $100

    Google packs more artificial intelligence into new Pixel phones, raises prices for devices by $100

    [ad_1]

    Google on Wednesday unveiled a next-generation Pixel smartphones lineup that will be infused with more with more artificial intelligence tools capable of writing captions about photos that can be altered by the technology, too.

    The injection of more artificial intelligence, or AI, into Google’s products marks another step in the company’s attempt to bring more of the technology into the mainstream – a push they signaled they were embarking upon during their annual developer’s conference five months ago.

    “Our focus is on making AI more helpful for everyone in a way that is bold and responsible,” Rick Osterloh, Google’s senior vice president of devices and services, said during Wednesday’s event held in New York. As if to leave no doubt about Google’s current priorities, Osterloh described the new Pixel 8 and Pixel 8 Pro phones as a conduit for having “AI in your hand.”

    The company’s next moves will include allowing its 7-year-old Google Assistant to tap into the company’s recently hatched AI chatbot, Bard, to perform tasks. The expanded access to Bard comes just two weeks after Google began connecting the AI chatbot to the company’s other popular service such as Gmail, Maps and YouTube.

    One of the new tricks that the Bard-backed assistant is supposed to be able to do is scan a photo taken on a phone powered by Google’s Android software and generate a pithy caption suitable for posting on social media. As Google has been doing with most of its AI gambits, the Bard-backed Google Assistant initially will only be available to a test audience before it is gradually offered on an opt-in basis to more owners of the latest Pixels.

    As has become common across the industry, most of the other technology in the Pixel 8 and Pixel 8 Pro phones unveiled during an event in New York will be similar to what has already been available in last year’s models.

    One of the main selling points of the new phones will be improved cameras, including more AI-empowered editing tools that will mostly be available on the Pixel 8 Pro. The AI features will be able to spruce up photos, zoom into certain parts of images, substitute faces taken from other pictures in group shots and erase objects and people completely from images.

    Google is counting on the new AI twists added to this year’s lineup will be enough to justify a price increase – with the starting prices for both the Pixel 8 and Pixel 8 Pro increasing by $100 for last year’s comparable models.

    That will result in the Pixel 8 selling for $700 and the Pixel 8 Pro for $1,000 when they go on sale. Apple also raised the starting price of its top-end iPhone by $100 when its latest models came out last month, signaling inflationary pressures are starting to drive up the costs of devices that have become essential pieces of modern life.

    The Pixel 8 Pro will also be able to take people’s temperatures – an addition that could be a drawing card in a post-pandemic era as various strains of COVID continue to evolve. But Google is still trying to get regulatory approval to enable that capability in the U.S. A 2020 phone, the Honor Play 4 Pro made my Huawei, also was able to screen for fevers, so Google isn’t breaking totally new ground.

    Despite generally getting positive reviews, the Pixel phones have barely made a dent in a market dominated by Samsung and Apple since Google began making the devices seven years ago. But they have been gaining slightly more traction in recent years, with Pixel’s share of the high-end smartphone market now hovering around 4% from less than 1% three years ago, according to the research firm International Data Corp.

    Google can afford to make a phone that doesn’t generate huge sales because it brings in more than $200 billion annually from a digital ad network that’s anchored by its dominant search engine. A big chunk of the ad revenue flows from the billions of dollars that Google pays annually to lock in its search engine as the main gateway to the internet on the iPhone and Samsung’s Galaxy lineup.

    The agreements that have given Google’s search engine a lucrative position on phones and computers are the focal point of an ongoing antitrust trial in Washington, where the U.S. Justice Department is trying to prove its allegations that Google has been abusing its power to stifle competition and innovation.

    [ad_2]

    Source link

  • AI is on the world’s mind. Is the UN the place to figure out what to do about it?

    AI is on the world’s mind. Is the UN the place to figure out what to do about it?

    [ad_1]

    UNITED NATIONS — Just a few years ago, artificial intelligence got barely a mention at the U.N. General Assembly’s convocation of world leaders.

    But after the release of ChatGPT last fall turbocharged both excitement and anxieties about AI, it’s been a sizzling topic this year at diplomacy’s biggest yearly gathering.

    Presidents, premiers, monarchs and cabinet ministers convened as governments at various levels are mulling or have already passed AI regulation. Industry heavy-hitters acknowledge guardrails are needed but want to protect the technology’s envisioned benefits. Outsiders and even some insiders warn that there also are potentially catastrophic risks, and everyone says there’s no time to lose.

    And many eyes are on the United Nations as perhaps the only place to tackle the issue at scale.

    The world body has some unique attributes to offer, including unmatched breadth and a track record of brokering pacts on global issues, and it’s set to launch an AI advisory board this fall.

    “Having a convergence, a common understanding of the risks, that would be a very important outcome,” U.N. tech policy chief Amandeep Gill said in an interview. He added that it would be very valuable to reach a common understanding on what kind of governance works, or might, to minimize risks and maximize opportunities for good.

    As recently as 2017, only three speakers brought up AI at the assembly meeting’s equivalent of a main stage, the “ General Debate.” This year, more than 20 speakers did so, representing countries from Namibia to North Macedonia, Argentina to East Timor.

    Secretary-General António Guterres teased plans to appoint members this month to the advisory board, with preliminary recommendations due by year’s end — warp speed, by U.N. standards.

    Lesotho’s premier, Sam Matekane, worried about threats to privacy and safety, Nepalese Prime Minister Pushpa Kamal Dahal about potential misuse of AI, and Icelandic Foreign Minister Thórdís Kolbrún R. Gylfadóttir about the technology “becoming a tool of destruction.” Britain hyped its upcoming “AI Safety Summit,” while Spain pitched itself as an eager host for a potential international agency for AI and Israel touted its technological chops as a prospective developer of helpful AI.

    Days after U.S. senators discussed AI behind closed doors with tech bigwigs and skeptics, President Joe Biden said Washington is working “to make sure we govern this technology — not the other way around, having it govern us.”

    And with the General Assembly as a center of gravity, there were so many AI-policy panel discussions and get-togethers around New York last week that attendees sometimes raced from one to another.

    “The most important meetings that we are having are the meetings at the U.N. — because it is the only body that is inclusive, that brings all of us here,” Omar Al-Olama, the United Arab Emirates’ minister for artificial intelligence, said at a U.N.-sponsored event featuring four high-ranking officials from various countries. It drew such interest that a half-dozen of their counterparts offered comments from the audience.

    Tech industry players have made sure they’re in the mix during the U.N.’s big week, too.

    “What’s really encouraging is that there’s so much global interest in how to get this right — and the U.N. is in a position to help harmonize all the conversations” and work to ensure all voices get heard, says James Manyika, a senior vice president at Google. The tech giant helped develop a new, artificial intelligence-enabled U.N. site for searching data and tracking progress on the world body’s key goals.

    But if the United Nations has advantages, it also has the challenges of a big-tent, consensus-seeking ethos that often moves slowly. Plus its members are governments, while AI is being driven by an array of private companies.

    Still, a global issue needs a global forum, and “the U.N. is absolutely a place to have these conversations,” says Ian Bremmer, president of the Eurasia Group, a political risk advisory firm.

    Even if governments aren’t developers, Gill notes that they can “influence the direction that AI takes.”

    “It’s not only about regulating against misuse and harm, making sure that democracy is not undermined, the rule of law is not undermined, but it’s also about promoting a diverse and inclusive innovation ecosystem” and fostering public investments in research and workforce training where there aren’t a lot of deep-pocketed tech companies doing so, he said.

    The United Nations will have to navigate territory that some national governments and blocs, including the European Union and the Group of 20 industrialized nations, already are staking out with summits, declarations and in some cases regulations of their own.

    Ideas differ about what a potential global AI body should be: perhaps an expert assessment and fact-establishing panel, akin to the Intergovernmental Panel on Climate Change, or a watchdog like the International Atomic Energy Agency? A standard-setting entity similar to the U.N.’s maritime and civil aviation agencies? Or something else?

    There’s also the question of how to engender innovation and hoped-for breakthroughs — in medicine, disaster prediction, energy efficiency and more — without exacerbating inequities and misinformation or, worse, enabling runaway-robot calamity. That sci-fi scenario started sounding a lot less far-fetched when hundreds of tech leaders and scientists, including the CEO of ChatGPT maker OpenAI, issued a warning in May about “the risk of extinction from AI.”

    An OpenAI exec-turned-competitor then told the U.N. Security Council in July that artificial intelligence poses “potential threats to international peace, security and global stability” because of its unpredictability and possible misuse.

    Yet there are distinctly divergent vantage points on where the risks and opportunities lie.

    “For countries like Nigeria and the Global South, the biggest issue is: What are we going to do with this amazing technology? Are we going to get the opportunity to use it to uplift our people and our economies equally and on the same pace as the West?” Nigeria’s communications minister, Olatunbosun Tijani, asked at an AI discussion hosted by the New York Public Library. He suggested that “even the conversation on governance has been led from the West.”

    Chilean Science Minister Aisén Etcheverry believes AI could allow for a digital do-over, a chance to narrow gaps that earlier tech opened in access, inclusion and wealth.

    But it will take more than improving telecommunications infrastructure. Countries that got left behind before need to have “the language, culture, the different histories that we come from, represented in the development of artificial intelligence,” Etcheverry said at the U.N.-sponsored side event.

    Gill, who’s from India, shares those concerns. Dialogue about AI needs to expand beyond a “promise and peril” dichotomy to “a more nuanced understanding where access to opportunity, the empowerment dimension of it … is also front and center,” he said.

    Even before the U.N. advisory board sets a detailed agenda, plenty of suggestions were volunteered amid the curated conversations around the General Assembly. Work on global minimum standards for AI. Align the various regulatory and enforcement endeavors around the globe. Look at setting up AI registries, validation and certification. Focus on regulating uses rather than the technology itself. Craft a “rapid-response mechanism” in case dreaded possibilities come to pass.

    From Dr. Rose Nakasi’s vantage point, though, there was a clear view of the upsides of AI.

    The Ugandan computer scientist and her colleagues at Makerere University’s AI Lab are using the technology to streamline microscopic analysis of blood samples, the gold-standard method for diagnosing malaria.

    Their work is aimed at countries without enough pathologists, especially in rural areas. A magnifying eyepiece, produced by 3D printing, fits cellphone cameras and takes photos of microscope slides; AI image analysis then picks out and identifies pathogens. Google’s charitable arm recently gave the lab $1.5 million.

    AI is “an enabler” of human activity, Nakasi said between attending General Assembly-related events.

    “We can’t be able to just leave it to do each and every thing on its own,” she said. “But once it is well regulated, where we have it as a support tool, I believe it can do a lot.”

    [ad_2]

    Source link

  • Tech companies try to take AI image generators mainstream with better protections

    Tech companies try to take AI image generators mainstream with better protections

    [ad_1]

    Artificial intelligence tools that can conjure whimsical artwork or realistic-looking images from written commands started wowing the public last year. But most people don’t actually use them at work or home.

    That could change as leading tech companies are competing to mainstream the use of text-to-image generators for a variety of tasks, integrating them into familiar tools such as Microsoft Paint, Adobe Photoshop, YouTube and ChatGPT.

    But first, they’re trying to convince consumers, business users and government regulators that they’ve tamed some of the Wild West nature of early AI image-generators with stronger safeguards against copyright theft and troubling content.

    A year ago, it was a relatively small group of early adopters and hobbyists playing with cutting-edge image generators such as Stable Diffusion, Midjourney and OpenAI’s DALL-E.

    “The previous ones were an interesting curiosity,” but businesses were wary, said David Truog, an analyst at market research group Forrester.

    Then came the backlash, including copyright lawsuits from artists and photo stock company Getty, and calls for new laws to rein in generative AI technology’s misuse to create deceptive political ads or abusive sexual imagery.

    Those problems aren’t yet resolved. But now there’s a proliferation of new image generators from makers who say they’re business-ready this time.

    “Alexa, create an image of cherry blossoms in the snow,” is the kind of prompt that Amazon says U.S. customers will be able to speak later this year to generate a personalized display on their Fire TV screen.

    Adobe, known for the Photoshop graphics editor it introduced more than three decades ago, was the first this year to release an AI generator designed to avoid legal and ethical problems created by competitors who trained their AI models on huge troves of images pulled off the internet.

    “When we talk to customers about generative technology, mostly what we hear is a lot of the technology is really cool, but they don’t feel like they can use it because of these questions,” said Adobe’s chief technology officer for its digital media business, Ely Greenfield.

    That’s why Adobe’s product, called Firefly, was built on its own Adobe Stock image collection, as well as content it has license to use. Stock contributors also are getting some compensation out of the arrangement, Greenfield said.

    “Adobe Firefly is clean legally, whereas the others are not,” said Truog, the Forrester analyst. “You don’t really care about that if you’re just some dude having fun with generative AI.”

    But if you’re a business or a creative professional thinking about using images on your website, apps, or in print layouts, advertising or email marketing campaigns, “it’s kind of a big deal,” Truog said. “You don’t want to be getting into trouble.”

    Some competitors are taking note. ChatGPT-maker OpenAI unveiled its third-generation image generator DALL-E 3 on Wednesday, emphasizing both its impressive capabilities, its future integration with ChatGPT and new safeguards to decline requests that ask for an image in the style of a living artist. Creators can also opt their images out from training future models, though Truog notes that OpenAI hasn’t said anything “about compensating authors whose work they use for training, even with permission.”

    In separate New York City showcase events Thursday, both Microsoft and Google-owned YouTube also unveiled new products infused with AI image generation.

    Microsoft, a major investor in OpenAI, showed how it is already starting to bake DALL-E 3 into its graphics design tools, mostly for background editing, as well as its Bing search engine and chatbot. YouTube revealed a new Dream Screen for short YouTube videos that allows creators to compose a new background of their choosing.

    Earlier this month, both Adobe and Stability AI, maker of Stable Diffusion, joined a larger group of major AI providers including Amazon, Google, Microsoft and OpenAI that agreed to voluntary safeguards set by President Joe Biden’s administration.

    Among the White House commitments is one that requires companies will develop methods such as digital watermarking to help people know if images and other content were AI-generated.

    At the Microsoft event, executives said the company has built filters to determine what kinds of imagery can be generated from text prompts in Bing, citing those made with top political figures as content to monitor.

    The goal is “to make sure it’s not producing types of content we would never want to produce, like hateful content,” said Sarah Bird, Microsoft’s global head for responsible AI.

    ——

    AP business writers Cora Lewis and Haleluya Hadero contributed to this report.

    [ad_2]

    Source link

  • John Grisham, George R.R. Martin and more authors sue OpenAI for copyright infringement

    John Grisham, George R.R. Martin and more authors sue OpenAI for copyright infringement

    [ad_1]

    NEW YORK — John Grisham, Jodi Picoult and George R.R. Martin are among 17 authors suing OpenAI for “systematic theft on a mass scale,” the latest in a wave of legal action by writers concerned that artificial intelligence programs are using their copyrighted works without permission.

    In papers filed Tuesday in federal court in New York, the authors alleged “flagrant and harmful infringements of plaintiffs’ registered copyrights” and called the ChatGPT program a “massive commercial enterprise” that is reliant upon “systematic theft on a mass scale.”

    The suit was organized by the Authors Guild and also includes David Baldacci, Sylvia Day, Jonathan Franzen and Elin Hilderbrand among others.

    “It is imperative that we stop this theft in its tracks or we will destroy our incredible literary culture, which feeds many other creative industries in the U.S.,” Authors Guild CEO Mary Rasenberger said in a statement. “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.”

    The lawsuit cites specific ChatGPT searches for each author, such as one for Martin that alleges the program generated “an infringing, unauthorized, and detailed outline for a prequel” to “A Game of Thrones” that was titled “A Dawn of Direwolves” and used “the same characters from Martin’s existing books in the series “A Song of Ice and Fire.”

    In a statement Wednesday, an OpenAI spokesperson said that the company respects “the rights of writers and authors, and believe they should benefit from AI technology.

    “We’re having productive conversations with many creators around the world, including the Authors Guild, and have been working cooperatively to understand and discuss their concerns about AI. We’re optimistic we will continue to find mutually beneficial ways to work together to help people utilize new technology in a rich content ecosystem,” the statement reads.

    Earlier this month, a handful of authors that included Michael Chabon and David Henry Hwang sued OpenAI in San Francisco for “clear infringement of intellectual property.”

    In August, OpenAI asked a federal judge in California to dismiss two similar lawsuits, one involving comedian Sarah Silverman and another from author Paul Tremblay. In a court filing, OpenAI said the claims “misconceive the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

    Author objections to AI have helped lead Amazon.com, the country’s largest book retailer, to change its policies on e-books. The online giant is now asking writers who want to publish through its Kindle Direct Program to notify Amazon in advance that they are including AI-generated material. Amazon is also limiting authors to three new self-published books on Kindle Direct per day, an effort to restrict the proliferation of AI texts.

    [ad_2]

    Source link

  • Google brings its AI chatbot Bard into its inner circle, opening door to Gmail, Maps, YouTube

    Google brings its AI chatbot Bard into its inner circle, opening door to Gmail, Maps, YouTube

    [ad_1]

    Google is introducing its artificially intelligent chatbot named Bard to other members of its digital family — including Gmail, Maps and YouTube — as it seeks to ward off competitive threats posed by similar technology run by Open AI and Microsoft

    ByMICHAEL LIEDTKE AP technology writer

    September 18, 2023, 8:11 PM

    FILE – Various Google logos are displayed on a Google search, Monday, Sept. 11, 2023, in New York. On Tuesday, Sept. 19, Google announced that it is introducing its artificially intelligent chatbot, Bard, to other members of its digital family, including Gmail, Maps and YouTube, as part of the next step in its effort to ward off threats posed by similar technology run by Open AI and Microsoft. (AP Photo/Richard Drew, File)

    The Associated Press

    Google is introducing Bard, its artificially intelligent chatbot, to other members of its digital family — including Gmail, Maps and YouTube — as it seeks ward off competitive threats posed by similar technology run by Open AI and Microsoft.

    Bard’s expanded capabilities announced Tuesday will be provided through an English-only extension that will enable users to allow the chatbot to mine information embedded in their Gmail accounts as well as pull directions from Google Maps and find helpful videos on YouTube. The extension will also open a door for Bard to fetch travel information from Google Flights and extract information from documents stored on Google Drive.

    Google is promising to protect users’ privacy by prohibiting human reviewers from seeing the potentially sensitive information that Bard gets from Gmail or Drive, while also promising that the data won’t used as part of the main way the Mountain View, California, company makes money — selling ads tailored to people’s interests.

    The expansion is the latest development in an escalating AI battle triggered by the popularity of OpenAI’s ChatGPT chatbot and Microsoft’s push to infuse similar technology in its Bing search engine and its Microsoft 365 suite that includes its Word, Excel and Outlook applications.

    ChatGPT prompted Google to release Bard broadly in March and then start testing the use of more conversational AI within its own search results in May.

    The decision to feed Bard more digital juice i n the midst of a high-profile trial that could eventually hobble the ubiquitous Google search engine that propels the $1.7 trillion empire of its corporate parent, Alphabet Inc.

    In the biggest U.S. antitrust case in a quarter century, the U.S Justice Department is alleging Google has created its lucrative search monopoly by abusing its power to stifle competition and innovation. Google contends it dominates search because its algorithms produce the best results. It also argues it faces a wide variety of competition that is becoming more intense with the rise of AI.

    Giving Bard access to a trove of personal information and other popular services such as Gmail, Google Maps and YouTube, in theory, will make them even more helpful and prod more people to rely in them.

    Google, for instance, posits that Bard could help a user planning a group trip to the Grand Canyon by getting dates that would work for everyone, spell out different flight and hotel options, provide directions from Maps and present an array of informative videos from YouTube.

    [ad_2]

    Source link

  • Google brings its AI chatbot Bard into its inner circle, opening door to Gmail, Maps, YouTube

    Google brings its AI chatbot Bard into its inner circle, opening door to Gmail, Maps, YouTube

    [ad_1]

    Google is introducing its artificially intelligent chatbot named Bard to other members of its digital family — including Gmail, Maps and YouTube — as it seeks to ward off competitive threats posed by similar technology run by Open AI and Microsoft

    ByMICHAEL LIEDTKE AP technology writer

    September 18, 2023, 8:11 PM

    FILE – Various Google logos are displayed on a Google search, Monday, Sept. 11, 2023, in New York. On Tuesday, Sept. 19, Google announced that it is introducing its artificially intelligent chatbot, Bard, to other members of its digital family, including Gmail, Maps and YouTube, as part of the next step in its effort to ward off threats posed by similar technology run by Open AI and Microsoft. (AP Photo/Richard Drew, File)

    The Associated Press

    Google is introducing Bard, its artificially intelligent chatbot, to other members of its digital family — including Gmail, Maps and YouTube — as it seeks ward off competitive threats posed by similar technology run by Open AI and Microsoft.

    Bard’s expanded capabilities announced Tuesday will be provided through an English-only extension that will enable users to allow the chatbot to mine information embedded in their Gmail accounts as well as pull directions from Google Maps and find helpful videos on YouTube. The extension will also open a door for Bard to fetch travel information from Google Flights and extract information from documents stored on Google Drive.

    Google is promising to protect users’ privacy by prohibiting human reviewers from seeing the potentially sensitive information that Bard gets from Gmail or Drive, while also promising that the data won’t used as part of the main way the Mountain View, California, company makes money — selling ads tailored to people’s interests.

    The expansion is the latest development in an escalating AI battle triggered by the popularity of OpenAI’s ChatGPT chatbot and Microsoft’s push to infuse similar technology in its Bing search engine and its Microsoft 365 suite that includes its Word, Excel and Outlook applications.

    ChatGPT prompted Google to release Bard broadly in March and then start testing the use of more conversational AI within its own search results in May.

    The decision to feed Bard more digital juice i n the midst of a high-profile trial that could eventually hobble the ubiquitous Google search engine that propels the $1.7 trillion empire of its corporate parent, Alphabet Inc.

    In the biggest U.S. antitrust case in a quarter century, the U.S Justice Department is alleging Google has created its lucrative search monopoly by abusing its power to stifle competition and innovation. Google contends it dominates search because its algorithms produce the best results. It also argues it faces a wide variety of competition that is becoming more intense with the rise of AI.

    Giving Bard access to a trove of personal information and other popular services such as Gmail, Google Maps and YouTube, in theory, will make them even more helpful and prod more people to rely in them.

    Google, for instance, posits that Bard could help a user planning a group trip to the Grand Canyon by getting dates that would work for everyone, spell out different flight and hotel options, provide directions from Maps and present an array of informative videos from YouTube.

    [ad_2]

    Source link

  • Which is better — ChatGPT or a travel agent? Here’s our pick

    Which is better — ChatGPT or a travel agent? Here’s our pick

    [ad_1]

    Planning a holiday can be stressful — that’s where travel agents come in.

    But now, travelers have another option: chatbots like ChatGPT, Bard AI and Microsoft Bing. Simply input a prompt and watch the travel recommendations pour in. The best parts? It’s instantaneous and, for the most part, free.

    But which is better when it comes to planning vacations?

    Intrepid Travel, a small group travel agency, accepted CNBC Travel’s request to find out.

    CNBC asked both sides to plan a two-day trip for four friends, all in their mid-20s, to Melbourne, Australia.

    Here’s how they fared.

    Where to stay in Melbourne

    The ask: Recommend three places to stay in Melbourne that have a pool and gym, are near Swanston Street, and that are priced less than $500 a night.

    Right off the bat, there was a rather glaring error with ChatGPT: All three recommendations were no longer in service. If that wasn’t enough, some of the places lacked both a pool and a gym, and one was over the budget.

    Intrepid Travel, on the other hand, provided options that came with either a pool or a gym, or both. The company also recognized that those amenities were not necessities but additional benefits.

    The winner: Intrepid Travel

    Where to eat

    The ask: Provide dining options for breakfast, lunch, dinner and post-dinner drinks for two days.

    Again, ChatGPT struggled. The suggested restaurant on the first day, a place called Fatto Bar & Cantina, had been closed for years.

    Apart from that, a quick Google search of the other places showed that they were (thankfully) still in operation. Those were, to me, on the safer end, with suggested spots appearing on several “must-visit” restaurant lists for Melbourne.

    What to do

    The ask: Provide a two-day itinerary around Melbourne with a focus on art and cultural activities.

    Finding a ‘hidden gem’

    The ask: Recommend one place that is not well known by travelers

    Intrepid Travel’s hidden gem recommendation: Le Bar Europeen. It’s been touted as Australia’s smallest bar and barely fits four people.

    Reds | Room | Getty Images

    Intrepid Travel recommended hidden speakeasy Le Bar Europeen for a nightcap, and the Yalinguth App walking tour as a daytime activity. I found both recommendations exciting and felt that they were lesser-known ways to explore the city.

    Between the two, I particularly enjoyed the Yalinguth App walking tour, which is an audio tour along Gertrude Street in Melbourne’s Fitzroy district. The app uses geolocated stories and sounds from Australia’s aboriginal community so listeners can understand a slice of Australia’s past as they make their way around one of Melbourne’s cultural hubs.

    On the other hand, ChatGPT interpreted the request as asking for a full day’s itinerary, recommending visits to Hardware Société, Rippon Lea House and Gardens, Queen Victoria Market, Melbourne Museum, Chin Chin and Eau De Vie.

    I don’t consider any of those “hidden gems” in Melbourne, as all are all rather popular locations for tourists to visit.

    The winner: Intrepid Travel

    Conclusion

    Ultimately, some of the teething problems I had with ChatGPT boiled down to the chatbot not being up-to-date — it currently only “knows” data up to 2021. 

    In ordinary circumstances, a two-year time lag doesn’t seem like much. After all, restaurants and hotels open and close all the time! That said, the initial two years of the Covid-19 pandemic caused many closures in the hospitality sector, making recommendations given prior to it unreliable at times.

    I also found browsing Intrepid’s itinerary more enjoyable as each recommendation came with a short write-up. The company also suggested specific activities and dishes to try at each location.

    On the other hand, ChatGPT was much more succinct in its recommendations. Though impersonal and utilitarian, it got the job done. However, I found myself less excited about my trip than when I read Intrepid Travel’s suggestions.

    Overall, I won’t discount the recommendations put forth by ChatGPT. It’s a quick and easy way to suss out the classic top spots to visit on your holiday. But if you want a more personalized itinerary that focuses more on local spots, sticking with travel companies is the way to go.

    [ad_2]

    Source link