ReportWire

Tag: generative ai

  • What to Know About Generative AI in Corporate Workplaces | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    Companies of all sizes have become accustomed to using predictive AI to achieve a range of outcomes, such as anticipating risk, developing new products and forecasting buying behaviors. However, many enterprises are struggling to figure out how to realistically incorporate generative AI into their operations. It offers many advantages, of course, but it’s also surrounded by fear and uncertainty.

    Perhaps because of that, only 12% of IT decision-makers recently surveyed by Enterprise Technology Research, as reported by the Wall Street Journal, said they plan to use technology from OpenAI, creator of the most popular generative AI tool, ChatGPT. Yet, the global generative AI market is expected to reach $111 billion by 2030, per Acumen Research and Consulting.

    With all the buzz around it and advancements in the technology, there’s little doubt that generative AI is going to be an asset across industries as varied as healthcare, insurance and logistics. However, it’s a newer solution. As such, businesses and their leadership teams are only starting to determine how best to leverage it to its fullest — and safest — degree.

    This leaves corporate leaders at a crossroads. Many want to bring generative AI solutions in-house. Some — particularly those at enterprise-level corporations — have even put a budget behind this desire. They want to access this emerging technology in the most efficient ways possible. I believe the easiest way to make that happen is for businesses to join forces with AI-based startups.

    Related: The Secret to How Businesses Can Fully Harness the Power of AI

    Attributes, advantages and areas of concern around generative AI

    Because of its continual learning capacity, generative AI might well be described as creative AI. That is, it can create content that didn’t exist before. While that’s exciting, it’s brought about much discussion on how to handle its downsides, such as inaccuracies. Generative AI isn’t able to identify or self-correct when it gets things wrong or even pushes out content that’s inappropriate or biased.

    Another overarching problem with generative AI concerns data. Because it’s trained on vast amounts of data, it may produce content that violates intellectual property rights. What is the law around generative AI content that leans heavily on existing content? It’s a fine line between unique expression and plagiarism, and the laws haven’t quite caught up to where that line lies.

    In addition, vertical, industry-specific solutions with unique data libraries, rather than general generative AI models, provide the most applicable answers but can be costly. Accessing the vast amounts of data needed to produce accurate insights is expensive, and the computing power required carries costs that quickly become unsustainable. However, Microsoft appears to be exploring collaborations with AMD to lower computing costs, and emerging software techniques could reduce compute consumption.

    Of course, generative AI is far from being all negatives and no positives. Due to its transformative nature as a technology, it could become a tool for sector disruption, helping companies save time and resources and improve their decision-making.

    I see generative AI as a value-added tool that’s only going to become more capable and intelligent. New models are emerging that could address the issues of cost by using smaller data sets, but it will take a few years for new models to evolve to a stage where they are affordable and user-friendly enough for practical applications. At present, generative AI is most effective when used in conjunction with human input. Human intervention fosters consideration of different perspectives and minimizes ethical risks and the impact of flawed data.

    Take ChatGPT, for example. The quality of its output and answers depends on the quality of the input and human intelligence involved. To get high-quality answers, content and results from ChatGPT, human users must take active roles in the process to create feedback loops. Otherwise, ChatGPT (and similar generative AI solutions) is interesting but not reliable or holistically useful.
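
    To make the feedback-loop idea concrete, here is a minimal sketch of a human-in-the-loop prompting cycle. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompts and number of review rounds are illustrative assumptions, not a recommendation from the author.

```python
# Minimal sketch of a human-in-the-loop feedback cycle with a chat model.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are a concise marketing copywriter."},
    {"role": "user", "content": "Draft a two-sentence product description for a reusable water bottle."},
]

for _ in range(3):  # a few review rounds, not an unattended pipeline
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    draft = reply.choices[0].message.content
    print(draft)

    feedback = input("Feedback (or press Enter to accept): ").strip()
    if not feedback:
        break  # the human reviewer is satisfied

    # Feed the critique back so the next draft incorporates it.
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": f"Please revise the draft: {feedback}"})
```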

    Related: The Top Fears and Dangers of Generative AI — and What to Do About Them

    Collaboration: Key to bringing generative AI solutions into corporate settings

    Collaboration between startups and corporate enterprises can be the game-changing factor across the entire generative AI landscape. Not only do partnerships allow founders to explore various options and even work with different model providers, but they also lower the barriers for companies to access generative AI. They also generate more interest in open-source model ecosystems. With open-source contributions, there can be a collective and effective effort to push generative AI’s boundaries, challenge dominant AI players and drive down costs. Ultimately, such collaboration fuels a positive innovation environment for both the startup and the collaborating corporation.

    Collaboration offers another opportunity: Businesses and generative AI startups can focus on implementation and adoption rather than investing in more fundamental systems. Such a partnership entices large companies to integrate generative AI into their workflows and makes it easier for the startup to iterate quickly and potentially attract more investors for future development.

    With that being said, enterprises won’t just enter into a partnership with a generative AI startup without consideration. Keep these things in mind to streamline and inform your decision-making when partnering:

    1. The CIO and CTO must be comfortable with the solution

    Right now, CIOs and CTOs are in a state of panic. Why? They’re being pressured by their boards to understand the implications of generative AI because it accesses sensitive data. Consequently, although partnering with a startup is a perfect way to train and retrain a generative AI model with industry-specific input to ensure accuracy and consistency, it may feel like a liability risk.

    To help the CIO and CTO get comfortable, talk about what data security measures are or could be put into place. This could include data encryption solutions and secure learning techniques. Once these measures are established, the major players in your business are likely to be more confident about implementing generative AI internally. Remember: Most CIOs and CTOs understand that generative AI will need domain knowledge and access to unique industry data libraries. They simply want to avoid a breach that could put your brand in an unwanted spotlight.
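
    As one hedged illustration of the encryption-at-rest measures mentioned above, the sketch below encrypts an industry data set before it is staged for a partner’s pipeline. It uses the Python cryptography package; the file name is hypothetical, and in practice the key would come from a managed secrets store rather than being generated inline.

```python
# Minimal sketch: encrypting industry data at rest before it is staged for
# a partner's model-training pipeline. Uses the `cryptography` package.
# The file names are hypothetical, and the key-management step is a placeholder:
# in production the key would live in a managed secrets store, not in code.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # retrieve from a secrets manager in production
cipher = Fernet(key)

with open("claims_history.csv", "rb") as f:          # hypothetical industry data set
    ciphertext = cipher.encrypt(f.read())

with open("claims_history.csv.enc", "wb") as f:
    f.write(ciphertext)

# The partner's pipeline decrypts only inside the agreed, access-controlled environment.
plaintext = cipher.decrypt(ciphertext)
```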

    Related: Generative AI: the Rising Kid on the Start-up Block

    2. The employees will have to learn how to effectively use generative AI

    If you want employees to jump in and use generative AI for a competitive advantage boost, you have to make it happen. This means more than just implementing generative AI applications. It means explaining best practices for the technology’s use and the data regulations that apply. Data regulation is still the subject of extensive discussion, so your team will need to stay up to date.

    Providing the most current information on the regulation of the usage and processing of data — not to mention data ownership concerns — to employees is critical. The more they know, the more they can control their generative AI usage and mitigate problems.

    Generative AI is making a huge splash across the world right now, especially with last year’s release of ChatGPT. While it’s still in its infancy, corporations such as yours can get ahead of the pack by working with startups developing generative AI models and applications. You just need to conduct some due diligence to ensure you get all the advantages of generative AI and avoid preventable snags.

    Lu Zhang and Trevor Mottl

    Source link

  • China’s Baidu makes AI chatbot Ernie Bot publicly available

    Chinese search engine and artificial intelligence firm Baidu has made its ChatGPT-equivalent language model, Ernie Bot, fully available to the public

    Staff members chat with each other at a booth promoting the AI chatbot Ernie Bot during the Wave Summit in Beijing on Aug. 16, 2023. Chinese search engine and artificial intelligence firm Baidu on Thursday made its ChatGPT-equivalent language model available to the public, in a sign of a green light from Beijing, which has in recent months taken steps to regulate the industry. (AP Photo/Andy Wong)

    The Associated Press

    HONG KONG — Chinese search engine and artificial intelligence firm Baidu made its ChatGPT-equivalent language model fully available to the public Thursday, raising the company’s stock price by over 3% following the announcement.

    Beijing sees artificial intelligence as a key industry to rival the United States and aims to become a global leader by 2030. Chinese technology firms have also raced to unveil their generative AI models — in which algorithms allow the technology to produce and create new content — after U.S. firm OpenAI launched the widely popular ChatGPT.

    Baidu said Thursday that Ernie Bot would be fully open to the general public via an app or an official website. By Thursday afternoon, the Ernie Bot app had topped the charts on Apple’s iOS store in China for free apps.

    By releasing the model publicly, Baidu will be able to collect massive real-world human feedback, according to Baidu CEO Robin Li, who said this would in turn help improve Ernie and Baidu’s foundation models.

    Like Europe, China has made efforts in recent months to regulate the generative AI industry.

    China issued AI regulations Aug. 15 requiring companies to carry out a security review and obtain approvals before their product can be publicly launched. Beijing also requires companies providing such generative AI services to comply with government requests for technology and data.

    The U.S. does not currently have comparable generative AI regulations in place.

    Baidu CEO Li said he was optimistic and described the AI regulations as “more pro-innovation than regulation” in the company’s earnings call earlier in August.

    Two other AI companies in China, Baichuan and Zhipu AI, also launched their AI language models Thursday.

    Source link

  • OpenAI launches ChatGPT Enterprise, the company’s biggest announcement since ChatGPT’s debut

    OpenAI on Monday announced its biggest news since ChatGPT’s debut: It’s launching ChatGPT Enterprise, the AI chatbot’s business tier, available starting Monday.

    The tool has been in development for “under a year” and had the help of more than 20 companies of varying sizes and industries, OpenAI COO Brad Lightcap told CNBC. ChatGPT Enterprise includes access to GPT-4 with no usage caps, performance that’s up to two times faster than previous versions, and API credits. Lightcap said that pricing would not be publicly announced and that “it will depend, for us, on every company’s use cases and size.” Beta users included Block, Canva and The Estée Lauder Cos.

    Earlier this year, Microsoft’s expanded investment in OpenAI — an additional $10 billion — made it the biggest AI investment of the year, according to PitchBook, and in April, the startup reportedly closed a $300 million share sale at a valuation between $27 billion and $29 billion, with investments from firms such as Sequoia Capital and Andreessen Horowitz. Two months after ChatGPT’s launch in November, it surpassed 100 million monthly active users, breaking records for the fastest-growing consumer application in history: “a phenomenal uptake – we’ve frankly never seen anything like it, and interest has grown ever since,” Brian Burke, a research vice president at Gartner, told CNBC in May.

    More than 80% of Fortune 500 companies had teams actively using ChatGPT, per Lightcap and OpenAI. 

    One key differentiator between ChatGPT Enterprise and the consumer-facing version: ChatGPT Enterprise will allow clients to input company data to train and customize ChatGPT for their own industries and use cases, although some of those features aren’t yet available in Monday’s debut. The company also plans to introduce another tier of usage, called ChatGPT Business, for smaller teams, but did not specify a timeline. 

    Lightcap told CNBC that rolling out the enterprise version first, and waiting on the business tier, “gives us a little bit more of a way to engage with teams in a hands-on way and understand what the deployment motion looks like before we fully open it up.” 

    OpenAI noted in a blog post that “We do not train on your business data or conversations, and our models don’t learn from your usage,” adding that clients’ conversation data would be encrypted both in transit and at rest. The company does, however, log aggregate data on how the tool is used, including performance metadata and more, as is relatively standard, Lightcap said.

    ChatGPT Enterprise’s debut comes as the AI arms race continues to heat up among chatbot leaders such as OpenAI, Microsoft, Google and Anthropic. In an effort to encourage consumers to adopt generative AI into their daily routines, tech giants are racing to launch not only new chatbot apps, but also new features. In May, OpenAI launched its iOS app, followed by the Android app in July. Google is regularly rolling out updates to its Bard chatbot, and Microsoft is doing the same with Bing, introducing features like visual search. Anthropic, the AI startup founded by ex-OpenAI executives, debuted a new AI chatbot, Claude 2, in July, months after raising $750 million over two financing rounds. 

    ChatGPT, like many large language models, is expensive to operate, with each chat likely costing OpenAI “single-digit cents,” according to a December tweet by CEO Sam Altman, suggesting that operating the service for 100 million people a month could cost millions of dollars.
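
    A quick back-of-the-envelope check shows how that estimate scales. The per-chat cost and chats-per-user figures below are assumptions for illustration, not reported numbers.

```python
# Rough arithmetic behind the "millions of dollars a month" estimate.
# Both inputs are assumptions: "single-digit cents" is taken as 3 cents,
# and each of the 100 million monthly users is assumed to run one chat.
users_per_month = 100_000_000
chats_per_user = 1
cost_per_chat_usd = 0.03

monthly_cost = users_per_month * chats_per_user * cost_per_chat_usd
print(f"~${monthly_cost:,.0f} per month")  # ~$3,000,000 even at one chat per user
```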

    The biggest obstacle to ChatGPT Enterprise’s development was figuring out how to prioritize features, Lightcap told CNBC. 

    Out of all the things shipping in the next couple of months, he said, “the prioritization of how you pulled forward those things based on how people are using the product — and what people really want and what’s empowering — was the topic of a lot of debate, I would say, on the team.” 

    One concrete example is Code Interpreter, a ChatGPT Plus feature that has since been renamed to Advanced Data Analysis. Lightcap said that the team questioned whether the feature was a priority for ChatGPT Enterprise and that it “sat stack-ranked in a list with a bunch of other things that we think are kind of equally or more exciting,” but companies’ feedback caused them to prioritize offering it sooner rather than later. 

    OpenAI plans to onboard “as many enterprises as we can over the next few weeks,” per the company’s blog post.

    Source link

  • 3 Ways You Can Actually Use AI in Your Business (and Why You Should Still Be Careful With It) | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    The sky appears to be the limit for the ways entrepreneurs and CEOs can incorporate generative AI tools into their workflows. The one thing you can’t afford to do with generative AI is ignore it.

    I recently hosted a workshop for 30 early-stage CEOs to discuss ways to fuse generative AI into their business strategies. Here is some of the intel I shared, plus how you can give yourself an edge by responsibly and effectively incorporating this technology into your startup or business.

    Entrepreneurs who use generative AI wisely can also weave it strategically into their companies’ narratives and storytelling, which in turn can earn them higher valuations from investors. Here’s how.

    Related: How AI Is Becoming a Game-Changer in Startup Fundraising

    1. Expand your product offerings and stay competitive

    I suggest reading up on Microsoft’s collaboration with OpenAI. Microsoft jumped to incorporate ChatGPT’s technology into Bing and its other products. Now, users can work more efficiently in PowerPoint and the rest of the Office suite. Google responded by exploring the use of generative AI to expand its search capabilities.

    Software development is another key area getting gen AI attention. Gen AI is helping developers code more efficiently, predicting the next lines of code based on code already written and responding to prompts. There’s a spotlight on generative AI models, such as large language models (LLMs), that can craft text based on a user’s input.
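
    For a sense of what LLM-assisted code completion looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The checkpoint name is one example of a publicly available code model and is an assumption; any causal language model trained on code could be substituted.

```python
# Minimal sketch: asking a code-trained language model to predict the next lines.
# The checkpoint name is illustrative; swap in whichever code model you prefer.
from transformers import pipeline

generator = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

prompt = 'def average(values):\n    """Return the arithmetic mean of a list of numbers."""\n'
completion = generator(prompt, max_new_tokens=40, do_sample=False)[0]["generated_text"]
print(completion)
```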

    Every entrepreneur who wants to edge out the competition must find ways to apply generative AI to improve products and develop new offerings.

    My company, Verbit, hosted an internal hackathon to gamify identifying ways to incorporate generative AI. It helped to get greater buy-in and inspire ideas. Our hackathon uncovered 13 ways to employ more AI, including two that we’re commercializing.

    Consider replicating this hackathon idea or encouraging brainstorms. Run them company-wide. Instead of just involving your more obvious teams, acknowledge that generative AI has the ability to impact the roles of nearly everyone. By involving less obvious stakeholders, you’ll identify use cases for generative AI to disrupt processes you weren’t even aware of.

    Engaging your team in these ways won’t just boost morale; it will ease apprehension around the “negative” human impact of greater generative AI use. Instead, your team will be inspired by how they can apply it to expand your offerings and deliver better results.

    Related: The Secret to How Businesses Can Fully Harness the Power of AI

    2. Drive employee productivity

    AI should be seen as a gateway to making work more meaningful and efficient, not as a replacement for jobs. Using generative AI to eliminate dreaded, time-consuming tasks will keep your employees engaged and free them to focus on the more creative work they’re passionate about. Employee engagement is a metric entrepreneurs can’t overlook: highly engaged teams translate to 23% higher profitability.

    Since newer forms of AI are learning to be intuitive and interact naturally with humans, start by using AI that communicates with your teams and learns from their feedback to boost productivity. For example, generative AI has advanced the possibilities of working with chatbots. Teams can now summarize and pull data from chatbot-powered customer surveys and much more.

    3. Predict market trends more accurately

    For entrepreneurs to make informed decisions about investments, strategies and products, they must understand market trends. Generative AI is helping entrepreneurs gather higher-quality data than earlier forms of AI could.

    AI is excellent at analyzing large sets of data, but generative AI can gather insights from unstructured data, like social media posts, audio files, text and other content. To be successful, entrepreneurs must pull in this additional information that generative AI makes accessible.

    Generative AI can also create simulations to determine the impact of hypothetical “what-if” situations. Researchers at the University of Pennsylvania used generative AI to simulate the spread of COVID-19 and the efficacy of different responses. Audi used simulations to model manufacturing strategies and reduce its assembly line cycle time by 30%.

    As an entrepreneur, you can benefit greatly by using generative AI for market simulation. If you don’t use these tools, you’ll be operating with less complete, lower-quality information than your competition.

    Related: How to Protect and Improve Your Business with AI During Challenging Times

    Know where to draw the line

    There are dangers in relying too heavily on gen AI. AI produces results from the data it is given; if that data is flawed, the outputs will be too. This issue is already appearing in recruitment and hiring practices. Amazon canceled an AI-powered recruitment program after it proved to be biased against women. If you lean too heavily on AI alone, you could find yourself violating employment laws.

    You’ll need to be aware of ethical concerns to avoid instances of sharing sensitive information or violating data privacy laws as well. Generative AI can also hallucinate, meaning that it might give entirely wrong information, but package it in convincing language and reassuring confidence. Turning over too much responsibility to a chatbot could cause more harm than good.

    For example, experts are warning against relying too much on tools like ChatGPT for search engine optimization (SEO). Google may decide to penalize companies that publish automated content, undermining their past SEO work. Make sure that your team has a process in place to check the outputs of the AI it’s using.

    There was the case of the “ChatGPT lawyer,” who used the tool to draft a motion and ended up citing fake cases in court. The firm faced a fine and public humiliation, but in fields like health care, the consequences of faulty information could be worse and more dangerous.

    Smart entrepreneurs will understand how to intelligently and strategically use generative AI, but they’ll know where to draw the line. My advice is to be as savvy about the technology you employ as you are about the people you hire.

    However, don’t delay. Challenge your teams to use generative AI to work productively. Decide on a few areas of focus to implement it now, whether it’s personalized content creation, marketing efforts, software development, customer operations or data analysis. Trust me, your competitors are already doing so.

    Tom Livne

    Source link

  • Nvidia stock jumps 7% after Morgan Stanley says chipmaker benefits from ‘massive shift’ in A.I.

    Jen-Hsun Huang, CEO, Nvidia

    David Paul Morris | Bloomberg | Getty Images

    As long as companies are interested in generative artificial intelligence, Nvidia stands to benefit.

    Nvidia shares closed up more than 7% on Monday, underscoring how investors believe the company’s graphics processing units, or GPUs, will continue to be the most popular computer chips used to power massive large language models that can generate compelling text.

    Morgan Stanley released an analyst note Monday reiterating that Nvidia continues to be a “Top Pick” coming off the company’s most recent earnings report, in which it offered a better-than-expected forecast.

    “We think the recent selloff is a good entry point, as despite supply constraints, we still expect a meaningful beat and raise quarter — and, more importantly, strong visibility over the next 3-4 quarters,” the Morgan Stanley analysts wrote. “Nvidia remains our Top Pick, with a backdrop of the massive shift in spending towards AI, and a fairly exceptional supply demand imbalance that should persist for the next several quarters.”

    Nvidia, now valued at over $1 trillion, bested all other companies during this year’s tech rebound following a market slump in 2022, with the chip giant’s shares up nearly 200% so far in 2023.

    Although Nvidia shares dropped a little more than 10% this month, partly attributed to supply constraints and ongoing concerns over the broader economy and whether it will experience a significant rebound, the Morgan Stanley analysts predict that Nvidia will benefit in the long run.

    “The bottom line is that this is a very positive situation, October numbers are entirely gated by supply, and the upper end of the buy side consensus has been reined in,” the analysts wrote. “We see numbers are going up at least enough that this stock will trade at P/Es more similar to the upper end of semis, with material upside still ahead.”

    Nvidia’s stock has tripled this year. The company will announce second-quarter results Aug. 23.

    Source link

  • How Amazon is racing to catch Microsoft and Google in generative A.I. with custom AWS chips

    In an unmarked office building in Austin, Texas, two small rooms contain a handful of Amazon employees designing two types of microchips for training and accelerating generative AI. These custom chips, Inferentia and Trainium, offer AWS customers an alternative to training their large language models on Nvidia GPUs, which have been getting difficult and expensive to procure. 

    “The entire world would like more chips for doing generative AI, whether that’s GPUs or whether that’s Amazon’s own chips that we’re designing,” Amazon Web Services CEO Adam Selipsky told CNBC in an interview in June. “I think that we’re in a better position than anybody else on Earth to supply the capacity that our customers collectively are going to want.”

    Yet others have acted faster, and invested more, to capture business from the generative AI boom. When OpenAI launched ChatGPT in November, Microsoft gained widespread attention for hosting the viral chatbot, and investing a reported $13 billion in OpenAI. It was quick to add the generative AI models to its own products, incorporating them into Bing in February. 

    That same month, Google launched its own large language model, Bard, followed by a $300 million investment in OpenAI rival Anthropic. 

    It wasn’t until April that Amazon announced its own family of large language models, called Titan, along with a service called Bedrock to help developers enhance software using generative AI.

    “Amazon is not used to chasing markets. Amazon is used to creating markets. And I think for the first time in a long time, they are finding themselves on the back foot and they are working to play catch up,” said Chirag Dekate, VP analyst at Gartner.

    Meta also recently released its own LLM, Llama 2. The open-source ChatGPT rival is now available for people to test on Microsoft‘s Azure public cloud.

    Chips as ‘true differentiation’

    In the long run, Dekate said, Amazon’s custom silicon could give it an edge in generative AI. 

    “I think the true differentiation is the technical capabilities that they’re bringing to bear,” he said. “Because guess what? Microsoft does not have Trainium or Inferentia.”

    AWS quietly started production of custom silicon back in 2013 with a piece of specialized hardware called Nitro. It’s now the highest-volume AWS chip. Amazon told CNBC there is at least one in every AWS server, with a total of more than 20 million in use. 

    AWS started production of custom silicon back in 2013 with this piece of specialized hardware called Nitro. Amazon told CNBC in August that Nitro is now the highest volume AWS chip, with at least one in every AWS server and a total of more than 20 million in use.

    Courtesy Amazon

    In 2015, Amazon bought Israeli chip startup Annapurna Labs. Then in 2018, Amazon launched its Arm-based server chip, Graviton, a rival to x86 CPUs from giants like AMD and Intel.

    “Probably high single-digit to maybe 10% of total server sales are Arm, and a good chunk of those are going to be Amazon. So on the CPU side, they’ve done quite well,” said Stacy Rasgon, senior analyst at Bernstein Research.

    Also in 2018, Amazon launched its AI-focused chips. That came two years after Google announced its first Tensor Processing Unit, or TPU. Microsoft has yet to announce the Athena AI chip it’s been working on, reportedly in partnership with AMD.

    CNBC got a behind-the-scenes tour of Amazon’s chip lab in Austin, Texas, where Trainium and Inferentia are developed and tested. VP of product Matt Wood explained what both chips are for.

    “Machine learning breaks down into these two different stages. So you train the machine learning models and then you run inference against those trained models,” Wood said. “Trainium provides about 50% improvement in terms of price performance relative to any other way of training machine learning models on AWS.”
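
    Wood’s two stages map onto any machine-learning workflow. The sketch below illustrates them with scikit-learn purely for illustration; large language models follow the same train-then-infer split at vastly larger scale, which is the split Trainium and Inferentia are built around.

```python
# Minimal sketch of the two stages Wood describes: train a model, then run
# inference against it. scikit-learn is used only for illustration.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)              # training stage (what Trainium targets)

predictions = model.predict(X_test)      # inference stage (what Inferentia targets)
print(predictions[:5])
```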

    Trainium first came on the market in 2021, following the 2019 release of Inferentia, which is now on its second generation.

    Inferentia allows customers “to deliver very, very low-cost, high-throughput, low-latency, machine-learning inference, which is all the predictions of when you type in a prompt into your generative AI model, that’s where all that gets processed to give you the response,” Wood said.

    For now, however, Nvidia’s GPUs are still king when it comes to training models. In July, AWS launched new AI acceleration hardware powered by Nvidia H100s. 

    “Nvidia chips have a massive software ecosystem that’s been built up around them over the last like 15 years that nobody else has,” Rasgon said. “The big winner from AI right now is Nvidia.”

    Amazon’s custom chips, from left to right, Inferentia, Trainium and Graviton are shown at Amazon’s Seattle headquarters on July 13, 2023.

    Joseph Huerta

    Leveraging cloud dominance

    AWS’ cloud dominance, however, is a big differentiator for Amazon.

    “Amazon does not need to win headlines. Amazon already has a really strong cloud install base. All they need to do is to figure out how to enable their existing customers to expand into value creation motions using generative AI,” Dekate said.

    Millions of AWS customers choosing among Amazon, Google and Microsoft for generative AI may be drawn to Amazon because they’re already familiar with it, running other applications and storing their data there.

    “It’s a question of velocity. How quickly can these companies move to develop these generative AI applications is driven by starting first on the data they have in AWS and using compute and machine learning tools that we provide,” explained Mai-Lan Tomsen Bukovec, VP of technology at AWS.

    AWS is the world’s biggest cloud computing provider, with 40% of the market share in 2022, according to technology industry researcher Gartner. Although operating income has been down year-over-year for three quarters in a row, AWS still accounted for 70% of Amazon’s overall $7.7 billion operating profit in the second quarter. AWS’ operating margins have historically been far wider than those at Google Cloud.

    AWS also has a growing portfolio of developer tools focused on generative AI.

    “Let’s rewind the clock even before ChatGPT. It’s not like after that happened, suddenly we hurried and came up with a plan because you can’t engineer a chip in that quick a time, let alone you can’t build a Bedrock service in a matter of 2 to 3 months,” said Swami Sivasubramanian, AWS’ VP of database, analytics and machine learning.

    Bedrock gives AWS customers access to large language models made by Anthropic, Stability AI, AI21 Labs and Amazon’s own Titan.

    “We don’t believe that one model is going to rule the world, and we want our customers to have the state-of-the-art models from multiple providers because they are going to pick the right tool for the right job,” Sivasubramanian said.

    An Amazon employee works on custom AI chips, in a jacket branded with AWS’ chip Inferentia, at the AWS chip lab in Austin, Texas, on July 25, 2023.

    Katie Tarasov

    One of Amazon’s newest AI offerings is AWS HealthScribe, a service unveiled in July to help doctors draft patient visit summaries using generative AI. Amazon also has SageMaker, a machine learning hub that offers algorithms, models and more. 

    Another big tool is coding companion CodeWhisperer, which Amazon said has enabled developers to complete tasks 57% faster on average. Last year, Microsoft also reported productivity boosts from its coding companion, GitHub Copilot. 

    In June, AWS announced a $100 million generative AI innovation “center.” 

    “We have so many customers who are saying, ‘I want to do generative AI,’ but they don’t necessarily know what that means for them in the context of their own businesses. And so we’re going to bring in solutions architects and engineers and strategists and data scientists to work with them one on one,” AWS CEO Selipsky said.

    Although so far AWS has focused largely on tools instead of building a competitor to ChatGPT, a recently leaked internal email shows Amazon CEO Andy Jassy is directly overseeing a new central team building out expansive large language models, too.

    In the second-quarter earnings call, Jassy said a “very significant amount” of AWS business is now driven by AI and more than 20 machine learning services it offers. Some examples of customers include Philips, 3M, Old Mutual and HSBC. 

    The explosive growth in AI has come with a flurry of security concerns from companies worried that employees are putting proprietary information into the training data used by public large language models.

    “I can’t tell you how many Fortune 500 companies I’ve talked to who have banned ChatGPT. So with our approach to generative AI and our Bedrock service, anything you do, any model you use through Bedrock will be in your own isolated virtual private cloud environment. It’ll be encrypted, it’ll have the same AWS access controls,” Selipsky said.

    For now, Amazon is only accelerating its push into generative AI, telling CNBC that “over 100,000” customers are using machine learning on AWS today. Although that’s a small percentage of AWS’s millions of customers, analysts say that could change.

    “What we are not seeing is enterprises saying, ‘Oh, wait a minute, Microsoft is so ahead in generative AI, let’s just go out and let’s switch our infrastructure strategies, migrate everything to Microsoft,’” Dekate said. “If you’re already an Amazon customer, chances are you’re likely going to explore Amazon ecosystems quite extensively.”

    — CNBC’s Jordan Novet contributed to this report.

    Source link

  • Paper exams, chatbot bans: Colleges seek to ‘ChatGPT-proof’ assignments

    When philosophy professor Darren Hick came across another case of cheating in his classroom at Furman University last semester, he posted an update to his followers on social media: “Aaaaand, I’ve caught my second ChatGPT plagiarist.”

    Friends and colleagues responded, some with wide-eyed emojis. Others expressed surprise.

    “Only 2?! I’ve caught dozens,” said Timothy Main, a writing professor at Conestoga College in Canada. “We’re in full-on crisis mode.”

    Practically overnight, ChatGPT and other artificial intelligence chatbots have become the go-to source for cheating in college.

    Now, educators are rethinking how they’ll teach courses this fall, from Writing 101 to computer science. They say they want to embrace the technology’s potential to teach and learn in new ways, but when it comes to assessing students, they see a need to “ChatGPT-proof” test questions and assignments.

    For some instructors that means a return to paper exams, after years of digital-only tests. Some professors will be requiring students to show editing history and drafts to prove their thought process. Other instructors are less concerned. Some students have always found ways to cheat, they say, and this is just the latest option.

    An explosion of generative AI chatbots, including ChatGPT, which launched in November, has raised new questions for academics dedicated to making sure not only that students can get the right answer, but also that they understand how to do the work. Educators say there is agreement at least on some of the most pressing challenges.

    — Are AI detectors reliable? Not yet, says Stephanie Laggini Fiore, associate vice provost at Temple University. This summer, Fiore was part of a team at Temple that tested the detector used by Turnitin, a popular plagiarism detection service, and found it to be “incredibly inaccurate.” It worked best at confirming human work, she said, but was spotty in identifying chatbot-generated text and least reliable with hybrid work.

    — Will students get falsely accused of using artificial intelligence platforms to cheat? Absolutely. In one case last semester, a Texas A&M professor wrongly accused an entire class of using ChatGPT on final assignments. Most of the class was subsequently exonerated.

    — So, how can educators be certain if a student has used an AI-powered chatbot dishonestly? It’s nearly impossible unless a student confesses, as both of Hick’s students did. Unlike old-school plagiarism where text matches the source it is lifted from, AI-generated text is unique each time.

    In some cases, the cheating is obvious, says Main, the writing professor, who has had students turn in assignments that were clearly cut-and-paste jobs. “I had answers come in that said, ‘I am just an AI language model, I don’t have an opinion on that,’” he said.

    In his first-year required writing class last semester, Main logged 57 academic integrity issues, an explosion of academic dishonesty compared to about eight cases in each of the two prior semesters. AI cheating accounted for about half of them.

    This fall, Main and colleagues are overhauling the school’s required freshman writing course. Writing assignments will be more personalized to encourage students to write about their own experiences, opinions and perspectives. All assignments and the course syllabi will have strict rules forbidding the use of artificial intelligence.

    College administrators have been encouraging instructors to make the ground rules clear.

    Many institutions are leaving the decision to use chatbots or not in the classroom to instructors, said Hironao Okahana, the head of the Education Futures Lab at the American Council on Education.

    At Michigan State University, faculty are being given “a small library of statements” to choose from and modify as they see fit on syllabi, said Bill Hart-Davidson, associate dean in MSU’s College of Arts and Letters who is leading AI workshops for faculty to help shape new assignments and policy.

    “Asking students questions like, ‘Tell me in three sentences what is the Krebs cycle in chemistry?’ That’s not going to work anymore, because ChatGPT will spit out a perfectly fine answer to that question,” said Hart-Davidson, who suggests asking questions differently. For example, give a description that has errors and ask students to point them out.

    Evidence is piling up that chatbots have changed study habits and how students seek information.

    Chegg Inc., an online company that offers homework help and has been cited in numerous cheating cases, said in May its shares had tumbled nearly 50% in the first quarter of 2023 because of a spike in student usage of ChatGPT, according to Chegg CEO Dan Rosensweig. He said students who normally pay for Chegg’s service were now using the AI platform for free.

    At Temple this spring, the use of research tools like library databases declined notably following the emergence of chatbots, said Joe Lucia, the university’s dean of libraries.

    “It seemed like students were seeing this as a quick way of finding information that didn’t require the effort or time that it takes to go to a dedicated resource and work with it,” he said.

    Shortcuts like that are a concern partly because chatbots are prone to making things up, a glitch known as “hallucination.” Developers say they are working to make their platforms more reliable but it’s unclear when or if that will happen. Educators also worry about what students lose by skipping steps.

    “There is going to be a big shift back to paper-based tests,” said Bonnie MacKellar, a computer science professor at St. John’s University in New York City. The discipline already had a “massive plagiarism problem” with students borrowing computer code from friends or cribbing it from the internet, said MacKellar. She worries intro-level students taking AI shortcuts are cheating themselves out of skills needed for upper-level classes.

    “I hear colleagues in humanities courses saying the same thing: It’s back to the blue books,” MacKellar said. In addition to requiring students in her intro courses to handwrite their code, the paper exams will count for a higher percentage of the grade this fall, she said.

    Ronan Takizawa, a sophomore at Colorado College, has never heard of a blue book. As a computer science major, that feels to him like going backward, but he agrees it would force students to learn the material. “Most students aren’t disciplined enough to not use ChatGPT,” he said. Paper exams “would really force you to understand and learn the concepts.”

    Takizawa said students are at times confused about when it’s OK to use AI and when it’s cheating. Using ChatGPT to help with certain homework like summarizing reading seems no different from going to YouTube or other sites that students have used for years, he said.

    Other students say the arrival of ChatGPT has made them paranoid about being accused of cheating when they haven’t.

    Arizona State University sophomore Nathan LeVang says he double-checks all assignments now by running them through an AI detector.

    For one 2,000-word essay, the detector flagged certain paragraphs as “22% written by a human, with mostly AI voicing.”

    “I was like that is definitely not true because I just sat here and wrote it word for word,” LeVang said. But he rewrote those paragraphs anyway. “If it takes me 10 minutes after I write my essay to make sure everything checks out, that’s fine. It’s extra work, but I think that’s the reality we live in.”

    ___

    The Associated Press education team receives support from the Carnegie Corporation of New York. The AP is solely responsible for all content.

    Source link

  • Paper exams, chatbot bans: Colleges seek to ‘ChatGPT-proof’ assignments

    Paper exams, chatbot bans: Colleges seek to ‘ChatGPT-proof’ assignments

    When philosophy professor Darren Hick came across another case of cheating in his classroom at Furman University last semester, he posted an update to his followers on social media: “Aaaaand, I’ve caught my second ChatGPT plagiarist.”

    Friends and colleagues responded, some with wide-eyed emojis. Others expressed surprise.

    “Only 2?! I’ve caught dozens,” said Timothy Main, a writing professor at Conestoga College in Canada. “We’re in full-on crisis mode.”

    Practically overnight, ChatGPT and other artificial intelligence chatbots have become the go-to source for cheating in college.

    Now, educators are rethinking how they’ll teach courses this fall from Writing 101 to computer science. Educators say they want to embrace the technology’s potential to teach and learn in new ways, but when it comes to assessing students, they see a need to “ChatGPT-proof” test questions and assignments.

    For some instructors that means a return to paper exams, after years of digital-only tests. Some professors will be requiring students to show editing history and drafts to prove their thought process. Other instructors are less concerned. Some students have always found ways to cheat, they say, and this is just the latest option.

    An explosion of AI-generated chatbots including ChatGPT, which launched in November, has raised new questions for academics dedicated to making sure not only that students can get the right answer, but also understand how to do the work. Educators say there is agreement at least on some of the most pressing challenges.

    — Are AI detectors reliable? Not yet, says Stephanie Laggini Fiore, associate vice provost at Temple University. This summer, Fiore was part of a team at Temple that tested the detector used by Turnitin, a popular plagiarism detection service, and found it to be “incredibly inaccurate.” It worked best at confirming human work, she said, but was spotty in identifying chatbot-generated text and least reliable with hybrid work.

    — Will students get falsely accused of using artificial intelligence platforms to cheat? Absolutely. In one case last semester, a Texas A&M professor wrongly accused an entire class of using ChatGPT on final assignments. Most of the class was subsequently exonerated.

    — So, how can educators be certain if a student has used an AI-powered chatbot dishonestly? It’s nearly impossible unless a student confesses, as both of Hicks’ students did. Unlike old-school plagiarism where text matches the source it is lifted from, AI-generated text is unique each time.

    In some cases, the cheating is obvious, says Main, the writing professor, who has had students turn in assignments that were clearly cut-and-paste jobs. “I had answers come in that said, ‘I am just an AI language model, I don’t have an opinion on that,’” he said.

    In his first-year required writing class last semester, Main logged 57 academic integrity issues, an explosion of academic dishonesty compared to about eight cases in each of the two prior semesters. AI cheating accounted for about half of them.

    This fall, Main and colleagues are overhauling the school’s required freshman writing course. Writing assignments will be more personalized to encourage students to write about their own experiences, opinions and perspectives. All assignments and the course syllabi will have strict rules forbidding the use of artificial intelligence.

    College administrators have been encouraging instructors to make the ground rules clear.

    Many institutions are leaving the decision to use chatbots or not in the classroom to instructors, said Hiroano Okahana, the head of the Education Futures Lab at the American Council on Education.

    At Michigan State University, faculty are being given “a small library of statements” to choose from and modify as they see fit on syllabi, said Bill Hart-Davidson, associate dean in MSU’s College of Arts and Letters who is leading AI workshops for faculty to help shape new assignments and policy.

    “Asking students questions like, ‘Tell me in three sentences what is the Krebs cycle in chemistry?’ That’s not going to work anymore, because ChatGPT will spit out a perfectly fine answer to that question,” said Hart-Davidson, who suggests asking questions differently. For example, give a description that has errors and ask students to point them out.

    Evidence is piling up that chatbots have changed study habits and how students seek information.

    Chegg Inc., an online company that offers homework help and has been cited in numerous cheating cases, said in May its shares had tumbled nearly 50% in the first quarter of 2023 because of a spike in student usage of ChatGPT, according to Chegg CEO Dan Rosensweig. He said students who normally pay for Chegg’s service were now using the AI platform for free.

    At Temple this spring, the use of research tools like library databases declined notably following the emergence of chatbots, said Joe Lucia, the university’s dean of libraries.

    “It seemed like students were seeing this as a quick way of finding information that didn’t require the effort or time that it takes to go to a dedicated resource and work with it,” he said.

    Shortcuts like that are a concern partly because chatbots are prone to making things up, a glitch known as “hallucination.” Developers say they are working to make their platforms more reliable but it’s unclear when or if that will happen. Educators also worry about what students lose by skipping steps.

    “There is going to be a big shift back to paper-based tests,” said Bonnie MacKellar, a computer science professor at St. John’s University in New York City. The discipline already had a “massive plagiarism problem” with students borrowing computer code from friends or cribbing it from the internet, said MacKellar. She worries intro-level students taking AI shortcuts are cheating themselves out of skills needed for upper-level classes.

    “I hear colleagues in humanities courses saying the same thing: It’s back to the blue books,” MacKellar said. In addition to requiring students in her intro courses to handwrite their code, the paper exams will count for a higher percentage of the grade this fall, she said.

    Ronan Takizawa, a sophomore at Colorado College, has never heard of a blue book. As a computer science major, that feels to him like going backward, but he agrees it would force students to learn the material. “Most students aren’t disciplined enough to not use ChatGPT,” he said. Paper exams “would really force you to understand and learn the concepts.”

    Takizawa said students are at times confused about when it’s OK to use AI and when it’s cheating. Using ChatGPT to help with certain homework like summarizing reading seems no different from going to YouTube or other sites that students have used for years, he said.

    Other students say the arrival of ChatGPT has made them paranoid about being accused of cheating when they haven’t.

    Arizona State University sophomore Nathan LeVang says he doublechecks all assignments now by running them through an AI detector.

    For one 2,000-word essay, the detector flagged certain paragraphs as “22% written by a human, with mostly AI voicing.”

    “I was like that is definitely not true because I just sat here and wrote it word for word,” LeVang said. But he rewrote those paragraphs anyway. “If it takes me 10 minutes after I write my essay to make sure everything checks out, that’s fine. It’s extra work, but I think that’s the reality we live in.”

    ___

    The Associated Press education team receives support from the Carnegie Corporation of New York. The AP is solely responsible for all content.

    Source link

  • Paper exams, chatbot bans: Colleges seek to ‘ChatGPT-proof’ assignments

    Paper exams, chatbot bans: Colleges seek to ‘ChatGPT-proof’ assignments

    When philosophy professor Darren Hick came across another case of cheating in his classroom at Furman University last semester, he posted an update to his followers on social media: “Aaaaand, I’ve caught my second ChatGPT plagiarist.”

    Friends and colleagues responded, some with wide-eyed emojis. Others expressed surprise.

    “Only 2?! I’ve caught dozens,” said Timothy Main, a writing professor at Conestoga College in Canada. “We’re in full-on crisis mode.”

    Practically overnight, ChatGPT and other artificial intelligence chatbots have become the go-to source for cheating in college.

    Now, educators are rethinking how they’ll teach courses this fall from Writing 101 to computer science. Educators say they want to embrace the technology’s potential to teach and learn in new ways, but when it comes to assessing students, they see a need to “ChatGPT-proof” test questions and assignments.

    For some instructors that means a return to paper exams, after years of digital-only tests. Some professors will be requiring students to show editing history and drafts to prove their thought process. Other instructors are less concerned. Some students have always found ways to cheat, they say, and this is just the latest option.

    An explosion of AI-generated chatbots including ChatGPT, which launched in November, has raised new questions for academics dedicated to making sure not only that students can get the right answer, but also understand how to do the work. Educators say there is agreement at least on some of the most pressing challenges.

    — Are AI detectors reliable? Not yet, says Stephanie Laggini Fiore, associate vice provost at Temple University. This summer, Fiore was part of a team at Temple that tested the detector used by Turnitin, a popular plagiarism detection service, and found it to be “incredibly inaccurate.” It worked best at confirming human work, she said, but was spotty in identifying chatbot-generated text and least reliable with hybrid work.

    — Will students get falsely accused of using artificial intelligence platforms to cheat? Absolutely. In one case last semester, a Texas A&M professor wrongly accused an entire class of using ChatGPT on final assignments. Most of the class was subsequently exonerated.

    — So, how can educators be certain if a student has used an AI-powered chatbot dishonestly? It’s nearly impossible unless a student confesses, as both of Hicks’ students did. Unlike old-school plagiarism where text matches the source it is lifted from, AI-generated text is unique each time.

    In some cases, the cheating is obvious, says Main, the writing professor, who has had students turn in assignments that were clearly cut-and-paste jobs. “I had answers come in that said, ‘I am just an AI language model, I don’t have an opinion on that,’” he said.

    In his first-year required writing class last semester, Main logged 57 academic integrity issues, an explosion of academic dishonesty compared to about eight cases in each of the two prior semesters. AI cheating accounted for about half of them.

    This fall, Main and colleagues are overhauling the school’s required freshman writing course. Writing assignments will be more personalized to encourage students to write about their own experiences, opinions and perspectives. All assignments and the course syllabi will have strict rules forbidding the use of artificial intelligence.

    College administrators have been encouraging instructors to make the ground rules clear.

    Many institutions are leaving the decision to use chatbots or not in the classroom to instructors, said Hiroano Okahana, the head of the Education Futures Lab at the American Council on Education.

    At Michigan State University, faculty are being given “a small library of statements” to choose from and modify as they see fit on syllabi, said Bill Hart-Davidson, associate dean in MSU’s College of Arts and Letters who is leading AI workshops for faculty to help shape new assignments and policy.

    “Asking students questions like, ‘Tell me in three sentences what is the Krebs cycle in chemistry?’ That’s not going to work anymore, because ChatGPT will spit out a perfectly fine answer to that question,” said Hart-Davidson, who suggests asking questions differently. For example, give a description that has errors and ask students to point them out.

    Evidence is piling up that chatbots have changed study habits and how students seek information.

    Chegg Inc., an online company that offers homework help and has been cited in numerous cheating cases, said in May its shares had tumbled nearly 50% in the first quarter of 2023 because of a spike in student usage of ChatGPT, according to Chegg CEO Dan Rosensweig. He said students who normally pay for Chegg’s service were now using the AI platform for free.

    At Temple this spring, the use of research tools like library databases declined notably following the emergence of chatbots, said Joe Lucia, the university’s dean of libraries.

    “It seemed like students were seeing this as a quick way of finding information that didn’t require the effort or time that it takes to go to a dedicated resource and work with it,” he said.

    Shortcuts like that are a concern partly because chatbots are prone to making things up, a glitch known as “hallucination.” Developers say they are working to make their platforms more reliable but it’s unclear when or if that will happen. Educators also worry about what students lose by skipping steps.

    “There is going to be a big shift back to paper-based tests,” said Bonnie MacKellar, a computer science professor at St. John’s University in New York City. The discipline already had a “massive plagiarism problem” with students borrowing computer code from friends or cribbing it from the internet, said MacKellar. She worries intro-level students taking AI shortcuts are cheating themselves out of skills needed for upper-level classes.

    “I hear colleagues in humanities courses saying the same thing: It’s back to the blue books,” MacKellar said. In addition to requiring students in her intro courses to handwrite their code, the paper exams will count for a higher percentage of the grade this fall, she said.

    Ronan Takizawa, a sophomore at Colorado College, has never heard of a blue book. As a computer science major, that feels to him like going backward, but he agrees it would force students to learn the material. “Most students aren’t disciplined enough to not use ChatGPT,” he said. Paper exams “would really force you to understand and learn the concepts.”

    Takizawa said students are at times confused about when it’s OK to use AI and when it’s cheating. Using ChatGPT to help with certain homework like summarizing reading seems no different from going to YouTube or other sites that students have used for years, he said.

    Other students say the arrival of ChatGPT has made them paranoid about being accused of cheating when they haven’t.

Arizona State University sophomore Nathan LeVang says he double-checks all assignments now by running them through an AI detector.

    For one 2,000-word essay, the detector flagged certain paragraphs as “22% written by a human, with mostly AI voicing.”

    “I was like that is definitely not true because I just sat here and wrote it word for word,” LeVang said. But he rewrote those paragraphs anyway. “If it takes me 10 minutes after I write my essay to make sure everything checks out, that’s fine. It’s extra work, but I think that’s the reality we live in.”

    ___

    The Associated Press education team receives support from the Carnegie Corporation of New York. The AP is solely responsible for all content.

    Source link

  • FACT FOCUS: Zoom says it isn’t training AI on calls without consent. But other data is fair game

    FACT FOCUS: Zoom says it isn’t training AI on calls without consent. But other data is fair game

    An update to Zoom’s terms of service is raising alarm bells on social media, with users claiming it reveals the videoconferencing company is now tapping their online doctor visits and virtual happy hours to train artificial intelligence models.

    “Zoom terms of service now require you to allow AI to train on ALL your data — audio, facial recognition, private conversations — unconditionally and irrevocably, with no opt out,” read one widely-shared tweet this week that has since been deleted. “Don’t try to negotiate with our new overlords.”

    The company quickly responded with a blog post on Monday stressing that it “will not use audio, video, or chat customer content to train our artificial intelligence models without your consent,” and adding a line to the terms to make this clearer.

    Online privacy experts say that this policy is now accurately reflected in the document. However, the terms do still allow Zoom to train AI on other data, such as how customers behave — and they question how much choice some meeting participants will have to opt out if, say, their boss decides otherwise.

    Here’s a closer look at the facts.

    CLAIM: Zoom’s terms of service give the company permission to use all customer data, including private conversations, for training artificial intelligence, with no ability to opt out.

    THE FACTS: That is not accurate, at least now that Zoom has added language to the terms to make its policy clearer, experts say.

    The current terms would not allow the company to tap user-generated content like video and chat for AI training without a customer opting in. However, once a meeting host agrees, other participants would have to leave if they don’t want to consent. The terms also allow Zoom to use other data, including information about user behavior, without additional permission.

    “The face of these terms of service does now assure the user that Zoom is not going to use their customer content for the purpose of training artificial intelligence models without their consent,” John Davisson, director of litigation and senior counsel at the Electronic Privacy Information Center, told The Associated Press.

    At issue is language Zoom added to its terms in March. The document differentiates between two types of data: “service generated data,” such as what features customers use and what part of the world they are in, and “customer content,” which is the data created by users themselves, such as audio or chat transcripts.

The terms state that service-generated data can be used for “machine learning or artificial intelligence (including the purposes of training and tuning algorithms and models).” Zoom’s blog post says the company considers such data “to be our data,” and experts confirm this language would allow the company to use this data for AI training without obtaining additional consent.

    Separately, the terms say that customer content may also be used “for the purpose” of machine learning or AI.

    After this was highlighted on social media this week, the company clarified in its post that this refers to new generative AI features that users must agree to, which create things like automated meeting summaries for customers. Zoom said in a statement to the AP that in addition to enabling the features for themselves, users must separately consent to sharing this data with the company.

    Experts said that the language in the March update was wide-reaching and could have opened the door for the company to use that data without additional permission if it wanted to.

    But Zoom added a more explicit caveat to the terms on Monday, saying: “Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.”

    With this language, Davisson said that using such data to train AI without a user consenting would now constitute a violation of the terms on Zoom’s part, opening the company up to litigation.

    However, experts said the way this function works could still pose problems for some participants in Zoom calls if the host opts into the generative AI features.

    Zoom says that if a meeting organizer decides to use the meeting summary feature, participants are sent a notification alerting them that an AI feature has been enabled and that their data may be shared for AI training. They are prompted to either proceed with the meeting or to leave.

    Although this in theory offers all participants the ability to control how their data is used, it may not be possible for someone to opt out of a meeting or forgo Zoom altogether if they disagree, said Katharine Trendacosta, director of policy and advocacy at the Electronic Frontier Foundation.

    “If the administrator consents and it’s your boss at your work who requires you to use Zoom, how is that really consent?” she asked.

    Davisson has similar concerns.

    “That sort of just-in-time acquisition of ‘consent,’ is not real consent,” he said. “And so that’s a pretty misleading caveat from them to introduce. It claims to rest on consent, but what it’s resting on really is just sort of what the meeting organizer or system administrator has decided to do.”

    ___

    This is part of AP’s effort to address widely shared misinformation, including work with outside companies and organizations to add factual context to misleading content that is circulating online. Learn more about fact-checking at AP.

    Source link

  • D&D fans turned off by A.I.-generated art spurred a Hasbro unit into banning it: ‘We are revising our process’

    D&D fans turned off by A.I.-generated art spurred a Hasbro unit into banning it: ‘We are revising our process’

    The Dungeons & Dragons role-playing game franchise says it won’t allow artists to use artificial intelligence technology to draw its cast of sorcerers, druids and other characters and scenery.

    D&D art is supposed to be fanciful. But at least one ax-wielding giant seemed too weird for some fans, leading them to take to social media to question if it was human-made.

    Hasbro-owned D&D Beyond, which makes online tools and other companion content for the franchise, said it didn’t know until Saturday that an illustrator it has worked with for nearly a decade used AI to create commissioned artwork for an upcoming book. The franchise, run by the Hasbro subsidiary Wizards of the Coast, said in a statement that it has talked to that artist and is clarifying its rules.

    “He will not use AI for Wizards’ work moving forward,” said a post from D&D Beyond’s account on X, formerly Twitter. “We are revising our process and updating our artist guidelines to make clear that artists must refrain from using AI art generation as part of their art creation process for developing D&D.”

    Today’s AI-generated art often shows telltale glitches, such as distorted limbs, which is what caught the eye of skeptical D&D fans.

    Hasbro and Wizards of the Coast didn’t respond to requests for further comment Sunday. Hasbro bought D&D Beyond for $146.3 million last year. The Rhode Island-based toy giant has owned Wizards of the Coast for more than two decades.

The art in question is in a soon-to-be-released hardcover book of monster descriptions and lore called “Bigby Presents: Glory of the Giants.” The digital and physical version of the package is selling for $59.95 on the D&D website and is due for an Aug. 15 release.

    The use of AI tools to assist in creative work has raised copyright and labor concerns in a number of industries, helping to fuel the Hollywood strike, causing the music industry’s Recording Academy to revise its Grammy Awards protocols and leading some visual artists to sue AI companies for ingesting their work without their consent to build image-generators that anyone can use.

    Hasbro rival Mattel used AI-generated images to help come up with ideas for new Hot Wheels toy cars, though it hasn’t said if that was more than an experiment.

    Matt O’Brien, The Associated Press

    Source link

  • I Left Dream Job at Google Because It Wasn’t Innovative Enough | Entrepreneur

    I Left Dream Job at Google Because It Wasn’t Innovative Enough | Entrepreneur

    This as-told-to essay is based on an interview with Tyler Ashby, president at Agents Only Technologies.

    Google was a dream opportunity that offered me career validation, financial security and a sense of personal pride.

    Before joining Google, I spent 20 years in the customer service and business process outsourcing field. Specifically, I launched and ran customer contact centers around the globe. From the start of my career, I was always drawn to the people side of the business. This drove me to take a particular interest in people-first operating principles with an emphasis on creating technology and processes that make the humans involved more valuable.

    My time with Bain trained me to tackle customer experience and costs using customer journey mapping, service blueprints and people/process/technology analytics. As a result, I developed a strong passion for identifying and improving efficiency without compromising the end experience. I’ve launched projects across the world for notable companies such as Sprint, Dell, Epson, Citibank, Samsung, Telstra and Virgin.

    I joined Google somewhat circuitously. I originally interviewed with Fitbit, but during the hiring process, Google announced its plans to acquire the company. At that point, I had several job options, but I chose Fitbit, primarily because I would have the chance to work at Google down the road.

Initially, the experience at Google was exciting; the opportunities seemed endless. I went through onboarding with an engineering group, Noogler Training, and had unlimited access to self-paced training and experts across a wide range of skills. It seemed like everyone at the company had a “go try it” attitude, and there were no restrictions on creativity.

    Related: How Do You Win the Innovation Game? Google’s ‘X’ Marks the Spot.

    However, I soon realized that Google’s innovation, particularly regarding generative AI, was limited.

    “The customer service department had limited to no access to even basic data tools — let alone any of the leading tech being developed.”

When Rick Osterloh, senior vice president of devices and services, presented his roadmap, the priorities and core action plans laid out in it did not align with those of a tech innovator. Although our engineering groups were enabled and encouraged to experiment and develop brand-new applications for our technology through side projects, we in the customer service department had limited to no access to even basic data tools — let alone any of the leading tech being developed within Google.

    The customer service teams primarily used outdated third-party tools. Even with the perfect execution of the 18-month roadmap, we would still be technologically behind the leading industry standard. Google’s roadmap lacked any plans for generative AI in customer service.

    In 2018, Sundar Pichai, CEO of Alphabet Inc. and its subsidiary Google, showcased Google Duplex’s AI capabilities at I/O, where it successfully called a hair salon and booked an appointment using AI voice and transcription. But in 2020, when I asked about it, no one in customer service had used it or brainstormed applications for Google customer service. In 2021, I gained access to Meena, Google’s AI chatbot, but I was the sole user within my team.

    I discussed the matter with engineering mentors and learned about Google’s unwritten stance: Google doesn’t invest in customer experience through service; the company believes in making the product better. I was given several examples from YouTube and Stadia that indicated the limited influence of the services organization within Google. Because of this, I believed Google had no intention to innovate the customer experience.

Neither Pichai nor Osterloh explicitly stated that Meena, a generative AI, wouldn’t be part of customer service. Although Pichai has a positive vision for AI’s potential, there were no explicit plans. Most importantly, our roadmap didn’t pave the way for generative AI integration, which would require significant work on processes, tools, data management and tech stack integration. The focus was on cost reduction — not preparing for implementing this groundbreaking tech. I raised questions about this through Dory, Google’s internal Q&A platform, and other channels, but they went unaddressed.

    What’s more, I spoke to four different leaders within my organization — a Fitbit director, direct manager, director of scaled operations and CS tools/transformation director — about the opportunities with Meena and how we could go about working between the engineering org and the customer service org.

    They confirmed what I already knew: Google wasn’t innovative enough for me.

    “Google didn’t excite me or give me a sense that I was truly making a difference.”

    I decided to leave Google when presented with an opportunity at Agents Only, a gigCX platform addressing contact center issues by employing technology to link brands with seasoned gig agents, forming an instant virtual contact center. Centered around the agents, the company aims to harness top-notch talent to create the best customer outcomes.

    My choice between Google and Agents Only was a trade-off between long-term stability and personal satisfaction. Ultimately, I realized that my work at Google didn’t excite me or give me a sense that I was truly making a difference. When I resigned, I sent two of those colleagues I’d spoken with an email referencing the Meena predicament; I told them Agents Only was offering me the chance to use tech to make a difference.

Without the Agents Only opportunity, I would have stayed at Google due to the stability and benefits that were valuable to my family. Only the appeal of joining a startup where I could make a real difference could have pulled me away.

    Additionally, Agents Only’s people-centric philosophy, technological vision and innovation potential were major selling points. I also knew and trusted the founder, Ben Block: He had put his money where his mouth was many times in the past when it came to taking care of agents, so I believed him when he said the company was founded to make agents’ lives better.

    I signed on — and it was the right decision.

    Related: How to Tap Into Innovation, the Most Essential Part of Your Entrepreneurial Journey

    Agents Only is at the forefront of disrupting the contact center industry and raising the quality of life of the agents.

There is such a huge pool of dedicated talent that cares about doing a good job for themselves, the customer and the client. I want to enable these agents to earn a larger share of the money by paying them more, and to use our technology to remove layers of ineffective command-and-control management. We are currently operational in four countries, and I’m looking forward to adding agents in every country of the world and creating an OnDemand GigCX solution that will become a part of every company’s customer strategy.

    “Internally, we are using generative AI to coach the agents on behaviors that lead to better outcomes.”

    To date, we’ve handled over 12 million customer contacts, using 100 million data points to rate agents, and processed $250 million in revenue for our restaurant and hospitality clients. This was done at a cost 40% lower than normal operations while being able to pay the agents almost twice as much as the industry standard.

We’ve also delivered incredible flexibility in terms of staffing — going from 700 agents to over 2,000 agents for a single day to handle restaurant calls during the Super Bowl.

    Reliability is a key component too. When we allowed 100% of agents to choose their schedule, they successfully delivered on 98% of those hours, surpassing the industry average of 82%. We have achieved an impressively low attrition rate of less than 1% as our agents tend to remain on our platform once they join.

As for generative AI, the access it gives to knowledge and self-learning capabilities is highly valuable. Internally, we are using generative AI to coach the agents on behaviors that lead to better outcomes. They are incentivized for these behaviors and can have in-depth coaching conversations with the AI that allow them to ask clarifying questions, get expert examples or receive objective feedback on their skills.

    Related: Why Are So Many Companies Afraid of Generative AI? | Entrepreneur

The most exciting part of generative AI will come when it is used in tandem with other types of AI. Combining machine learning or analytic AI with generative AI will allow real-time human interaction and development that takes into account every available piece of data, alongside task automation and real-time augmentation. The result could be a learning and development loop allowing anyone to learn anything — while also putting that knowledge into practical execution via automation.

    Amanda Breen

    Source link

  • Chatbots sometimes make things up. Is AI’s hallucination problem fixable?

    Chatbots sometimes make things up. Is AI’s hallucination problem fixable?

    Spend enough time with ChatGPT and other artificial intelligence chatbots and it doesn’t take long for them to spout falsehoods.

    Described as hallucination, confabulation or just plain making things up, it’s now a problem for every business, organization and high school student trying to get a generative AI system to compose documents and get work done. Some are using it on tasks with the potential for high-stakes consequences, from psychotherapy to researching and writing legal briefs.

    “I don’t think that there’s any model today that doesn’t suffer from some hallucination,” said Daniela Amodei, co-founder and president of Anthropic, maker of the chatbot Claude 2.

    “They’re really just sort of designed to predict the next word,” Amodei said. “And so there will be some rate at which the model does that inaccurately.”

    Anthropic, ChatGPT-maker OpenAI and other major developers of AI systems known as large language models say they’re working to make them more truthful.

    How long that will take — and whether they will ever be good enough to, say, safely dole out medical advice — remains to be seen.

    “This isn’t fixable,” said Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “It’s inherent in the mismatch between the technology and the proposed use cases.”

    A lot is riding on the reliability of generative AI technology. The McKinsey Global Institute projects it will add the equivalent of $2.6 trillion to $4.4 trillion to the global economy. Chatbots are only one part of that frenzy, which also includes technology that can generate new images, video, music and computer code. Nearly all of the tools include some language component.

    Google is already pitching a news-writing AI product to news organizations, for which accuracy is paramount. The Associated Press is also exploring use of the technology as part of a partnership with OpenAI, which is paying to use part of AP’s text archive to improve its AI systems.

    In partnership with India’s hotel management institutes, computer scientist Ganesh Bagler has been working for years to get AI systems, including a ChatGPT precursor, to invent recipes for South Asian cuisines, such as novel versions of rice-based biryani. A single “hallucinated” ingredient could be the difference between a tasty and inedible meal.

    When Sam Altman, the CEO of OpenAI, visited India in June, the professor at the Indraprastha Institute of Information Technology Delhi had some pointed questions.

    “I guess hallucinations in ChatGPT are still acceptable, but when a recipe comes out hallucinating, it becomes a serious problem,” Bagler said, standing up in a crowded campus auditorium to address Altman on the New Delhi stop of the U.S. tech executive’s world tour.

    “What’s your take on it?” Bagler eventually asked.

    Altman expressed optimism, if not an outright commitment.

    “I think we will get the hallucination problem to a much, much better place,” Altman said. “I think it will take us a year and a half, two years. Something like that. But at that point we won’t still talk about these. There’s a balance between creativity and perfect accuracy, and the model will need to learn when you want one or the other.”

    But for some experts who have studied the technology, such as University of Washington linguist Bender, those improvements won’t be enough.

    Bender describes a language model as a system for “modeling the likelihood of different strings of word forms,” given some written data it’s been trained upon.

    It’s how spell checkers are able to detect when you’ve typed the wrong word. It also helps power automatic translation and transcription services, “smoothing the output to look more like typical text in the target language,” Bender said. Many people rely on a version of this technology whenever they use the “autocomplete” feature when composing text messages or emails.

    The latest crop of chatbots such as ChatGPT, Claude 2 or Google’s Bard try to take that to the next level, by generating entire new passages of text, but Bender said they’re still just repeatedly selecting the most plausible next word in a string.

    When used to generate text, language models “are designed to make things up. That’s all they do,” Bender said. They are good at mimicking forms of writing, such as legal contracts, television scripts or sonnets.

    “But since they only ever make things up, when the text they have extruded happens to be interpretable as something we deem correct, that is by chance,” Bender said. “Even if they can be tuned to be right more of the time, they will still have failure modes — and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure.”
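    To make that description concrete, here is a minimal, hypothetical sketch of the mechanism Bender describes: a “language model” reduced to a table of next-word counts learned from a tiny corpus, which then generates text by repeatedly picking the most plausible next word. Real chatbots use far larger neural networks, but the generation loop is conceptually similar, and the toy output shows how fluent-looking text can be produced with no regard for whether it is true.

    ```python
    # Toy next-word predictor: plausibility, not truth, drives each choice.
    from collections import Counter, defaultdict

    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    # Count how often each word follows each other word (a bigram table).
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def generate(start, length=8):
        """Greedily extend `start` by always taking the most frequent next word."""
        words = [start]
        for _ in range(length):
            candidates = following.get(words[-1])
            if not candidates:
                break
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat on the cat sat on the"
    ```

    The output looks grammatical but is meaningless, which is the point: each word is chosen because it is statistically likely, not because the resulting sentence is correct.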

    Those errors are not a huge problem for the marketing firms that have been turning to Jasper AI for help writing pitches, said the company’s president, Shane Orlick.

    “Hallucinations are actually an added bonus,” Orlick said. “We have customers all the time that tell us how it came up with ideas — how Jasper created takes on stories or angles that they would have never thought of themselves.”

    The Texas-based startup works with partners like OpenAI, Anthropic, Google or Facebook parent Meta to offer its customers a smorgasbord of AI language models tailored to their needs. For someone concerned about accuracy, it might offer up Anthropic’s model, while someone concerned with the security of their proprietary source data might get a different model, Orlick said.

    Orlick said he knows hallucinations won’t be easily fixed. He’s counting on companies like Google, which he says must have a “really high standard of factual content” for its search engine, to put a lot of energy and resources into solutions.

    “I think they have to fix this problem,” Orlick said. “They’ve got to address this. So I don’t know if it’s ever going to be perfect, but it’ll probably just continue to get better and better over time.”

    Techno-optimists, including Microsoft co-founder Bill Gates, have been forecasting a rosy outlook.

    “I’m optimistic that, over time, AI models can be taught to distinguish fact from fiction,” Gates said in a July blog post detailing his thoughts on AI’s societal risks.

    He cited a 2022 paper from OpenAI as an example of “promising work on this front.”

    But even Altman, as he markets the products for a variety of uses, doesn’t count on the models to be truthful when he’s looking for information for himself.

    “I probably trust the answers that come out of ChatGPT the least of anybody on Earth,” Altman told the crowd at Bagler’s university, to laughter.

    Source link

  • A.I. is making some common side hustles more lucrative—these can pay up to $100 per hour

    A.I. is making some common side hustles more lucrative—these can pay up to $100 per hour

    Artificial intelligence still has a long way to go before completely taking over most human jobs. But it can already make some side hustles easier and more lucrative, primarily by saving people time.

    “Automation, I think, is the key to reducing your workload,” Sean Audet, a food photographer who uses generative AI tools like ChatGPT to write emails and business plans, told CNBC Make It earlier this month. “When a client first reaches out to me, I need to be able to quickly deliver a bunch of information about services and costs … in a nice, succinct and personalized way.”

    Time is particularly valuable for side hustles, where your bandwidth is limited by definition. Some gigs that can benefit from current AI platforms are highly lucrative, too — paying up to $100 per hour.

    Notably, few — if any — of today’s AI tools are “set it and forget it” style programs. Chatbots tend to output robotic-sounding language, and can “hallucinate” sentences that are simply wrong. Image generators still struggle to nail small details within larger pictures.

    Those errors can occur from even simple prompts. In March, researchers from Stanford University and the University of California, Berkeley asked ChatGPT 3.5 and 4 — OpenAI’s free chatbot and an updated version available to paying subscribers, respectively — to identify prime numbers. In June, they did it again.

    The results varied wildly, from 2.4% accuracy (ChatGPT 4, in June) to 97.6% accuracy (ChatGPT 4, in March), the study reported.
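    For readers curious how an accuracy figure like that is computed, the following is a minimal, hypothetical sketch of such an evaluation: a model’s yes/no answers about primality are compared against ground truth calculated directly, and the share of matches becomes the accuracy percentage. The `model_says_prime` function is only a stand-in for querying a chatbot, and the numbers below are invented for illustration rather than taken from the study.

    ```python
    # Hypothetical sketch: score a model's prime/not-prime answers against ground truth.
    def is_prime(n):
        """Ground-truth primality check by trial division."""
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def model_says_prime(n):
        """Stand-in for asking a chatbot whether n is prime.
        Here it fakes a model that wrongly calls every odd number prime."""
        return n % 2 == 1

    test_numbers = list(range(1000, 1100))  # invented evaluation set
    correct = sum(model_says_prime(n) == is_prime(n) for n in test_numbers)
    accuracy = 100 * correct / len(test_numbers)
    print(f"Accuracy: {accuracy:.1f}% on {len(test_numbers)} numbers")
    ```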

    Still, editing an AI’s language can be faster than writing multiple paragraphs from scratch, Audet said. Here are three common side hustles where you can already save time — and make money — by using AI.

    Travel agents

    Nicole Cueto, a New York-based public relations consultant, makes money on the side by helping people plan their vacations — booking flights, making reservations and planning excursions. She also has a profile on travel agent platform Fora, where she earns commissions when clients book hotels and experiences through her recommendations.

    In January, when Cueto started her side hustle, she spent five to seven hours planning one day of vacation. Using ChatGPT as a refined, filtered version of Google cuts her “research time in half,” she says.

    Cueto has visited 43 countries and all seven continents, she says. Late last year, she realized she could monetize her passion for travel.

    “I’ve been to Paris a thousand times, but if I have a client that wants to discover the depths of the city from an old school perspective, I don’t really know how to do that [from personal experience],” she says. “So, I’ll type in, ‘Give me a budget-conscious guide to Paris that incorporates historical neighborhoods where politicians lived in the 1880s.’”

    Following ChatGPT’s proposed itinerary without further research would be risky, but Cueto says she doesn’t mind doing the fact-checking. It’s still more efficient than other search engines, she adds — and saving time means taking on more clients and making more money.

Today, Cueto makes an average of $670 per month from her side hustle, according to documents reviewed by CNBC Make It. She works 10 to 20 hours per month on it, making her rate roughly $42 per hour, she says.

    Content assistants

    Even as AI may replace human jobs, it can create new ones: Some companies have started hiring part-time content assistants, whose job is to generate blog, newsletter and social media posts using chatbots — and then fact-check the results.

    These jobs, which are also referred to as AI content editing, can pay anywhere from $20 to $100 per hour, experts say.

    “You can literally copy and paste in a transcription and say, ‘Turn this [speech] into a 700-word blog article that has five tips,’” Angelique Rewers, founder of small-business consulting firm BoldHaus, told CNBC Make It last month.

    Rewers called AI content assistants “the biggest new side hustle,” adding that assistants should proofread anything they aggregate from ChatGPT to “make sure that it’s not gobbledygook.”

    The barrier to entry is low, Rewers said. ChatGPT is currently free to use, and aspiring side hustlers can learn to effectively generate prompts on YouTube.
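    As a rough illustration of that workflow, here is a minimal sketch using the `openai` Python package as it existed in 2023. The model name, file name and prompt wording are assumptions for illustration; the article describes people doing this through the free ChatGPT web interface, and any draft would still need the human proofreading and fact-checking Rewers describes.

    ```python
    # Hypothetical sketch of the content-assistant workflow: paste in a
    # transcript, ask for a structured blog draft, then fact-check by hand.
    import openai

    openai.api_key = "YOUR_API_KEY"  # assumption: caller supplies a valid key

    # Assumed input file containing the raw transcript.
    with open("interview_transcript.txt") as f:
        transcript = f.read()

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[
            {"role": "system", "content": "You are a helpful content assistant."},
            {"role": "user", "content": (
                "Turn this transcript into a 700-word blog article with five tips:\n\n"
                + transcript
            )},
        ],
    )

    draft = response["choices"][0]["message"]["content"]
    print(draft)  # a human editor must still proofread and fact-check the draft
    ```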

    The job is starting to take off on freelance platforms, too, Upwork vice president of talent solutions Margaret Lilani told CNBC Make It last month.

    “We’ve seen a high demand for this category of work,” Lilani said. Employers “are looking to build up the supply of freelancers who can support this demand.”

    Artists

    In some cases, when AI saves you time, it’s more useful to reinvest it in your future profits.

    Audet, a trained fine dining chef, realized he had a knack for photography while substitute teaching a “Pastry Arts” class at Red River College in Winnipeg, Canada. He turned his side hustle into a full-time gig in 2020, and says he now regularly uses AI to craft emails and build business templates.

    In the short term, he’s spent as much time practicing his AI prompts as he would’ve spent writing the emails and templates himself, he says: “It’s almost like having an assistant that you have to be really, really, really specific with.”

    That means Audet isn’t making more money due to AI yet. In the long term, those skills should pay off more lucratively, especially as the technology improves, he says.

    Audet has also dabbled with generative AI on photos, through programs like Midjourney. The technology allows him to swap out backgrounds, fix small imperfections or change the color of objects — but not to a degree that he’s ready to use it on professional projects.

    “You’ll sometimes get surprisingly good results … but if the technology can do like 90% of the job, that’s not good enough when you’re working with clients paying a lot of money,” Audet says. “So the impact of it on my business is still relatively low.”

    Source link

  • The Secret to How Businesses Can Fully Harness the Power of AI | Entrepreneur

    The Secret to How Businesses Can Fully Harness the Power of AI | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    Generative AI — when harnessed correctly — has the potential to revolutionize the way companies operate, innovate and compete. But one question still remains: How can businesses effectively tap into this potential? The answer lies in setting up an AI center of excellence that combines IT with learning and development to serve the needs of business operations.

Any company, large or small, can set up an AI center of excellence — and smaller ones can be more nimble and flexible in doing so, allowing them to get ahead of their larger competitors. And that center of excellence itself requires a two-pronged approach to establish what it means to be excellent at using AI: observing the application of generative AI in other companies and understanding its use within the company’s own ranks.

    Related: What Is Artificial Intelligence (AI)? Here Are Its Benefits, Uses and More

    Learning about AI best practices

    The first step in the journey towards effective use of generative AI is to look outward. Companies that have already integrated AI into their operations can serve as valuable case studies. These pioneers have navigated the challenges of implementation, and their successes and failures provide a roadmap for others to follow.

For instance, a number of companies report that their marketing teams use AI to generate creative content while their sales teams use it to predict customer behavior. By identifying these practices, companies can consolidate their efforts and create a unified strategy for AI usage.

    The second prong of this approach involves looking inward. Companies must understand how their own employees are already using generative AI. This internal audit can reveal surprising insights about the company’s current AI capabilities and areas for improvement. This internal exploration is not just about finding existing uses of AI, but also about encouraging employees to come forward with their ideas and experiences. This can foster a culture of innovation and make the integration of AI a company-wide endeavor.

    However, it’s at this step that I most often see problems in companies for which I consult on integrating AI into their workflow. Initial evidence suggests that AI can significantly boost personal efficiency for individual employees by anywhere from 20% to 70% for many tasks, with the quality of output surpassing that of tasks completed without AI assistance. This is a testament to the transformative power of AI when used as a personal productivity tool, especially when operated by someone within their area of expertise.

That said, it’s important to note that the current state of AI primarily enhances individual productivity rather than organizational productivity as a whole, as highlighted by Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania. This is because AI, in its current form, makes for rather unpredictable software. It can be inconsistent, prone to error and generally doesn’t behave in the way that traditional IT is expected to behave. As a result, AI doesn’t scale well in its current state.

    But don’t let this deter you. The key is to recognize the potential of AI as a personal productivity tool and to harness this potential within your organization. By doing so, you can empower your employees, improve efficiency and pave the way for the future integration of AI on a larger scale. As AI technology continues to evolve, we can expect it to become more reliable and scalable, eventually becoming an integral part of organizational productivity.

    Establishing an AI center of excellence

Once a company has gathered this information, the next step is to establish a center of excellence for using generative AI. My clients found the most success when this center was co-led by a team from IT, who can handle the technical aspects of AI, and HR, who can oversee learning and development.

    The center of excellence serves as a hub for AI-related activities within the company. It provides guidance, sets best practices, and ensures that all departments are aligned in their use of AI. This centralized approach ensures that AI is used effectively and ethically throughout the company. Moreover, the center of excellence can also serve as a platform for continuous learning and development, keeping the company up-to-date with the latest advancements in AI.

    But what makes a center of excellence truly successful? There are several guiding principles that underpin its operation, each of which can be applied specifically to the integration of generative AI.

    Firstly, the center of excellence should have a clear vision and mission. This includes defining the strategic objectives of the center and how it aligns with the overall business strategy. For instance, if a company’s strategy is to leverage generative AI for content creation, the center’s mission could be to develop and implement best practices for using AI in this area.

    Secondly, the center of excellence should foster collaboration and communication across the organization. It should act as a bridge between different departments, facilitating the sharing of knowledge and best practices. For example, if the marketing team is using generative AI to create content, their insights and experiences could be shared with other departments through the center of excellence.

    Thirdly, the center of excellence should focus on continuous improvement. This involves regularly reviewing and refining its processes and practices to ensure they remain effective and relevant. In the context of generative AI, this could involve staying abreast of the latest AI technologies and updating the company’s practices accordingly.

    Lastly, the center of excellence should be committed to promoting a culture of learning and development. This includes providing training and resources to employees to enhance their AI skills and knowledge. For example, the center could offer workshops on using generative AI tools, or provide resources for self-learning.

    Establishing a center of excellence is a critical step in harnessing the power of generative AI. By adhering to these guiding principles, companies can ensure that their center is effective, relevant and capable of driving AI integration across the organization.

    The ultimate goal: Serving business operations through an AI center of excellence

    The ultimate goal of this two-pronged approach and the establishment of a center of excellence is to serve business operations. Generative AI has the potential to streamline processes, improve efficiency and drive innovation. By learning from others, understanding internal usage and establishing a centralized hub for AI, companies can harness this potential and transform their operations.

    The center of excellence plays a pivotal role in this transformation. It serves as the nerve center of the company’s AI initiatives, guiding the integration of generative AI into business operations. Whether it’s using AI to automate routine tasks, generate creative content or predict market trends, the center of excellence ensures that these initiatives align with the company’s strategic objectives and adhere to best practices.

    For instance, if a company wants to use generative AI to streamline its customer service operations, the center of excellence could develop a roadmap for this initiative. This could involve identifying the best AI tools for the job, training customer service staff on how to use these tools, and setting up a system for monitoring and improving the AI’s performance.

Moreover, the center of excellence also plays a crucial role in fostering a culture of continuous learning and innovation. It keeps the company up-to-date with the latest advancements in AI, encourages employees to explore new ways of using AI, and promotes a culture of experimentation and risk-taking. This culture of innovation is key to harnessing the full potential of generative AI and staying ahead of the competition.

    Related: AI Can Make Some Jobs More Difficult and Time-Consuming — Here’s How

    Conclusion

The journey towards effective use of generative AI may seem daunting, but with the right approach, it can lead to unprecedented growth and success. So, take the leap, look outward and inward, establish your center of excellence, and watch as AI propels your business into the future. Remember, the future of business is not just about adopting new technologies, but about understanding them, integrating them effectively, and using them to drive operational excellence. The center of excellence is your guide on this journey, leading the way towards a future powered by generative AI.

    Gleb Tsipursky

    Source link

  • Exec tells first UN council meeting that big tech can’t be trusted to guarantee AI safety

    Exec tells first UN council meeting that big tech can’t be trusted to guarantee AI safety

    UNITED NATIONS — The handful of big tech companies leading the race to commercialize AI can’t be trusted to guarantee the safety of systems we don’t yet understand and that are prone to “chaotic or unpredictable behavior,” an artificial intelligence company executive told the first U.N. Security Council meeting on AI’s threats to global peace on Tuesday.

    Jack Clark, co-founder of the AI company Anthropic, said that’s why the world must come together to prevent the technology’s misuse.

    Clark, who says his company bends over backwards to train its AI chatbot to emphasize safety and caution, said the most useful things that can be done now “are to work on developing ways to test for capabilities, misuses and potential safety flaws of these systems.” Clark left OpenAI, creator of the best-known ChatGPT chatbot, to form Anthropic, whose competing AI product is called Claude.

He traced the growth of AI over the past decade to 2023, when new AI systems can beat military pilots in air fighting simulations, stabilize the plasma in nuclear fusion reactors, design components for next-generation semiconductors, and inspect goods on production lines.

But while AI will bring huge benefits, its understanding of biology, for example, may also be used to produce biological weapons, he said.

    Clark also warned of “potential threats to international peace, security and global stability” from two essential qualities of AI systems – their potential for misuse and their unpredictability “as well as the inherent fragility of them being developed by such a narrow set of actors.”

Clark stressed that across the world it’s the tech companies that have the sophisticated computers, large pools of data and capital to build AI systems, and therefore they seem likely to continue to define their development.

    In a video briefing to the U.N.’s most powerful body, Clark also expressed hope that global action will succeed.

    He said he’s encouraged to see many countries emphasize the importance of safety testing and evaluation in their AI proposals, including the European Union, China and the United States.

    Right now, however, there are no standards or even best practices on “how to test these frontier systems for things like discrimination, misuse or safety,” which makes it hard for governments to create policies and lets the private sector enjoy an information advantage, he said.

    “Any sensible approach to regulation will start with having the ability to evaluate an AI system for a given capability or flaw,” Clark said. “And any failed approach will start with grand policy ideas that are not supported by effective measurements and evaluations.”

    With robust and reliable evaluation of AI systems, he said, “governments can keep companies accountable, and companies can earn the trust of the world that they want to deploy their AI systems into.” But if there is no robust evaluation, he said, “we run the risk of regulatory capture compromising global security and handing over the future to a narrow set of private sector actors.”

    Other AI executives such as OpenAI’s CEO, Sam Altman, have also called for regulation. But skeptics say regulation could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft as smaller players are elbowed out by the high cost of making their large language models adhere to regulatory strictures.

    U.N. Secretary-General Antonio Guterres said the United Nations is “the ideal place” to adopt global standards to maximize AI’s benefits and mitigate its risks.

He warned the council that the advent of generative AI could have very serious consequences for international peace and security, pointing to its potential use by terrorists, criminals and governments to cause “horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale.”

    As a first step to bringing nations together, Guterres said he is appointing a high-level Advisory Board for Artificial Intelligence that will report back on options for global AI governance by the end of the year.

    The U.N. chief also said he welcomed calls from some countries for the creation of a new United Nations body to support global efforts to govern AI, “inspired by such models as the International Atomic Energy Agency, the International Civil Aviation Organization, or the Intergovernmental Panel on Climate Change.”

    Professor Zeng Yi, director of the Chinese Academy of Sciences Brain-inspired Cognitive Intelligence Lab, told the council “the United Nations must play a central role to set up a framework on AI for development and governance to ensure global peace and security.”

    Zeng, who also co-directs the China-UK Research Center for AI Ethics and Governance, suggested that the Security Council consider establishing a working group to consider near-term and long-term challenges AI poses to international peace and security.

    In his video briefing, Zeng stressed that recent generative AI systems “are all information processing tools that seem to be intelligent” but don’t have real understanding, and therefore “are not truly intelligent.”

    And he warned that “AI should never, ever pretend to be human,” insisting that real humans must maintain control especially of all weapons systems.

    Britain’s Foreign Secretary James Cleverly, who chaired the meeting as the UK holds the council presidency this month, said this autumn the United Kingdom will bring world leaders together for the first major global summit on AI safety.

    “No country will be untouched by AI, so we must involve and engage the widest coalition of international actors from all sectors,” he said. “Our shared goal will be to consider the risks of AI and decide how they can be reduced through coordinated action.”

    ——

    AP Technology Writer Frank Bajak contributed to this report from Boston

    Source link

  • ChatGPT-maker OpenAI signs deal with AP to license news stories

    ChatGPT-maker OpenAI signs deal with AP to license news stories

    ChatGPT-maker OpenAI and The Associated Press said Thursday that they’ve made a deal for the artificial intelligence company to license AP’s archive of news stories.

    “The arrangement sees OpenAI licensing part of AP’s text archive, while AP will leverage OpenAI’s technology and product expertise,” the two organizations said in a joint statement.

    Financial terms of the deal were not disclosed.

    OpenAI and other technology companies must ingest large troves of written works, such as books, news articles and social media chatter, to improve their AI systems known as large language models. Last year’s release of ChatGPT has sparked a boom in “generative AI” products that can create new passages of text, images and other media.

The tools have raised concerns about their propensity to spout falsehoods that are hard to notice because of the system’s strong command of the grammar of human languages. They have also raised questions about the extent to which news organizations and others whose writing, artwork, music or other work was used to “train” the AI models should be compensated.

    This week, the U.S. Federal Trade Commission told OpenAI it had opened an investigation into whether the company had engaged in unfair or deceptive privacy or data security practices in scraping public data — or caused harm by publishing false information through its chatbot products. The FTC did not immediately reply to a request for comment on the investigation, which The Washington Post was first to report.

    Along with news organizations, book authors have sought compensation for their works being used to train AI systems. More than 4,000 writers — among them Nora Roberts, Margaret Atwood, Louise Erdrich and Jodi Picoult — signed a letter late last month to the CEOs of OpenAI, Google, Microsoft, Meta and other AI developers accusing them of exploitative practices in building chatbots that “mimic and regurgitate” their language, style and ideas. Some novelists and the comedian Sarah Silverman have also sued OpenAI for copyright infringement.

    “We are pleased that OpenAI recognizes that fact-based, nonpartisan news content is essential to this evolving technology, and that they respect the value of our intellectual property,” said a written statement from Kristin Heitmann, AP senior vice president and chief revenue officer. “AP firmly supports a framework that will ensure intellectual property is protected and content creators are fairly compensated for their work.”

    The two companies said they are also examining “potential use cases for generative AI in news products and services,” though didn’t give specifics. OpenAI and AP both “believe in the responsible creation and use of these AI systems,” the statement said.

    OpenAI will have access to AP news stories going back to 1985.

The AP deal is valuable to a company like OpenAI because it provides a trove of material that it can use for training purposes, and is also a hedge against lawsuits that could cut off its access to such material, said Nick Diakopoulos, a professor of communications studies and computer science at Northwestern University.

    “In order to guard against how the courts may decide, maybe you want to go out and sign licensing deals so you’re guaranteed legal access to the material you’ll need,” Diakopoulos said.

    The AP doesn’t currently use any generative AI in its news stories, but has used other forms of AI for nearly a decade, including to automate corporate earnings reports and recap some sporting events. It also runs a program that helps local news organizations incorporate AI into their operations, and recently launched an AI-powered image archive search.

    The deal’s effects could reach far beyond the AP because of the organization’s size and its deep ties to other news outlets, said news industry analyst Ken Doctor.

    When AP decided to open up its content for free on the internet in the 1990s, it led many newspaper companies to do the same, which “turned out to be a very bad idea” for the news business, Doctor said.

    He said navigating “a new, AI-driven landscape is deeply uncertain” and presents similar risks.

    “The industry is far weaker today. AP is in OK shape. It’s stable. But the newspaper industry around it is really gasping for air,” Doctor said. “On the positive side, AP has the clout to do a deal like this and can work with local publishers to try to assess both the potential and the risk.”

    ___

    Associated Press writer David Bauder contributed to this report.

    Source link

  • FTC reportedly investigating ChatGPT creator OpenAI over consumer protection issues

    FTC reportedly investigating ChatGPT creator OpenAI over consumer protection issues

    The U.S. Federal Trade Commission has launched an investigation into ChatGPT creator OpenAI and whether the artificial intelligence company violated consumer protection laws by scraping public data and publishing false information through its chatbot, according to reports in the Washington Post and the New York Times.

    The agency sent OpenAI a 20-page letter requesting detailed information on its AI technology, products, customers, privacy safeguards and data security arrangements, according to the reports. An FTC spokesman had no comment.

    OpenAI founder Sam Altman tweeted disappointment that news of the investigation started as a “leak,” noting that the move would “not help build trust,” but added the company will work with the FTC.

    “It’s super important to us that out technology is safe and pro-consumer, and we are confident we follow the law,” he wrote. “We protect user privacy and design our systems to learn about the world, not private individuals.”

    The FTC’s move represents the most significant regulatory threat so far to the nascent but fast-growing AI industry, although it’s not the only challenge facing these companies. Comedian Sarah Silverman and two other authors have sued both OpenAI and Facebook parent Meta for copyright infringement, claiming that the companies’ AI systems were illegally “trained” by exposing them to datasets containing illegal copies of their works.

    On Thursday, OpenAI and The Associated Press announced a deal under which the AI company will license AP’s archive of news stories.

    Altman has emerged as a global AI ambassador of sorts following his testimony before Congress in May and a subsequent tour of European capitals where regulators were putting final touches on a new AI regulatory framework. Altman himself has called for AI regulation, although he has tended to emphasize difficult-to-evaluate existential threats such as the possibility that superintelligent AI systems could one day turn against humanity.

    Some argue that focusing on a far-off “science fiction trope” of superpowerful AI could make it harder to take action against already existing harms that require regulators to dig deep on data transparency, discriminatory behavior and potential for trickery and disinformation.

    “It’s the fear of these systems and our lack of understanding of them that is making everyone have a collective freak-out,” Suresh Venkatasubramanian, a Brown University computer scientist and former assistant director for science and justice at the White House Office of Science and Technology Policy, told the AP in May. “This fear, which is very unfounded, is a distraction from all the concerns we’re dealing with right now.”

    News of the FTC’s OpenAI investigation broke just hours after a combative House Judiciary Committee hearing in which FTC Chair Lina Khan faced off against Republican lawmakers who said she has been too aggressive in pursuing technology companies for alleged wrongdoing.

    Republicans said she has been harassing Twitter since its acquisition by Elon Musk, arbitrarily suing large tech companies and declining to recuse herself from certain cases. Khan pushed back, arguing that more regulation is necessary as the companies have grown and that tech conglomeration could hurt the economy and consumers.

    Source link

  • AI and Humans Equally Effective in Engaging Education Content Now, Study by Rask AI

    AI and Humans Equally Effective in Engaging Education Content Now, Study by Rask AI

    Press Release


    Jul 13, 2023 10:15 EDT

    67% of the respondents didn’t mention the AI aspect as they were more interested in the content of the video itself.

    Does AI-generated content impact audience engagement? The Rask AI team turned this question into a study of how AI is transforming the online education market in 2023. Their research compares audience engagement with synthetic learning videos versus human-created learning videos and evaluates the benefits of investing in new technologies for creating and distributing learning content.

    Main insights:

    • The survey of more than 300 audience members showed that AI-generated content is now as engaging as human-created content. While a certain degree of FUD (fear, uncertainty, and doubt) remains, along with some technological limitations, the research shows that AI is well equipped to keep educational content accessible and personalized without sacrificing audience engagement.
       
    • Even though participants recognized that one of the videos was AI-generated, they were more focused on the topic of the content than on how it was created (67%).
       
    • 13% showed great enthusiasm for AI after watching the synthetic video and expressed an interest in learning more about this field. 

    The study also covers the latest trends and data on the AI education market in 2023, with citations from AI experts, as well as a practical guide on how to use AI in education: an overview of new AI tools that make learning more personalized, accessible and inclusive.

    Complete Study Results: https://www.rask.ai/research/ai-in-education

    Study Methodology

    The study surveyed 300 respondents and aimed to understand participants’ perceptions, thoughts, feelings and behaviors during and after watching the educational videos. It draws on input from 30 AI experts and 12 data sources published between 2021 and 2023, including Statista, the McKinsey Technology Trends Outlook, Straits Research, KPMG and others.

    Rask AI is a brand of Brask Inc., an American company developing products and services for AI content creation and distribution.

    Source: Rask AI

    Source link