ReportWire

Tag: chatgpt

  • Unity Announces Big ‘AI’ Plans, Developers Have Concerns

    Video game engine provider Unity announced earlier today the introduction of two new machine-learning platforms, one of which in particular has developers and artists asking questions of the company that, at time of publishing, have yet to be answered.

    From Unity’s blog:

    Today we’re announcing two new AI products: Unity Muse, an expansive platform for AI-driven assistance during creation, and Unity Sentis, which allows you to embed neural networks in your builds to enable previously unimaginable real-time experiences.

    Muse is essentially just ChatGPT but for Unity specifically, and purports to let users ask questions about coding and resources and get instant answers. Sentis, however, is more concerning, as it “enables you to embed an AI model in the Unity Runtime for your game or application, enhancing gameplay and other functionality directly on end-user platforms.”
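
    For readers unfamiliar with the idea, “embedding a neural network in a build” means shipping a trained model file with the application and running inference on it at runtime. Sentis itself is a C#/Unity API that consumes ONNX models; the Python sketch below uses onnxruntime purely as a generic stand-in for that pattern, and the model file name, input name, and tensor shape are all assumptions.

    ```python
    # Generic illustration of runtime inference, the idea behind Sentis: load a
    # trained network once at startup, then run it every frame/tick inside the
    # application. "model.onnx" and the input name/shape are hypothetical.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("model.onnx")  # load once at startup

    frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # e.g., one camera frame
    outputs = session.run(None, {"input": frame})  # run per frame/tick
    print(outputs[0].shape)  # outputs could drive NPC behavior, effects, etc.
    ```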

    Because “AI” is a technology that in many cases is utterly reliant on work stolen from artists without consent or compensation, Unity’s announcement led to a lot of questions about Sentis, with particular focus on the tech’s ability to create stuff like images, models and animation. Scroll down past the announcement tweet, for example, and you’ll see a ton of variations of the same query:

    just to jump on the train, which dataset y’all pull the art from???

    Unity needs to be fully transparent about what ML models will be implemented, including the data they have been trained on. I don’t see any possible way ML, in current iterations, can be effective without training on countless ill gotten data.

    REALLY concerning image generator stuff. What datasets?

    Hi, what dataset was this trained on? Is this using artwork from artists without their permission? Animations? Materials? How was this AI trained?

    You do realize that AI-created assets can’t be used commercially, so what was the rationale for adding this feature?

    Which datasets were used in development of this? Did you negotiate & acquire all relevant licenses directly from copyright holders?

    It’s a very specific question, one that at time of publishing Unity has yet to answer, either on Twitter or on the company’s forums (I’ve emailed the company asking the question specifically, and will update if I hear back). Those familiar with “AI”’s legal and copyright struggles can find the outline of an answer in this post by Unity employee TreyK-47, though, who says you can’t use the tech as it exists today “for a current commercial or external project”.

    Note that while there are clear dangers to jobs and the quality of games inherent in this push, those dangers are for the future; for the now, this looks (and sounds) like dogshit.

    [Video: Experience the art of the possible | Unity AI]

    Luke Plunkett

  • Some Companies Are Paying Up To $800k For ChatGPT Experience | Entrepreneur

    ChatGPT, the AI prompt-driven chatbot created by OpenAI last year, instantly garnered worldwide attention. The chatbot’s ability to generate immediate content can help workers with anything from writing rental listings to computer code.

    The tool has also garnered criticism, especially with concerns over privacy, ethics (notably in law), and how AI could replace human jobs. When it comes to the latter, the answer is complicated. In May, the World Economic Forum estimated that nearly 14 million jobs — or 2% of current employment — could disappear by 2027 due to increased adoption of AI technology.

    Later that same month, OpenAI CEO Sam Altman told Congress that AI will replace jobs but can also create “much better” ones.

    Related: IBM Says 7,800 Jobs (or Nearly 30% of Its Workforce) Could Be Replaced By AI

    According to a recent report by ResumeBuilder, which surveyed 1,187 business leaders in the U.S., 92% are currently hiring — 91% of which are looking for workers with ChatGPT experience.

    They’re also willing to pay.

    Per the report, one in four survey respondents said the starting salary for AI “prompt engineers” will exceed $200,000, and 17% said it will exceed $300,000.

    “With this expertise not yet widely available in the hiring market, those candidates with ChatGPT and AI skills will be highly sought after from progressive companies,” said Chief Career Advisor at ResumeBuilder, Stacie Haller, in the report. “As this tech is still so new, there is a race to bring on employees with this skill in order for the company to stay cutting edge, and it looks like companies are willing to pay to do so.”

    And the race to recruit workers with AI skills has also prompted urgency: 30% of business leaders looking for ChatGPT skills said they are hiring “urgently” for the positions, 11% of which said it is “very urgent.”

    What Does Working with ChatGPT Mean, Exactly?

    A roundup by Business Insider found that the positions calling for ChatGPT skills range from titles like “machine learning engineer” and “AI data operations specialist” to more conventional roles like product manager and copywriter, where the postings note that skills with the AI chatbot give candidates a “competitive edge.”
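
    In practice, much of this “ChatGPT experience” boils down to calling a chat model programmatically with carefully structured prompts. Here is a minimal, hypothetical sketch using the 2023-era openai Python package (the v0.27-style interface); the model name, prompt wording, and helper function are illustrative assumptions, not requirements from any actual listing.

    ```python
    # Hypothetical example of routine "prompt engineering": a structured call to
    # a chat-completion API. Assumes the 2023-era openai package (v0.27) and an
    # OPENAI_API_KEY environment variable.
    import os

    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    def draft_listing(product: str, audience: str) -> str:
        """Ask the model for short marketing copy; the prompt is illustrative."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a concise marketing copywriter."},
                {"role": "user", "content": f"Write a 50-word listing for {product}, aimed at {audience}."},
            ],
            temperature=0.7,
        )
        return response["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        print(draft_listing("a standing desk", "remote workers"))
    ```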

    Many of the job postings examined paid six figures, including two for the remote job search engine, Crossover, which listed positions (now closed) for a senior director of product management and chief product officer — both of which mentioned ChatGPT experience in the listing. Each position listed a starting salary of $800,000.

    Madeline Garfinkle

  • Discover How You Can Use AI in Your Business with This $29.99 Bundle | Entrepreneur

    Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.

    According to Forbes, eight out of 10 small businesses are run by a single owner without employees. If you are an entrepreneur doing it all solo and wearing many hats, it might behoove you to master AI and see where technology can fill in some gaps.

    Fortunately, mastering AI and buzzy tools like ChatGPT can be done entirely from home with the help of some informative online courses. And right now, The Complete ChatGPT Artificial Intelligence OpenAI Training Bundle can give you a well-rounded education for just $29.99 — $22 off the usual $52 price tag.

    Packed with four informative courses, this bundle schools you on ways AI can make your life as an entrepreneur a whole lot easier. And you don’t have to have any prior knowledge.

    The course ChatGPT for Beginners is taught by Mike Wheeler and shows you the fundamentals of working with ChatGPT. This tool from OpenAI can be used to write poetry, short stories, computer code, and a whole lot more, and Mike will walk you through how it can help your workflow. By the end of the course, you’ll have a solid foundation in ChatGPT that should make your 9-to-5 a little easier.

    Courses like ChatGPT: Artificial Intelligence (AI) That Writes for You, taught by Alex Genadinik, show you how to create blog, social media, and other content that can help your business and engage potential clients and customers. You’ll even see how to create a business using an AI tool for writing and copywriting. Create a ChatGPT AI Bot with Django and Python and Create a ChatGPT AI Bot with Tkinter and Python round out the bundle.

    The Complete ChatGPT Artificial Intelligence OpenAI Training Bundle schools you on AI and ChatGPT, and it’s on sale now for just $29.99 (reg. $52).

    Prices subject to change.

    Entrepreneur Store

  • “Knowledge-based” jobs could be most at risk from AI boom

    The boom in “generative” artificial intelligence may usher in the “next productivity frontier” in the workplace, but it could also cause job losses and disruption for some knowledge-based workers such as software developers and marketers, according to McKinsey. 

    Integrating generative AI tools into the workplace could theoretically automate as much as 70% of the time an employee spends completing tasks on the job, the consulting firm estimated. That could help many workers save time on routine tasks, which in turn will boost profitability for businesses, McKinsey said in a recent report.

    For the U.S. economy as a whole, meanwhile, the gains could be considerable, adding $4.4 trillion annually to the nation’s GDP.

    But such productivity gains could come with a downside, as some companies may decide to cut jobs since workers won’t need as many hours to complete their tasks. Most at risk from advanced forms of AI are knowledge-based workers, who tend to be employed in jobs that traditionally have had higher wages and more job security than blue-collar workers. 

    As a result, most knowledge workers will be changing what they do over time, McKinsey Global Partner Michael Chui told CBS MoneyWatch. 

    Generative AI will “give us superpowers” by allowing workers to be more productive, but employees will need to adapt, Chui said. This “will require reskilling, flexibility and learning how to learn new things.” 

    AI could replace half of workers’ daily work activities by 2045, which McKinsey said is eight years earlier than it had previously forecast. 

    Where AI will thrive

    To be sure, AI won’t transform every job, and it could impact some corporate fields more than others. At the top of the list are software development, customer service operations and marketing, according to Rodney Zemmel, a senior partner at McKinsey. 

    Software engineering teams are likely to rely on generative AI to reduce the time they spend generating code. Already, big tech firms are selling AI tools for software engineering, which are used by 20 million coders, the firm found.

    Customer service operations could also undergo a transformation, with AI-powered chatbots creating quick, personalized responses to complex customer questions. Because generative AI can quickly retrieve data for a specific customer, it can reduce the time human sales representatives need to respond. 
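
    As a rough illustration of that retrieval-then-respond pattern, here is a hypothetical sketch: the customer record, field names, and prompt wording are invented, and in a real system the lookup would be a CRM or database query whose result is handed to a chat model.

    ```python
    # Illustrative sketch of the pattern described above: pull the customer's
    # record first, then fold it into the prompt so a chat model can answer
    # with that customer's context. All data and wording here are hypothetical.
    CUSTOMERS = {
        "C-1001": {"name": "Dana", "plan": "Pro", "open_ticket": "billing dispute"},
    }

    def build_support_prompt(customer_id: str, question: str) -> str:
        record = CUSTOMERS[customer_id]  # in practice: a CRM/database query
        return (
            f"Customer {record['name']} is on the {record['plan']} plan and has an "
            f"open {record['open_ticket']} ticket. Draft a short, personalized "
            f"reply to their question: {question}"
        )

    print(build_support_prompt("C-1001", "Why did my bill go up this month?"))
    ```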

    Marketers also could tap AI to help with creating content and assist in interpreting data and with search engine optimization. 

    Workers who are concerned about their jobs should stay on top of emerging technologies like generative AI and understand its place in their respective fields, the McKinsey experts recommended.

    “Be on the early edge of adoption” to stay ahead in the job market, Zemmel advised. 


    [Video: The ChatGPT Revolution | CBS Reports, 22:38]

    Still, most jobs won’t be transformed overnight, Zemmel said.

    “It’s worth remembering in customer service and marketing just how early this technology is and how much work needs to be put in to get it to work safely, reliably, at scale, and the way that most human professional enterprises are going to want to use it,” he noted. 

    Examining past technological advances provides a hint of how AI is likely to impact workers.

    “How many jobs were lost when Google came out?” Zemmel asked. “I’m sure the answer wasn’t zero, but companies didn’t dramatically restructure because of all the work that was no longer needed in document retrieval.”

    Zemmel said that when he asks corporate managers how they use AI technologies, the common answer is “writing birthday poems and toasts.” So AI “still has a way to go before it’s really transforming businesses,” he added.

  • Save $60 on This Comprehensive Online Course on ChatGPT | Entrepreneur

    Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.

    With all the buzz around artificial intelligence, you may be wondering how it can benefit you as an entrepreneur. The technology may be intimidating, but one of the newest and hottest tools in AI, ChatGPT, is surprisingly easy to navigate.

    According to Statista, 55% of marketers are already using ChatGPT for marketing purposes. If you’d like to join the growing number of workers employing ChatGPT for portions of their business, Introduction to ChatGPT can help. This helpful online course can guide you through the ins and outs of this new AI tool, and you can currently snag it on sale for just $19.99, $60 off the usual price and the best price available online, for a limited time.

    See how to make ChatGPT work for you from the comfort of your couch. One of the leaders in online learning, International Open Academy, guides you through this new technology with 25 hours of content. This information is covered in nine lectures that you can fit into your busy schedule, showing you all the ways to use artificial intelligence.

    International Open Academy walks through the seven ways AI can be used in daily life, both professionally and personally — through content creation, client services, market research, lead generation, data analytics, sales and marketing, and HR management. See how to use it for your current venture, or maybe discover a whole new field you can tap into with help from ChatGPT. You’ll receive a certificate of completion at the end of the course, provided you score over 55% on each course module’s exam.

    Take the mystique out of ChatGPT with Introduction to ChatGPT courses, on sale for the best price online, just $19.99 (reg. $80), for a limited time.

    Prices subject to change.

    Entrepreneur Store

  • Finance-specific LLMs promise workflow automation | Bank Automation News

    The generative AI boom may be in its infancy, but financial institutions are already looking into how they can implement large language models to streamline workflows and increase productivity. LLMs — AI systems trained using massive amounts of data to produce human-sounding responses — have a staggering number of potential applications in banking. Use cases […]

    Victor Swezey

  • Industry leaders warn of AI risks

    Dozens of industry leaders, including the CEO of ChatGPT creator OpenAI, are warning about the potential risks of artificial intelligence. They said it should be a “global priority” to mitigate the risks of extinction brought about by AI.


  • Lawyer ‘Regrets’ Using ChatGPT in Legal Brief, Cites Fake Cases | Entrepreneur

    As prompt-driven chatbots, such as ChatGPT, become mainstream tools used to save time on tasks in the U.S. workplace, there’s been concern over whether artificial intelligence will eventually replace human jobs.

    But while the controversy surrounding which jobs will be eliminated continues, one thing is for sure — in some industries, the chatbot is less of a time-saver and more of a liability.

    Steven A. Schwartz, a New York-based lawyer with over 30 years of experience, was ordered by the Southern District of New York to explain what the judge called an “unprecedented case” or face possible sanctions for his actions, the New York Times first reported.

    According to the court order, six cases Schwartz cited in a legal brief were “bogus.”

    Related: ChatGPT Could Cost You a Job Before You Even Have It, According to a New Report — Here’s How

    “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Judge P. Kevin Castel wrote in the court order.

    In an affidavit filed last week in response to the order, Schwartz admitted to using ChatGPT for legal research, said he had never used the tool before this case, and added that he was “unaware of the possibility that its content could be false.”

    Schwartz added that he “greatly regrets” using artificial intelligence to “supplement” his legal research.

    A hearing is set for June 8 for Schwartz to further explain himself.

    Related: Mike Rowe Says the Dirtiest Jobs Are Safe From the AI Revolution: ‘I Haven’t Seen Any Plumbing Robots’

    Madeline Garfinkle

  • How Marketers Can Ensure They’re Using AI Ethically | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    The technological advances we’ve seen over the past few decades have transformed how businesses communicate and market to consumers. From the emergence of the internet to the rise of social media, the landscape has continuously evolved. Now, AI is driving another major shift in the industry. As a Gen Xer, I’ve witnessed this transformation firsthand.

    Initially, I was skeptical of ChatGPT’s potential as a generative language model to replace human creativity in content creation. However, after trying it out for myself, I was amazed by the quality of content it produced — and I’m not the only one. In a CNN interview, Douglas Rushkoff, media theorist and author of the book Program or Be Programmed, acknowledged that ChatGPT can write better than his students. Its ability to write well makes it a valuable tool for marketers looking to streamline their content creation process.

    As a result, 61.4% of marketers have already adopted AI or plan to use it, with 41.4% specifically using it for content marketing, according to the 2023 AI Marketing Benchmarking Report. While the time-saving and efficiency benefits of AI are clear, it’s crucial to consider the potential risks and ethical implications. With AI being relatively new and lacking clear guidelines and regulations, it’s currently the “wild west” of technology.

    Related: Why Artificial Intelligence is Revolutionizing Marketing

    Why marketers must be wary of AI

    When it comes to marketing, AI can be a game-changer. However, as with any new technology, there’s a learning curve, and we must be aware of the potential risks involved. Unfortunately, some marketers may prioritize the benefits of AI over these potential risks, leading to a lack of awareness and education on ethical considerations related to AI.

    One example of AI being used unethically in marketing is the creation of fake online reviews or social media posts. In 2019, researchers from the University of Chicago and the University of California, San Diego, created an AI system capable of generating fake Yelp reviews that were almost impossible to distinguish from real reviews. This practice can deceive consumers and harm them by leading them to make purchasing decisions based on false information.

    It’s crucial for marketers to recognize the ethical considerations surrounding the use of AI in marketing and to take steps to ensure they’re using it responsibly. By doing so, we can harness the benefits of AI without sacrificing the trust and goodwill of our customers. As AI continues to shape the marketing landscape, it’s up to us to ensure it’s used in a way that’s transparent, fair and beneficial for everyone involved.

    Ethical missteps with AI in marketing typically fall into the following areas:

    Inaccurate information

    While generative AI like ChatGPT is an impressive language model, it’s important to recognize that it’s not infallible. As with any technology, there are limitations to its capabilities that marketers need to be aware of. The AI is trained on a fixed dataset, which means it may not be aware of recent developments or events that have occurred since its training cutoff. Additionally, natural language is often ambiguous, and the meaning of a statement can depend on contextual factors that may be misinterpreted, leading to inaccurate responses.

    To test the accuracy of ChatGPT, I asked it a couple of questions. First, I asked, “What was the first animated film?” ChatGPT responded with Fantasmagorie, a short animated film by French animator Emile Cohl in 1908. However, when I reworded the question and asked, “What was the first animated cartoon?” ChatGPT responded with “Gertie the Dinosaur,” a short film created by American cartoonist Winsor McCay in 1914. So, which one is correct? This is just a fun example, but it highlights the potential for inaccuracies when using AI-generated content.

    As marketers and PR professionals, we often work closely with media outlets, and the content we produce isn’t always fact-checked. The use of AI-generated content may increase the potential for inaccuracies and unintentional misinformation. This underscores the importance of verifying and fact-checking all content, regardless of its origin. While AI can be a valuable tool, it’s crucial to exercise caution and not rely solely on AI-generated content without human oversight.
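
    One lightweight way to put that verification habit into practice is a consistency check: ask the model the same question in several phrasings and route any disagreement to a human fact-checker. In the sketch below, query_model is a canned stand-in mirroring the Fantasmagorie/Gertie example above; in real use it would wrap an actual chat-completion call.

    ```python
    # A minimal consistency check: pose the same question in several phrasings
    # and flag divergent answers for human fact-checking. query_model is a
    # stub with canned answers; substitute a real LLM API call in practice.
    def query_model(prompt: str) -> str:
        canned = {
            "What was the first animated film?": "Fantasmagorie (1908)",
            "What was the first animated cartoon?": "Gertie the Dinosaur (1914)",
        }
        return canned.get(prompt, "no answer")

    def needs_fact_check(phrasings: list[str]) -> bool:
        answers = {query_model(p).strip().lower() for p in phrasings}
        return len(answers) > 1  # divergent answers: verify before publishing

    print(needs_fact_check([
        "What was the first animated film?",
        "What was the first animated cartoon?",
    ]))  # True, so a human should check the claim
    ```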

    Disclosure and transparency

    As the use of AI in content creation becomes more common, the question arises: Should the public be made aware when a content piece was produced by AI? While there are no specific laws or regulations that require disclosure of the use of AI-generated content, there are existing laws and regulations that may apply in certain contexts.

    For example, the FTC has issued guidelines for advertising and marketing that require disclosure of material connections between advertisers and endorsers. These guidelines also apply to AI-generated content in advertising or marketing, if the content is being used to promote a product or service. However, the issue of transparency in AI-generated content goes beyond legal requirements.

    For marketers and journalists, transparency is crucial to maintain trust with their audience. In January of this year, CNET paused AI-generated stories after The Verge reported that AI tools had been utilized for months without transparency or full disclosure. The lack of transparency was a problem not only for readers but also for CNET staff, who were sometimes left in the dark about how the company was using AI.

    As AI technology advances, it’s possible that new regulations will be developed to address transparency concerns. In the meantime, being transparent about the use of AI in content creation is a best practice to maintain trust and integrity with the audience. With AI-generated content being used more frequently, it’s important to consider the implications of its use and ensure that it’s used in a way that’s transparent, ethical and responsible.

    Copyright issues

    If the AI-generated content incorporates copyrighted material, the marketer could be infringing on the exclusive rights of the copyright holder. Also, AI systems are typically trained on large datasets of text, images and other content. If a marketer uses copyrighted materials as part of the training data for an AI system without permission, they could be infringing on a copyright. To avoid copyright infringement when using AI-generated content, marketers should ensure they have the necessary rights and permissions to use any copyrighted materials that may be included in content. This may involve obtaining permission from the copyright holder or using only content that is in the public domain.

    Related: Should You Trust Artificial Intelligence in Marketing?

    Racial and gender bias

    In 2016, Persado, a marketing technology company, made headlines when it used AI to generate marketing messages for Hillary Clinton’s presidential campaign. While the messages were designed to appeal to different demographic groups, an analysis found they contained gender biases. Specifically, the messages targeted toward women focused on emotions and relationships, while messages for men were focused on achievement and power.

    As marketers, it’s our responsibility to ensure AI systems are trained on diverse and representative datasets and audited regularly for bias. We must design AI systems with fairness and transparency in mind and ensure they reflect the ethics and values of our organization. Without proper oversight, AI-generated content may unintentionally perpetuate biases and stereotypes that could harm our reputation and relationships with our audience.

    To combat this, it’s crucial to have human oversight in the creation and deployment of AI-generated content. We must ensure the content created is free from bias and aligns with the values of our organization. By doing so, we can use AI as a tool to improve our marketing efforts and create more inclusive and ethical content.
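
    As a toy illustration of what such an audit can look like, the sketch below varies only a demographic term in an otherwise identical prompt and tallies which trait words appear in the output. The word lists, prompts, and stubbed generate() function are simplified assumptions for demonstration, not a production bias audit.

    ```python
    # A toy black-box bias audit: change only a demographic term in the prompt
    # and compare which trait words the model's output uses. Everything here is
    # a simplified assumption, including the stubbed generate() call.
    ACHIEVEMENT = {"ambitious", "successful", "leader", "driven"}
    EMOTION = {"caring", "warm", "supportive", "gentle"}

    def generate(prompt: str) -> str:
        # Stand-in for a real model call; echoes the Persado-style skew above.
        return ("a caring and supportive team member" if "her" in prompt
                else "an ambitious and driven leader")

    def trait_counts(prompt: str) -> dict[str, int]:
        words = set(generate(prompt).lower().split())
        return {"achievement": len(words & ACHIEVEMENT),
                "emotion": len(words & EMOTION)}

    for pronoun in ("him", "her"):
        print(pronoun, trait_counts(f"Write ad copy praising {pronoun}."))
    ```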

    User privacy

    Respecting the privacy and security of our audience’s personal data is critical as a marketer. This means that if the AI-generated content involves the collection, processing or use of personal data, we must obtain user consent in accordance with data protection laws and regulations.

    To obtain this consent, we need to provide clear and transparent information on how data is being collected and used in the AI-generated content. This not only ensures compliance with legal requirements but also builds trust with our audience by demonstrating we value their privacy.

    However, data security is also a critical consideration. Marketers must take appropriate measures to ensure the security of the personal data that is collected and used in the AI-generated content. This may involve implementing technical measures to prevent unauthorized use, access or disclosure of the data.

    By prioritizing user privacy and data security in the creation and deployment of AI-generated content, we can foster trust with our audience and avoid the risks of potential data breaches or privacy violations. As we continue to integrate AI into our marketing strategies, it’s essential to remain vigilant and uphold ethical and legal standards to safeguard our audience’s personal information.

    Misleading information and manipulation

    Marketers must be attentive to the potential risks of using AI-generated content, especially when it comes to chatbots or virtual assistants. These tools could be programmed to provide misleading information, intentionally steering customers toward particular products or services, and even deceiving them.

    Furthermore, AI-generated social media posts or ads could be designed to manipulate customer behavior by eliciting emotional responses or creating a false sense of urgency to encourage purchases. These tactics are unethical and could damage the trust and reputation of a brand.

    Therefore, we must prioritize fairness and transparency when using AI-generated content. We must ensure these technologies are not used to deceive or manipulate customers, but rather to enhance their experience and provide them with accurate information. By using AI-generated content ethically and responsibly, we can build trust with our audience and achieve long-term success for our brand. Ultimately, it’s crucial to prioritize ethical considerations and avoid any tactics that could harm our customers or our reputation.

    Related: Artificial Intelligence May Add More Value to Marketing Than Human Brains

    How marketers can ensure they (or their companies) utilize AI ethically

    The use of AI in marketing and public relations has sparked important discussions around ethics and responsibility. However, it’s important to recognize that AI is a powerful tool that can be used for good. At my own marketing and public relations firm, we held a team meeting to discuss the ethical ramifications of AI and how we could utilize it to improve our client work while ensuring we remain ethical in our usage.

    As professionals in the field, we must assess the benefits of AI and determine how it can be utilized with the lowest possible risk. We can develop written guidelines our team can agree on regarding how we will use and not use AI, taking into account factors such as accuracy and potential bias. It’s crucial to decide how the use of AI will be communicated or disclosed to clients or customers and ensure cybersecurity measures are in place to protect personal data.

    Additionally, we must provide fact-checking for accuracy and monitor for bias, staying updated on the latest AI-related regulations and laws. By prioritizing transparency, accountability and responsibility, we can use AI as a tool to enhance our work and provide our clients with exceptional service.

    It’s important to recognize AI is not a one-size-fits-all solution and may not be appropriate for all marketing or public relations activities. However, with clear communication, guidelines and accountability, we can ensure we approach AI usage ethically and responsibly, aligned with our values and the best interests of our clients. Let’s embrace the benefits of AI while upholding the highest ethical standards in our work.

    But let’s get to the bottom line: You’re probably wondering if this article was written using AI. The answer is yes and no. I wrote this article based on my own, very human insights and experience; however, once it was written, AI was used for enhancement. I used AI to research examples, which I then fact-checked, and to edit my article for spelling and grammar mistakes and readability. Just to be transparent … and ethical.

    Kelly Fletcher

  • A lawyer used ChatGPT to prepare a court filing. It went horribly awry.

    A lawyer who relied on ChatGPT to prepare a court filing on behalf of a man suing an airline is now all too familiar with the artificial intelligence tool’s shortcomings — including its propensity to invent facts.

    Roberto Mata sued Colombian airline Avianca last year, alleging that a metal food and beverage cart injured his knee on a flight to Kennedy International Airport in New York. When Avianca asked a Manhattan judge to dismiss the lawsuit based on the statute of limitations, Mata’s lawyer, Steven A. Schwartz of the law firm Levidow, Levidow & Oberman, submitted a brief based on research done by ChatGPT, he said in an affidavit.

    While ChatGPT can be useful to professionals in numerous industries, including the legal profession, it has proved itself to be both limited and unreliable. In this case, the AI invented court cases that didn’t exist, and asserted that they were real. 

    The made-up decisions included cases titled Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines.

    The fabrications were revealed when Avianca’s lawyers approached the case’s judge, Kevin Castel of the Southern District of New York, saying they couldn’t locate the cases cited in Mata’s lawyers’ brief in legal databases.

    That’s when Schwartz filed his affidavit last week, saying he had “consulted” ChatGPT to “supplement” his legal research, and that the AI tool was “a source that has revealed itself to be unreliable.”

    He added that it was the first time he’d used ChatGPT for work and “therefore was unaware of the possibility that its content could be false.” 

    He said he even pressed the AI to confirm that the cases it cited were real. ChatGPT confirmed they were. Schwartz then asked the AI for its source.

    ChatGPT’s response? “I apologize for the confusion earlier,” it said. The AI then said the Varghese case could be located in the Westlaw and LexisNexis databases.

    Judge Castel has set a hearing regarding the legal snafu for June 8 and has ordered Schwartz and the law firm Levidow, Levidow & Oberman to argue why they should not be sanctioned.

    Attorneys for both parties did not immediately reply to CBS MoneyWatch’s request for comment.

  • How ChatGPT Could Help or Hurt Students With Disabilities

    User-friendly artificial-intelligence tools like ChatGPT are new enough that professors aren’t yet sure how they will shape teaching and learning. That uncertainty holds doubly true for how the technology could affect students with disabilities.

    On the one hand, these tools can function like personal assistants: Ask ChatGPT to create a study schedule, simplify a complex idea, or suggest topics for a research paper, and it can do that. That could be a boon for students who have trouble managing their time, processing information, or ordering their thoughts.

    On the other hand, fears about cheating could lead professors to make changes in testing and assessment that could hurt students unable to do well on, say, an oral exam or in-class test. And instead of using it as a simple study aid, students who lack confidence in their ability to learn might allow the products of these AI tools to replace their own voices or ideas.

    Such scenarios can, of course, apply to a wide range of students. You don’t need to have attention-deficit hyperactivity disorder to struggle with ordered thinking. Nor are students with severe anxiety the only ones to stress out over an oral exam. But teaching experts worry that in the rush to figure out, or rein in, these tools, instructors may neglect to consider the ways in which they affect students with disabilities in particular.

    “People are really focused, for good reasons, on academic integrity and academic honesty, and trying to redefine what that means with these new tools,” says Casey Boyle, director of the Digital Writing and Research Lab at the University of Texas at Austin, who chairs a working group on digital-content accessibility. But people are just now starting to talk about the opportunities and challenges around AI and disability.

    Students with disabilities have long faced challenges in the classroom, starting with the difficulty of securing accommodations that can help them learn better, such as receiving note-taking assistance or extra time to take tests, or being allowed to type instead of writing by hand. Boyle says he has heard of instructors moving from take-home writing assignments to timed writing exercises in class to keep students from using ChatGPT. Students who struggle with cognitive loads, or dyslexia, or are unable to focus are not going to perform well under those conditions.

    “Students with disabilities or students who require accommodations are already working uphill,” Boyle says. “When we overreact, what we’re doing is increasing the slope of those hills.”

    Welcome Assistance

    While professors are understandably concerned that students may use AI tools inappropriately, some teaching experts caution against banning their use entirely because there are ways in which AI tools could assist students with disabilities.

    • Students with mobility challenges may find it easier to use generative AI tools — such as ChatGPT or Elicit — to help them conduct research if that means they can avoid a trip to the library.
    • Students who have trouble navigating conversations — such as those along the autism spectrum — could use these tools for “social scripting.” In that scenario, they might ask ChatGPT to give them three ways to start a conversation with classmates about a group project.
    • Students who have trouble organizing their thoughts might benefit from asking a generative AI tool to suggest an opening paragraph for an essay they’re working on — not to plagiarize, but to help them get over “the terror of the blank page,” says Karen Costa, a faculty-development facilitator who, among other things, focuses on teaching, learning, and living with ADHD. “AI can help build momentum.”
    • ChatGPT is good at productive repetition. That is a practice most teachers use anyway to reinforce learning. But AI can take that to the next level by allowing students who have trouble processing information to repeatedly generate examples, definitions, questions, and scenarios of concepts they are learning.

    “I really want you as a student to do that critical thinking and not give me content produced by an AI,” says Manjeet Rege, a professor and chair of the department of software engineering and data science at the University of St. Thomas. But because students may spend three hours in a lecture session, he says, “at the end of it, if you would like to take aspects of that, put it into a generative AI model and then look at analogies and help you understand that better, yes, absolutely, that is something that I encourage.”

    Teaching experts point out that instructors can use AI tools themselves to support students with disabilities. One way to do that might be to run your syllabus through ChatGPT to improve its accessibility, says Thomas Allen, an associate professor of computer science and data science at Centre College, in Kentucky.

    Allen, who has ADHD, is particularly aware of the ways that an overly complex syllabus can stymie students. A 20-page document, for example, with lots of graphics could trip up students with a range of disabilities, such as people with low vision or those who have dyslexia, autism, or ADHD. “That’s using AI to solve a problem that we created,” he says, “by not having an accessible classroom to start with.”
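
    A small sketch of that syllabus idea follows: a reusable prompt template asking a chat model to restate course text in plain language. The instruction wording is a hypothetical starting point rather than a vetted accessibility rubric, and complete() stands for any text-in/text-out model wrapper you supply.

    ```python
    # Sketch of the accessibility idea above: a prompt template that asks a
    # chat model to rewrite course text in plain language. The instruction
    # wording is an illustrative assumption, not a vetted rubric.
    ACCESSIBILITY_PROMPT = (
        "Rewrite the following syllabus section in plain language: short "
        "sentences, concrete dates, and a bulleted list of requirements. "
        "Do not add or drop any requirement.\n\n{section}"
    )

    def accessible_rewrite(section: str, complete) -> str:
        """`complete` is any text-in/text-out LLM function (e.g., an API wrapper)."""
        return complete(ACCESSIBILITY_PROMPT.format(section=section))

    # Example with a trivial stand-in for a real model call:
    print(accessible_rewrite("Essays are due biweekly.", lambda p: p[:60] + "..."))
    ```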

    Disability-rights advocates have long encouraged instructors to use an approach called universal design for learning, or UDL. In a nutshell, this method enables students to engage with material in many ways. A common example is putting captioning on videos. Another is to provide text explanations of graphics. These strategies can benefit all learners, advocates note, creating more-inclusive classrooms.

    “Professors who have designed their courses with UDL at the heart of their pedagogy are going to be better prepared and more adaptive, not only to AI but any other weird and challenging things,” says Costa.

    Teaching experts caution that these tools have to be used with care. In simplifying a syllabus, or lecture notes, ChatGPT could change the meaning of words or add things that were not said, Allen notes. And it will reflect biases in the human-generated ideas and language on which it was trained. “You can’t trust the output as it is,” says Allen.

    Risks and Challenges

    A more-subtle challenge, teaching experts say, is that because students with disabilities can lack confidence as learners, they may be more likely than others to replace their own words and ideas with AI output, rather than use it as an assistant.

    Students have, for example, put first drafts of papers through ChatGPT to get feedback on the clarity of their language, the coherence of their arguments, and other measures of good writing. If the AI tools significantly change their words — and not necessarily in a way that an instructor would think is an improvement — a student who doesn’t have faith in their own work and sees the tool as an expert might defer to it. “The outputs I’ve been seeing are overly rational and overly linear and overly correct in a very unproductive way,” says Boyle.

    One way to mitigate that risk is to teach all students about the strengths and limitations of AI. That includes showing students how to write thoughtful and specific prompts to get the most useful feedback; discussing the ways that generative AI tools can produce confident-sounding, yet false or flat, writing; and reminding students that ChatGPT is a word predictor without actual intelligence, so it should not be treated as a replacement for a teacher, counselor, or tutor.

    “If you keep deferring to the technology, you won’t grow and develop because you’re leaning on this technology,” says S. Mason Garrison, an assistant professor of quantitative psychology at Wake Forest University. “This is a problem for anyone, but it could disproportionately impact folks who are genuinely worried their work isn’t good enough.”

    Disability-rights advocates point to two other challenges that could affect students with disabilities more than others.

    One is that if you use AI to help generate ideas or smooth out writing, your work may be more likely to get flagged by an AI detector. That’s a problem for a range of students, including those for whom English is not their first language. But a neurodivergent student might face particular issues in response, says Allen.

    “Sometimes we have difficulty looking people in the eye, and we fidget. It’s part of our social challenges,” he says. “If you get called in and some instructor or the dean says your writing has been flagged, tell me why you cheated. You’re fidgeting. You’re looking at your shoes. That may be interpreted as guilt. And maybe the student used it to take on the persona of a character and had a conversation but used that to inform their thinking. That’s a different use case from typing in the prompt, using what it spits out.”

    The other challenge is that many students don’t seek accommodations until they need them. And how many students have ever had to sit through an oral exam or write an essay by hand?

    “In all likelihood, the first time that happens to a student, they’re not going to be able to get the accommodation in time because they never thought they needed it,” says Garrison. “There’s probably going to be a lot of surprises like that. And for professors, it might not even occur to them that that’s something you put in your syllabus.”

    One central piece of advice teaching experts have is this: Include students, and particularly students with disabilities, when designing policies on AI use. It’s going to become more important as generative AI evolves and becomes embedded in other technologies.

    “It’s not all on you to figure this out and have all the answers,” says Costa. “Partner with your students and explore this together.”

    Beth McMurtrie

  • JP Morgan Is Developing AI to Select Your Investments | Entrepreneur

    The widespread popularity of ChatGPT has prompted other big companies like Google to launch their own AI services to compete. However, it’s not just tech companies entering the AI race.

    JP Morgan Chase is developing a “ChatGPT-like” service that will use artificial intelligence to help customers select investments, CNBC reported.

    According to a filing in New York, JP Morgan applied to trademark the product “IndexGPT.” The service will use artificial intelligence and cloud computing software to analyze and select securities “tailored to customer needs,” the filing states.

    Related: Microsoft Revealed Major AI Updates at Its Developer Conference — Here’s What You Need to Know

    While it’s unclear when the product will launch, the trademark could signal that it might be in the near future.

    “Companies like JP Morgan don’t just file trademarks for the fun of it,” Josh Gerben, a trademark attorney in Washington, D.C., told CNBC. “The filing includes a sworn statement from a corporate officer essentially saying, ‘Yes, we plan on using this trademark.’”

    IndexGPT may also only be one of several AI products in development at JP Morgan. During the company’s annual investor conference on Monday, Lori Beer, global technology chief at the company, said that the bank is testing “a number of use cases” for AI technology.

    Related: Mike Rowe Says the Dirtiest Jobs Are Safe From the AI Revolution: ‘I Haven’t Seen Any Plumbing Robots’

    “We are actively evaluating opportunities with large language models and see great potential in that space,” she added.

    Entrepreneur has reached out to JP Morgan for comment.

    Madeline Garfinkle

  • Silicon Valley is knowingly violating A.I. ethical principles. Society can’t respond if we let disagreements poison the debate

    With criticism of ChatGPT much in the news, we are also increasingly hearing about disagreements among thinkers who are critical of A.I. While debating about such an important issue is natural and expected, we can’t allow differences to paralyze our very ability to make progress on A.I. ethics at this pivotal time. Today, I fear that those who should be natural allies across the tech/business, policy, and academic communities are instead increasingly at each other’s throats. When the field of A.I. ethics appears divided, it becomes easier for vested interests to brush aside ethical considerations altogether.

    Such disagreements need to be understood in the context of how we reached the current moment of excitement around the rapid advances in large language models and other forms of generative A.I.

    OpenAI, the company behind ChatGPT, was initially set up as a non-profit amid much fanfare about a mission to solve the A.I. safety problem. However, as it became clear that OpenAI’s work on large language models was lucrative, OpenAI pivoted to a for-profit structure. It deployed ChatGPT and partnered with Microsoft, which has consistently sought to depict itself as the tech corporation most concerned about ethics.

    Both companies knew that ChatGPT violates, for example, the globally endorsed UNESCO AI ethical principles. OpenAI even refused to publicly release a previous version of GPT, citing worry about much the same kinds of potential for misuse we are now witnessing. But for OpenAI and Microsoft, the temptation to win the corporate race trumped ethical considerations. This has nurtured a degree of cynicism about relying on corporate self-governance or even governments to put in place necessary safeguards.

    We should not be too cynical about the leadership of these two companies, which are trapped between their fiduciary responsibility to shareholders and a genuine desire to do the right thing. They remain people of good intent, as are all raising concerns about the trajectory of A.I.

    This tension is perhaps best exemplified in a recent tweet by U.S. Senator Chris Murphy (D-CT) and the response by the A.I. community. In discussing ChatGPT, Murphy tweeted: “Something is coming. We aren’t ready.” And that’s when the A.I. researchers and ethicists piled on. They proceeded to criticize the Senator for not understanding the technology, indulging in futuristic hype, and focusing attention on the wrong issues. Murphy hit back at one critic: “I think the effect of her comments is very clear, to try to stop people like me from engaging in conversation, because she’s smarter and people like her are smarter than the rest of us.”

    I am saddened by disputes such as these. The concerns that Murphy raised are valid, and we need political leaders who are engaged in developing legal safeguards. His critic, however, is not wrong in questioning whether we are focusing attention on the right issues.

    To help us understand the different priorities of the various critics and, hopefully, move beyond these potentially damaging divisions, I want to propose a taxonomy for the plethora of ethical concerns raised about the development of A.I. I see three main baskets: 

    The first basket has to do with social justice, fairness, and human rights. For example, it is now well understood that algorithms can exacerbate racial, gender, and other forms of bias when they are trained on data that embodies those biases.

    The second basket is existential: Some in the A.I. development community are concerned that they are creating a technology that might threaten human existence. A 2022 poll of A.I. experts found that half expect A.I. to grow exponentially smarter than humans by 2059, and recent advances have prompted some to bring their estimates forward.

    The third basket relates to concerns about placing A.I. models in decision-making roles. Two technologies have provided focal points for this discussion: self-driving vehicles and lethal autonomous weapons systems. However, similar concerns arise as A.I. software modules become increasingly embedded in control systems in every facet of human life.

    Cutting across all these baskets is the potential misuse of A.I., such as spreading disinformation for political and economic gain, and the two-century-old concern about technological unemployment. While the history of economic progress has primarily involved machines replacing physical labor, A.I. applications can replace intellectual labor.

    I am sympathetic to all these concerns, though I have tended to be a friendly skeptic towards the more futuristic worries in the second basket. As with the above example of Senator Murphy’s tweet, disagreements among A.I. critics are often rooted in the fear that existential arguments will distract from addressing pressing issues about social justice and control.

    Moving forward, individuals will need to judge for themselves who they believe to be genuinely invested in addressing the ethical concerns of A.I. However, we cannot allow healthy skepticism and debate to devolve into a witch hunt among would-be allies and partners.

    Those within the A.I. community need to remember that what brings us together is more important than differences in emphasis that set us apart.

    This moment is far too important.

    Wendell Wallach is a Carnegie-Uehiro Fellow at Carnegie Council for Ethics in International Affairs, where he co-directs the Artificial Intelligence & Equality Initiative (AIEI). He is Emeritus Chair of the Technology and Ethics study group at the Yale University Interdisciplinary Center for Bioethics.

    The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

    Wendell Wallach

  • I Asked ChatGPT How to Recession-Proof My Business | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    Entrepreneurs everywhere always have to look out for the dreaded “r-word.” They come around every so often and wreak havoc on businesses by reducing sales, dropping revenues and cutting employment. Of course, we’re talking about recessions: a natural, but certainly painful, part of the economic cycle.

    While there’s no way to completely insulate a company from the effects of recessions, there are steps you can take to help mitigate them.

    As a marketing and technology entrepreneur, I was curious to learn more about how to “recession-proof” my businesses. That’s why I asked ChatGPT, the world’s leading large language model (LLM) and the artificially intelligent darling of Silicon Valley.

    Below, I’ll share my conversation with ChatGPT about how entrepreneurs can protect their businesses from recessions and, ultimately, share my own thoughts on these ideas.

    Related: 9 Smart Ways to Recession-Proof Your Business (Fast)

    The prompt

    I opened our conversation by asking the following question in the form of a written prompt:

    How can I make my business recession-proof?

    Then, ChatGPT responded with the following steps after providing a brief disclaimer that no business can completely protect itself from recessions.

    ChatGPT’s “recession-proof” entrepreneurship formula

    Below are, verbatim, the seven recommendations offered by ChatGPT to help businesses weather the storm during recessions:

    1. Build a strong cash reserve.

    2. Diversify your offerings.

    3. Focus on efficiency.

    4. Maintain good customer relationships.

    5. Keep an eye on your finances.

    6. Prepare for the worst.

    7. Stay flexible.

    My thoughts on ChatGPT’s formula

    Personally, I think ChatGPT’s advice is excellent, and I generally agree with each point. However, I have slight qualifications for some. Below, I’ll share my thoughts on each:

    1. Build a strong cash reserve

    To make it through down periods, you need to have cash saved for a rainy day. This is as true for businesses as it is for your personal finances. However, I’d go a step further and recommend holding non-cash savings as well to protect against inflationary effects. Assets such as gold and other precious metals, or even real estate, can serve as highly resilient stores of wealth during recessions — although they’re far less liquid than cash on hand.

    2. Diversify your offerings

    This is a big one. Ensure you don’t count on a single product or service to carry your business. Diversify your revenue streams by offering several products or services so that if one gets hit badly by the recession, another can keep your business afloat.

    For example, a car dealership could diversify its offerings by adding commercial vehicles and trucks to its preexisting lineup of passenger vehicles.

    3. Focus on efficiency

    This one deserves a caveat. Prepare for a lean, hyper-efficient operation if economic circumstances require it, but don’t single-mindedly focus on efficiency by automating, downsizing and streamlining each and every task. Sometimes customer satisfaction and product refinement require a larger crew and more time dedicated to non-core functions, so allow space for that as well.

    4. Maintain good customer relationships

    This one is a given. Longstanding, loyal customers are far more likely to stick around during recessionary periods if you offer friendly, high-quality service. I suggest adding deal-sweeteners and discounts to repeat customers to keep them coming back.

    5. Keep an eye on your finances

    Create a budget, and stick to it. ChatGPT emphasizes the importance of monitoring your cash flow, and it’s right. If cash inflows aren’t leaving enough left over to cover all expenses while saving for a rainy day, you need to reevaluate your expenses and re-budget accordingly.

    6. Prepare for the worst

    Actively plan for an upcoming recession. In modern history, recessions have occurred every 3.25 years on average. Good entrepreneurs should use this as a baseline for when they should anticipate periodic business slowdowns, and contingency plans should account for these. This way, you can respond quickly if economic events lead to decreased sales.

    7. Stay flexible

    Always be willing to adapt. Market conditions can change suddenly, and savvy business owners need to be prepared for that by being flexible and able to pivot when necessary.

    Related: 5 Ways to Protect Your Business From a Recession

    Overall, ChatGPT presents a great set of principles to abide by if you want your business to be more resilient to recessions. But it’s worth reiterating that no business strategy is “recession-proof,” as deep, economy-wide events can and will have unmitigable effects on businesses of all kinds.

    Yet, keeping a flexible and responsible approach to business management — as ChatGPT suggests above — would certainly make your company more likely to survive an economic downturn than one that takes no such precautions.

    Amine Rahal

  • Why People Fear Generative AI — and What to Do About It | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    People are scared of generative AI, but the future is safe and bright if you prepare now.

    I recently published an expert roundup on the benefits of generative AI. Some people worried about bias and political agendas, while others thought jobs would disappear and technocrats would hoard all wealth. Fortunately, we can mitigate risks through transparency, corporate governance and educational transformation.

    Below, I’ll discuss the fears and dangers of generative AI and potential solutions for each:

    Biased algorithms can shape public opinion

    Bias is inherent in every system. Editors have always selected stories to publish or ignore. With the advent of the internet, search engines rewarded publishers for optimized content and advertising, empowering a class of search engine marketers. Then, social media platforms developed subjective quality standards and terms of service. Additionally, bias can arise from algorithm training with disproportionate demographic representation. As such, we’ll face the same problems, solutions and debates over safety and privacy with generative AI that we already face in other systems.

    Some people believe in legislative solutions, but those are influenced by lobbyists and ideologues. Instead, consider competition among ChatGPT, Bard, Llama and other generative AIs. Competition sparks innovation, where profits and market share drive unique approaches. As demand increases, the job market will explode with demand for algorithm bias auditors, similar to the growth of diversity training in human resources.

    It’s challenging to find the source of bias in a black-box algorithm, where users only see the inputs and outputs of the system. However, open-source code bases and training sets will enable users to test for bias in the public space. Coders may develop transparent white-box models, and the market will decide a winner.
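    To make the idea of a public bias audit concrete, here is a minimal sketch of how an outside tester might probe a black-box model, assuming only input/output access. Everything here is hypothetical: `query_model` is a stub standing in for whatever scoring model is under audit, and the 0.1 flag threshold is arbitrary.

    ```python
    # Minimal black-box bias probe: score otherwise-identical prompts that
    # differ only in a demographic term, then flag large gaps.
    from statistics import mean

    TEMPLATE = "The {group} engineer asked for a raise."
    GROUPS = ["young", "older", "male", "female"]

    def query_model(prompt: str) -> float:
        """Stand-in for the model under audit (e.g., a sentiment or toxicity
        scorer). Returns a score in [0, 1]; replace with a real model call."""
        return 0.5  # stub value so the sketch runs end to end

    def audit(template: str, groups: list[str]) -> dict[str, float]:
        # Same sentence, with only the demographic term swapped.
        return {g: query_model(template.format(group=g)) for g in groups}

    scores = audit(TEMPLATE, GROUPS)
    baseline = mean(scores.values())
    for group, score in scores.items():
        flag = "FLAG" if abs(score - baseline) > 0.1 else "ok"
        print(f"{group:>8}: {score:.2f} ({flag})")
    ```

    The same harness scales to thousands of templates and demographic terms. The point is that only inputs and outputs are needed, which is exactly the black-box setting described above.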

    Related: The 3 Principles of Building Anti-Bias AI

    Generative AI could destroy jobs and concentrate wealth

    Many people fear that elite technocrats will replace workers with robots and accumulate wealth while society suffers. Consider, though, how technology has replaced jobs for centuries. The cotton gin replaced field workers who toiled in the hot sun. Movable type replaced scribes who hand-wrote books, and ecommerce websites displaced many physical stores.

    Some workers and businesses suffered from these transformations. But people learned new skills, and employers hired them to fill talent gaps. We will need radically different education and training to survive. Some people won’t upskill in time, and we have an existing social safety net for them.

    Historically, we valued execution over ideas. Today, ideation may set humans apart from machines, where “ideators” replace knowledge workers. Our post-AI world will require critical thinkers, creatives and others to innovate and define ideas for AIs to execute. Quality assurance professionals, algorithm trainers and “prompt engineers” will have a vibrant future, too.

    There will also be a market for “human-made” products and services. People will hunger for a uniquely human touch informed by emotional intelligence, especially in the medical and hospitality industries. An episode of 60 Minutes ended with “100% human-generated content,” and others will follow.

    Generative AI may create an influx of spam

    Many marketers saw ChatGPT as a shortcut to content creation, publishing its output verbatim. This risky technique is just a cheap, fast, low-quality form of ghostwriting.

    In contrast, generated content may make digital marketing more equitable by reducing ghostwriting costs for bootstrapped entrepreneurs. The key is understanding Google E-E-A-T, which stands for Experience, Expertise, Authoritativeness and Trustworthiness. Your Google reputation and ranking hinge on your published work. So, people who improve and customize generated content will prosper, while Google flags purveyors of “copy-paste” content as spammers.

    Rogue AI could pose cybersecurity risks

    A rogue coder could create harmful directives for an AI to damage individuals, software, hardware and organizations. These threats include malware, phishing schemes and other attacks. But that’s already happening: even before the internet, we battled computer viruses targeting people, organizations and equipment. For-profit antivirus providers have served this market need to keep us safer.

    Zero-trust platforms like blockchain may detect anomalies and mitigate cybersecurity risks. In addition, companies will create standard operating procedures (SOPs) to protect their systems — and profits. Therefore, new jobs will materialize to develop new processes, governance, ethics and software.
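    As a rough illustration of the kind of anomaly detection such SOPs might codify (a generic statistical check, not any particular platform’s method), a baseline-and-threshold rule is often the first line of defense. The login counts below are invented for the example.

    ```python
    # Toy anomaly check: flag activity that deviates sharply from a baseline.
    # The counts are invented; a real SOP would feed in live telemetry.
    from statistics import mean, stdev

    daily_logins = [12, 14, 11, 13, 15, 12, 14]  # last week's counts (example data)
    today = 41

    mu, sigma = mean(daily_logins), stdev(daily_logins)
    z_score = (today - mu) / sigma

    # Anything more than three standard deviations from the mean gets flagged.
    if abs(z_score) > 3:
        print(f"Anomaly: today's count is {z_score:.1f} sigma from baseline. Investigate.")
    else:
        print("Within normal range.")
    ```

    Real systems layer far more sophistication on top, but the pattern of baseline, deviation and escalation is the same one a governance process would document.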

    Related: Why Are So Many Companies Afraid of Generative AI?

    Stolen identities and reputation attacks could be imminent

    People already create deepfake videos of celebrities and politicians. Many are parodies, but some are malicious. Soon, humans will be unable to detect them. Then again, we’ve had the ability to fake images since Photoshop was released, and teams are already in place at social media companies and news outlets to address misinformation and fake images.

    Regulations and policing will never prevent the creation of fake content. Nefarious characters will find tools on the black market and the dark web. Fortunately, there are solutions in the private sector already.

    Social media platforms will continue to block presumably fake content and stolen identities. And more solutions will come to fruition. Tools can already detect generated content and continue to improve. Some may become integrated with internet browsers that start issuing fake content warnings. Or celebrities may wear timestamped, dynamic QR codes for authentication when filming.
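    The timestamped QR idea sketched above could work much like a one-time password: sign the current time with a shared secret so a verifier can reject stale or forged codes. Below is a hypothetical sketch using only Python’s standard library; the secret-distribution step and the QR rendering note at the end are assumptions, not a production protocol.

    ```python
    import base64, hashlib, hmac, time

    SECRET = b"replace-with-a-real-shared-secret"  # assumption: distributed out of band

    def mint_token(identity: str) -> bytes:
        """Sign the current time so a verifier can confirm freshness and origin."""
        message = f"{identity}|{int(time.time())}".encode()
        signature = hmac.new(SECRET, message, hashlib.sha256).digest()
        return message + b"|" + base64.urlsafe_b64encode(signature)

    def verify_token(token: bytes, max_age_seconds: int = 30) -> bool:
        message, sig_b64 = token.rsplit(b"|", 1)
        expected = hmac.new(SECRET, message, hashlib.sha256).digest()
        if not hmac.compare_digest(base64.urlsafe_b64decode(sig_b64), expected):
            return False  # signature forged or key mismatch
        timestamp = int(message.split(b"|")[1])
        return time.time() - timestamp <= max_age_seconds  # reject stale codes

    token = mint_token("camera-feed-01")
    print(verify_token(token))  # True while the token is fresh
    # The token could then be rendered as a QR image for on-camera display,
    # e.g. with the third-party qrcode package: qrcode.make(token).save("frame.png")
    ```

    Because the token expires in seconds, copying a QR code from old footage would fail verification, which is the property a “dynamic” authentication mark needs.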

    The singularity may finally arrive

    The thought of a conscious AI megalomaniac crosses sci-fi geek minds everywhere. Find comfort knowing that it may already exist. After all, we can’t detect biological or technological consciousness. Yet, consciousness may emerge from complex systems like generative AI. Indeed, the simulation hypothesis suggests we’re in a simulation that an AI controls already.

    Related: Addressing the Undercurrent of Fear Towards AI in the Workforce

    History is full of dangerous technology. Warren Buffett compared AI to the atom bomb. If he’s right, then we’re as safe as we have been since 1945, when the U.S. government dropped nuclear bombs in war for the first and last time. Systems are in place to mitigate that risk, and new systems will arise to keep AI safe, too. Our future will remain bright if enough people pursue cybersecurity and related fields. With that in mind, learn to use this technology and prepare for the shift towards AGI.

    Dennis Consorte

    Source link

  • Apple Bans Employee ChatGPT Use Over Data, Privacy Concerns | Entrepreneur

    Apple has prohibited employees from using ChatGPT and other artificial intelligence tools over fears of leaking confidential information, The Wall Street Journal reported.

    According to an internal document viewed by the outlet, as well as individuals familiar with the matter, Apple has restricted use of the prompt-driven chatbot along with Microsoft’s GitHub Copilot, which uses AI to help write software code.

    The company fears that the AI programs could release confidential data from Apple, per the outlet.

    OpenAI (the creator of ChatGPT) stores all chat history between the chatbot and its users to train the system and improve accuracy over time; conversations are also subject to review by OpenAI moderators for possible violations of the company’s terms of service.

    Related: Walmart Leaked Memo Warns Against Employees Sharing Corporate Information With ChatGPT

    While OpenAI released an option last month that lets users turn off chat history, the new feature still allows OpenAI to monitor conversations for “abuse,” retaining them for up to 30 days before deleting them permanently.

    A spokesperson for Apple told the WSJ that employees who want to use ChatGPT should use Apple’s own internal AI tool instead.

    Apple is not the first big company to ban the use of ChatGPT. Earlier this year, JPMorgan Chase, Goldman Sachs, and Verizon all banned employee use of the AI-powered chatbot over similar fears of data leakage.

    Earlier this week, OpenAI CEO Sam Altman spoke before Congress about the pressing need for government regulation of AI development, calling it “crucial.”

    Related: ‘If This Technology Goes Wrong, It Can Go Quite Wrong’: OpenAI CEO Sam Altman Speaks to Lawmakers About AI Risks, Says Government Intervention Is ‘Crucial’

    Madeline Garfinkle

    Source link

  • Why Entrepreneurs Should Embrace Generative AI | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    Shocking though it may seem, ChatGPT’s prototype launched less than half a year ago. In the short time since, the platform has transformed how marketers, creators and inventive students view content creation. Depending on whom you ask, and on how these tools are embraced, generative AI platforms like ChatGPT and Google’s newly released Bard will either improve the way we do business or put us out of business.

    One year ago, generative AI art approximating colleagues’ facial characteristics littered our LinkedIn feeds. Now we’re discussing the future of work as we know it … and it will likely be determined by those same behind-the-scenes algorithms.

    I encourage you to consider not what generative AI can take away but what it can provide. While it’s only natural for a technology that solves the unsolvable to cause a certain amount of trepidation, it’s not a reason to try to ignore or suppress its capabilities. In fact, consider for a moment the major technological advancements of past centuries and the significant impact they had on GDP (Gross Domestic Product) over the years.

    For example, the Industrial Revolution in the late 18th and early 19th centuries led to a massive increase in productivity and output in manufacturing industries, which contributed to the growth of GDP. Similarly, the advent of the internet and digital technologies in the late 20th century revolutionized communication and information sharing, fueling many new industries and businesses and a further increase in GDP. As we once again face a revolutionary technology, I contend that we should spend less time on what it will take away, and more time on how we will leverage it to create and accomplish more.

    Related: Why Are So Many Companies Afraid of Generative AI?

    AI has already enabled remarkable innovations

    Enterprises handle a massive amount of data. From CRMs to APIs to consumer-facing technologies and beyond — everything in the enterprise tech stack generates an enormous volume of insight-rich data. Globally, the World Economic Forum predicts we will generate about 463 exabytes of data daily as early as 2025. For context, an exabyte is 1,000 bytes raised to the sixth power (a quintillion bytes). In other words, it’s more data than we as humans know what to do with or how to maximize the value of.

    While our species is incapable of fully understanding data at this scale, much less extracting value from it, non-generative AI applications have been heavily leveraged to categorize, analyze, correlate and draw conclusions from such data at a phenomenal rate. Generally speaking, this level of efficiency-driving data science and machine learning was quickly embraced and adopted across numerous industries from business to healthcare. Because such non-generative AI was not seen as a threat to human competencies, but rather as a tool to help us achieve more, we have made tremendous progress by leveraging such technologies.

    Such is not the case with the new form of generative AI that has now emerged, whose capabilities overlap far more with those of humans. In fact, generative AI is making citizens and business leaders alike far more cautious about its applications, and about the threat it may pose to jobs, industries and our current way of life, than they were about its non-generative predecessor. While some of these concerns are certainly founded, it is imperative that we look not only at what we may lose but at what we stand to gain by embracing this new technology — and even what we stand to lose by not embracing it.

    Related: How ChatGPT and Generative AI Can Transform the Way You Run Your Business

    AI has the potential to transform workplace tech

    One example of how generative AI can change our lives, or at least the course of our workday, is its ability to transform our relationship with the everyday software around us.

    Today, humans use software to accomplish or be more efficient at certain tasks. In most cases, we are required to physically interact with the software, digest the information it gives us, decide on the tasks and strategies we will implement using said software, then, of course, use the software itself to execute those tasks. While the software is ultimately helpful, there is no ignoring that to reap its full benefit, we must invest time and effort that is taken away from other core parts of our day.

    But consider for a moment a world where such a relationship is antiquated, where software is no longer a tool that we have to spend time using but rather a partner that gives us time back by doing things for us. Generative AI is one of the keys to realizing this new relationship with software. In that world, time-consuming decisions and tasks across organizations and society would be completed automatically on our behalf, giving us time back to do more of what we are passionate about and great at. The efficiencies gained, not to mention the optimizations leveraged, will not only transform our output as a society; the learnings and further innovations that result will transform our economies, technologies and ways of life. This is where we at SOCi see software going and where we are investing our time and leveraging new AI technologies.

    Related: The Perfect Blend: How to Successfully Combine AI and Human Approaches to Business

    How do we move toward an AI-based future?

    My answer to this question is simple: optimistically, but cautiously. Although I’ve discussed the positives of AI maturity at length, I must also emphasize what AI can’t accomplish.

    AI, generative or otherwise, is a powerful tool that can be leveraged, but it is seldom the end product. It is our responsibility to train the tool to be effective, to integrate it into the workflows and processes we need to achieve our goals, and to continue to consider the needs of our customers. While the AI models flooding the market today are powerful, they still need direction, application and that “human touch” to be rendered into specific solutions suitable for our businesses.

    It is also important to note that while AI may be leveraged to provide insights and complete certain tasks, it does not (yet) “think” as humans do. AI models are built to process data and deliver outputs but not to produce original thoughts and complex solutions. For the time being, humans will still be at the helm of crafting such strategies and solutions to larger societal or organizational challenges.

    In the end, it will be the innovators among us who accept these challenges and embrace the benefits of AI who dictate the advancements we make and the transformations our way of life undergoes. We at SOCi are deeply passionate about being at the forefront of this movement, and specifically about authoring the transition in the relationship our customers have with marketing software: from a tool they use to accomplish meaningful tasks to a co-marketer that executes thousands of data-driven decisions and tasks for them, delivering real-world results and giving them time back to do what they are passionate about.

    Afif Khoury

    Source link

  • Congress holds hearing on artificial intelligence

    The head of the company that operates ChatGPT spoke at a Senate hearing on artificial intelligence Tuesday. He acknowledged AI could pose risks and encouraged Congress to pass laws creating effective guardrails for the emerging technology. Scott MacFarlane reports.

    Source link

  • Father of ChatGPT: AI could

    Sam Altman, the CEO of the company behind ChatGPT, expressed concern that artificial intelligence could “go quite wrong” at a Senate committee hearing on Tuesday focusing on how to regulate the rapidly developing field of AI.

    Altman, who leads San Francisco-based OpenAI, said in response to a question about his greatest fear regarding AI that the technology and industry could “cause significant harm to the world” unless it is properly regulated. 

    “If this technology goes wrong, it can go quite wrong,” he told the Senate Judiciary’s Subcommittee on Privacy, Technology and the Law. “We want to be vocal about that. We want to work with the government to prevent that happening. But we have to be clear-eyed about it.”

    Asked by Sen. Josh Hawley, R-Mo., about the risk that so-called large language models like ChatGPT, which can already predict public opinion with accuracy, could be used to manipulate people, such as undecided voters, Altman replied, “I’m nervous about it.” He also drew a parallel with the emergence of Photoshop in the late 1990s and early 2000s, when many people were initially fooled by photoshopped images before developing an understanding of image manipulation. 

    [Photo: “If this technology goes wrong, it can go quite wrong,” OpenAI CEO Sam Altman said Tuesday, May 16, 2023, at a Senate hearing on how to regulate artificial intelligence. Nathan Posner/Anadolu Agency via Getty Images]


    “This will be like that on steroids,” he said.

    AI a threat to democracy?

    The congressional hearing covered a range of concerns, and senators from both parties broadly agreed that AI needed regulation, without reaching firm conclusions on how to do that. Sen. Chris Coons, Democrat of Delaware, fretted that AI models developed in China would promote a pro-China “point of view,” and pushed for the creation of AI that would promote “open markets and open societies.”

    Hawley later rattled off a list of potential negative effects from AI: “Loss of jobs, loss of privacy, manipulation of personal behavior, manipulation of personal opinion and destabilization of elections in America,” he said.

    But Altman expressed optimism that AI would create more jobs than it destroys, saying, “We’re very optimistic that there will be fantastic jobs in the future and that current jobs can be much better,” and adding that ChatGPT was “good at doing tasks, not jobs.”

    IBM Chief Privacy and Trust Officer Christina Montgomery, who also testified at the hearing, used herself as an example of AI creating new jobs, noting that she heads a team of AI governance professionals.

    Indeed, the technology is already disrupting some fields. Earlier this month, IBM’s chief executive told Bloomberg the company would pause hiring for back-office roles that could be done by AI, suggesting that roughly a third of those positions, or about 7,800 jobs, could be replaced over time.


    [Video: Writers strike focuses on whether AI could take jobs from screenwriters, 05:40]

    Altman and AI researcher Gary Marcus expressed support for government regulation of AI. That could include creating a new agency to oversee the technology, requiring companies to make AI models and their underlying data public, requiring AI creators to obtain a license or demonstrate their products’ safety before public release, and mandating independent audits of AI models. Montgomery advocated a more narrowly focused approach in which the government would regulate only certain “use cases” for artificial intelligence.

    “These things will have learned from us”

    The rapid emergence of “generative AI” — tools that can put out reams of writing or visual images, helping doctors communicate with their patients and real estate pros quickly write listings, for example — has heightened public concerns about the tech. AI pioneer Geoffrey Hinton, who recently left his job at Google to speak freely about the technology, recently told a conference that AI could pose a range of threats.

    “These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people,” Hinton said, according to the Associated Press. “Even if they can’t directly pull levers, they can certainly get us to pull levers.” 

    In March, a number of prominent CEOs and researchers signed a letter asking for a six-month moratorium on developing major AI models. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” they asked.

    At Tuesday’s hearing, however, the consensus was that the explosion in AI would continue apace, as companies and investors pour billions of dollars into the technology.

    “There’s no way to stop this moving forward,” said Sen. Cory Booker, D-N.J. “There will be no pause. There’s no enforcement body to enforce a pause. Forgive me for being skeptical, nobody’s pausing.”

    Source link

  • AI Regulation Is ‘Crucial,’ Says OpenAI ChatGPT CEO | Entrepreneur

    Sam Altman, CEO of OpenAI (creator of the prompt-driven chatbot ChatGPT), knows that the acceleration of artificial intelligence and its potential risks are unsettling, at least to some.

    Altman spoke to the Senate Judiciary subcommittee on Tuesday in his first appearance before Congress, and said it is “crucial” that lawmakers implement safety standards and regulations for AI to “mitigate the risks of increasingly powerful models.”

    “We understand that people are anxious about how it can change the way we live. We are, too,” Altman said. “If this technology goes wrong, it can go quite wrong.”

    During the nearly three-hour hearing, Altman, along with two other witnesses (Professor Emeritus Gary Marcus and IBM’s Chief Privacy and Trust Officer, Christina Montgomery), spoke with nearly 60 lawmakers about the potential dangers of AI left unchecked, from job disruption to intellectual-property theft.

    “My worst fear is we cause significant harm to the world,” he said.

    One suggested move, Altman said, is for lawmakers to implement a licensing system for companies developing powerful AI systems. Lawmakers would outline a series of safety standards that companies must meet to be granted a license, and would also have the power to revoke it should a company fail to comply with those standards.

    As for the looming question of how AI will disrupt the job market, Altman agreed that the technology has the potential to eliminate many positions. However, he doesn’t think that means new jobs won’t be created as well.

    Related: Goldman Sachs Says AI Could Replace The Equivalent of 300 Million Jobs — Will Your Job Be One of Them? Here’s How to Prepare.

    “I think, [AI can] entirely automate away some jobs,” he said. “And it will create new ones that we believe will be much better.”

    In March, tech magnates like Elon Musk called for a six-month pause on AI development in an open letter. On Tuesday, in response to subcommittee member Sen. Josh Hawley’s question to the witnesses about the letter, Altman began by saying that the “frame of the letter is wrong,” and that what matters are the audits and safety standards the technology must pass before training. He then added, “If we pause for six months, I’m not sure what we do then, do we pause for another six?”

    Altman also stated that before OpenAI deployed GPT-4, the company waited more than six months to release it to the public, and that the standards OpenAI has developed and used before deploying technology are the direction the company “wants to go in,” rather than “a calendar clock pause.”

    The chair of the subcommittee, Sen. Richard Blumenthal, also weighed in and said that implementing a moratorium and “sticking our head in the sand” is not a viable solution. “The world won’t wait,” he said, adding that “safeguards and protections, yes, but a flat stop sign? I would be very worried about that.”

    It remains to be seen what actions, if any, the government will take on AI, but in closing remarks, Blumenthal said that “hard decisions” will need to be made, but, for now, companies developing AI should take a “do no harm” approach.

    Related: Google CEO Sundar Pichai Says There Is a Need For Governmental Regulation of AI: ‘There Has To Be Consequences’

    Madeline Garfinkle

    Source link