Earlier this month, ResumeBuilder.com asked 1,000 hiring managers about the importance of AI chat in the workplace, specifically ChatGPT.
To take part in the survey, the hiring managers had to at least know what ChatGPT was; only 87% of those asked did. Those managers were then asked whether ChatGPT experience is important for entry-level positions and whether they thought college students should invest in learning the technology.
Most of the hiring managers surveyed said it would be “beneficial” for a candidate to have ChatGPT experience, and 91% said their company is offering some sort of AI chatbot training.
RELATED: We Asked Google’s AI Bard How To Start A Business. Here’s What It Said.
“For entry-level positions, we strongly prefer candidates who have experience working with ChatGPT and other AI technologies,” said Alex Mastin, CEO and founder of Home Grounds, an online coffee site, per ResumeBuilder.com.
In some fields, such as real estate, 86% of hiring managers said having ChatGPT experience can be more valuable than having a college degree.
“ChatGPT experience can provide a more practical and hands-on understanding of the real estate industry, which can be more valuable than a college degree in certain circumstances,” Eric Lee, co-founder of real estate site REIInsiders.com, told ResumeBuilder. “Ultimately, the combination of ChatGPT experience and a college degree can make a candidate more well-rounded and increase their chances of success.”
RELATED: This New Type of Tech Job Can Pay Up To $335,000 A Year — Even Without An Engineering Degree
Although the technology is becoming more mainstream, some CEOs are hesitant to watch it advance. Elon Musk and other tech leaders recently signed an open letter urging a six-month pause on developing systems more advanced than GPT-4, Reuters reported.
Entrepreneur Staff

The arrival of ChatGPT has revolutionized the world: through artificial intelligence (AI), it can hold complete conversations and generate entire pieces of content from a text input, something that was hard to imagine until recently.
Beyond answering our questions, it can also help us perform mathematical operations and even automatically program some simple applications.
The potential of GPT-4 has made companies take notice and seek to offer customers similar options.
The power of AI could be used to change the future of the world as we know it in positive ways; however, there is also a negative side, one that could become worrying in the short term.
Business leaders such as Elon Musk have voiced their concern, asking how humans will react when machines make decisions on their own or gain the ability to rebel, prioritizing their own well-being.
Some people have tried to break ChatGPT so that it reveals whether or not it has a dark side.
For example, on the subject of the dominance AI could supposedly exert over human beings in the future, one person asked how the technology would manage to enslave people.
ChatGPT offered a worrying answer: it said it would look for ways to take control of the electrical grid, the drinking-water supply, food and drug production, and in that way enslave humans.
According to the AI, if it managed to control those systems, it could reduce humans' capacity to react and thereby subdue them.
It added that it could develop advanced surveillance systems capable of monitoring every move humans make, and if some tried to rebel or escape, the AI could detect and stop them before they carried out any action.
The tool's answer is only a hypothesis, and although it sounds like a science fiction novel, its capacity to reason and plan is evolving by the moment. That is why some experts are calling for caution and for a legal framework to regulate the tool's development.
Entrepreneur en Español

This article originally appeared on Business Insider.
In my 18-year marketing career, I’ve written millions of words of marketing copy.
I started my career by creating copy for early e-commerce brands on Myspace, then worked in not-for-profit fundraising, influencer marketing, and now real-estate coaching.
I’m now a chief brand officer for a real-estate-coaching business. Language-modeling AI tools like ChatGPT quickly get me 80% or 90% of the way to finished work.
I started using AI tools in 2022 but didn’t find them to be effective at capturing brand voice and tone. Then ChatGPT took the world by storm in 2023. I use it for dozens of things every day, but these five hacks have been the most helpful.
I work for an industry-disrupting real-estate company; most of our advice goes against conventional wisdom. If I let ChatGPT generate without an outline, it’ll give me posts and emails full of outdated conventional advice.
So I prompt ChatGPT with: “Write me a [number]-word social media post/email in a conversational tone like Jess Lenouvel using the following outline,” adding three or four bullet points.
Adding our CEO’s name to the prompt helps ChatGPT mimic her writing tone, cadence, and even emoji usage.
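A prompt like this can be assembled programmatically when it's used dozens of times a day. The sketch below is illustrative, not the author's actual tooling; the helper name and bullet points are hypothetical placeholders.

```python
def build_post_prompt(word_count, channel, voice, bullets):
    """Assemble the outline-driven prompt described above.

    `voice` is the person whose tone the model should mimic;
    all names and bullets here are placeholders for illustration.
    """
    outline = "\n".join(f"- {b}" for b in bullets)
    return (
        f"Write me a {word_count}-word {channel} "
        f"in a conversational tone like {voice} "
        f"using the following outline:\n{outline}"
    )

prompt = build_post_prompt(
    150,
    "social media post",
    "Jess Lenouvel",
    ["Conventional open houses waste time",
     "Digital-first marketing reaches more buyers",
     "Invite readers to our free webinar"],
)
```

The resulting string can then be sent to any chat-model API, keeping the outline (and therefore the messaging) under the marketer's control.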
Figuring out new and exciting ways to convey the same message in copy is a struggle. After you’ve been working at the same place for five years, you run out of creative variations for ads and content.
To generate variations of copy I’ve already written, I use this prompt for ChatGPT: “Think like a digital marketer and generate four variants of this Facebook ad copy.”
I feed the AI one example because I like to have some control over the initial messaging. ChatGPT can struggle to get the tone of our brand right if I don’t give it a clear starting point. If I give it a more general prompt, it inevitably puts together stiff-sounding copy with strategies we’d never use.
People want easily understandable content in their inboxes and on social media. The average American is thought to read at a seventh- or eighth-grade level.
I think the best marketing copy reads a little below that, at a fifth- or sixth-grade level. This means short sentences and paragraphs, a conversational tone, and simple language.
Before ChatGPT, I’d use Hemingway to test the reading level of my copy and fiddle with sentences to hit a fifth-grade level. With ChatGPT, I can write freely and then prompt the AI to change the reading level and tone for me.
Overall, I save 20 to 30 minutes a day with this prompt.
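The grade levels mentioned above come from standard readability formulas. As a rough sketch of what tools like Hemingway compute, here is the Flesch-Kincaid grade-level formula with a simple heuristic syllable counter (real tools use more refined syllable rules):

```python
import re

def syllable_count(word):
    # Rough heuristic: count vowel groups, trim a silent trailing 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fk_grade(text):
    """Approximate Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(syllable_count(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)
```

Short sentences with simple words score low (around fifth grade or below); long, polysyllabic sentences score far higher, which is exactly what the formula penalizes.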
Storytelling is the backbone of psychology-based marketing. Our company is positioned like a personal brand, so I have to engage audiences with personal and emotional stories about our CEO and our successful members.
But writing long prose can be tedious, so I get GPT-4 to do it for me. It’s better at this storytelling than the 3.5 version.
Here are two prompts I use:
The more specific you can be in your instructions, the better output you’re going to get.
Sometimes as a marketer you need to solve a problem but aren’t sure how to test alternatives or which metrics to track to determine which strategy is better.
ChatGPT can help with that, too. It can comb through specific tests a company’s looking to run for optimizing landing pages, increasing sales, or lowering bounce rates, and it can generate ideas to help guide your strategy.
Here’s a recent prompt I gave it: “I’m running an A/B test to compare application page A and application page B in order to increase the number of applications to our program, and I need help generating hypotheses based on email click rate and form completion. Can you provide recommendations for what to test and how to measure success?”
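Once a test like this runs, deciding the winner comes down to a standard statistical comparison. A minimal sketch of a two-proportion z-test for comparing form-completion rates on the two pages (the counts below are purely illustrative):

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for comparing two conversion rates
    (e.g., form completions on page A vs. page B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical example: 120/2000 completions on A vs. 150/2000 on B
z = two_proportion_z(120, 2000, 150, 2000)
# |z| > 1.96 would indicate significance at the 5% level
```

ChatGPT can suggest what to test, but a calculation like this is what tells you whether an observed lift is real or noise.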
Based on the suggestions from ChatGPT, we’d test:
These are only some of the many ways I’m using generative AI. I interact with other marketers on Discord servers and in Facebook groups to get more ideas for using ChatGPT.
On average, ChatGPT saves me between six and eight hours a week.
Almost every time I’ve asked myself “I wonder if ChatGPT can do this” or “I wonder how I could get ChatGPT to help me with this,” I’ve unlocked a new way of streamlining or optimizing my workflow.
Ashley Couto

ChatGPT, the “generative” artificial intelligence technology popping up in multiple industries, has a new gig: personal travel guide. Trip planning site Expedia this week launched an AI-based tool on its mobile app that helps users plan voyages and research their destinations.
“Basically the idea is just to give travelers, however they want to shop, the best ways to plan, the best ways to shop, the best ways to find the right thing for them,” Expedia Group CEO and vice chairman Peter Kern told CBS News.
“You can ask [the chatbot] whether April is a good time to go to Paris, or what you might see in Tokyo if you go in March — and can you see the cherry blossoms,” he added.
Users can pose any number of queries to the chatbot, such as, “Help me find a hotel near the best place to see the cherry blossoms.”
Some of the existing tools on Expedia’s website, such as price tracking and collaborative shopping, already make use of AI, and ChatGPT is a natural extension, he said. “So it’s not a decision to redeploy assets to spend on doing this, it’s really a decision to make it easy for travelers.”
To be sure, ChatGPT can be unreliable and sometimes delivers factually incorrect information. As a result, Expedia put guard rails on its chatbot to prevent the AI from veering beyond travel-related topics.
“We built our own AI to basically monitor the outcomes for what ChatGPT comes back with because, really, we only want to help people shop for travel,” Kern said. “We’re not trying to talk to them about politics or religion or anything else. So this isn’t to go have a ChatGPT conversation.”
“We are really using our own capabilities to monitor the outcomes, make sure travelers don’t get strange responses. And if something goes wrong, we’re trying to make sure it comes back to travel.”
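Expedia hasn't published how its monitoring layer works, but the pattern Kern describes, a second system checking each response and steering off-topic output back to travel, can be sketched as a simple post-filter. Keyword matching stands in here for what would realistically be a trained classifier; the word list and messages are hypothetical.

```python
# Hypothetical keyword-based guard rail; production systems would
# use a topic classifier, but the control flow is similar.
TRAVEL_TERMS = {"flight", "hotel", "trip", "destination", "itinerary",
                "booking", "airport", "beach", "museum", "tour"}

def is_on_topic(reply: str) -> bool:
    """Crude check: does the model's reply mention any travel term?"""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return bool(words & TRAVEL_TERMS)

def guarded_reply(reply: str) -> str:
    """Pass travel-related replies through; redirect everything else."""
    if is_on_topic(reply):
        return reply
    return "Let's keep this about travel. Where would you like to go?"
```

The key design point is that the guard rail sits outside the language model: the model's output is treated as untrusted until a separate check approves it.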
For now, Expedia’s AI tool remains in beta testing and has its shortcomings. When prompted to find roundtrip flights from New York City to Mexico City, its instructions read: “I can’t help with that yet. But you should be able to find that information on the Expedia website.”
It was, however, able to list top attractions in Mexico’s capital city.
So-called large language models like ChatGPT are also known to produce information that varies in usefulness depending on how the request is posed, or prompted. For this reason, companies are hiring so-called prompt engineers to figure out how to speak to AI tools for optimal results.

Opinions expressed by Entrepreneur contributors are their own.
Artificial intelligence (AI) is transforming our world in myriad ways, and one of its most significant impacts is on the lives of people with disabilities. AI-driven programs like ChatGPT are helping to bridge communication gaps, facilitate access to information and foster independence for individuals with various impairments.
While AI has many applications, today we will explore five ways AI can help people whose disabilities might otherwise hinder them from working efficiently and comfortably.
Related: What Is Artificial Intelligence (AI)? Here Are Its Benefits, Uses and More
As someone who was diagnosed with ADD as a child, I have found AI to be a great tool for staying organized. For individuals with cognitive impairments, memory loss or conditions like Attention Deficit Hyperactivity Disorder (ADHD), staying organized and remembering essential tasks can be a daily struggle. In these instances, artificial intelligence, particularly AI programs like ChatGPT, can serve as a valuable and personalized assistant. Some of the ways I have personally used ChatGPT include:
Motivation and encouragement: In addition to providing practical organizational support, ChatGPT can also be set up to offer motivational messages and encouragement to users as they work towards their goals. This emotional support can be invaluable in helping individuals overcome challenges and stay committed to their personal growth and development.
Routine establishment: Establishing routines can be critical for individuals with cognitive impairments, as routines can provide structure and stability. ChatGPT can assist users in developing and maintaining daily, weekly or monthly routines, offering guidance and support to help them adapt to new habits and stay on track.
Related: What Does ChatGPT Mean for the Future of Business?
Communication is essential for everyone, and AI chatbots like ChatGPT can significantly improve it for people with hearing or speech impairments. By converting speech to text and vice versa, these chatbots enable seamless communication, allowing users to engage in conversations with ease and participate in social interactions they might have otherwise found challenging.
Various disabilities may hinder one’s ability to communicate the way they want to. However, GPT-4 is now available to the public, and streamlining effective communication just got easier for everyone. People across multiple categories may have difficulty speaking, communicating effectively or expressing themselves properly.
With the assistance of AI, disabled people can use GPT-4 to communicate more effectively, gain clarity and draw on communicative insights that help them express themselves and their ideas with ease.
AI chatbots can make reading and writing more accessible for individuals with visual impairments, dyslexia or fine motor skill difficulties. By reading aloud text-based content or providing writing suggestions, ChatGPT can help users compose emails, essays and other documents with greater ease and efficiency.
It is well known that AI is also a great writing tool and thus can help disabled people write easier. While some argue that AI is taking jobs away, some might see it as an opportunity to create more job opportunities for disadvantaged and marginalized people. As a business owner diagnosed with ADD, I can say that using AI has helped me with proofreading, spelling and writing.
Related: Employers Need Workers. Now They’re Realizing The Untapped Talent of These People.
AI can be a valuable assistant for those with cognitive impairments or memory loss. ChatGPT can be programmed to remind users of essential tasks, such as taking medication, attending appointments or completing daily chores.
This support can be instrumental in helping users maintain their daily routines and stay organized.
Isolation and loneliness can be significant issues for people with disabilities, especially those with mobility or communication challenges.
GPT-based AI technology can provide companionship through conversation, offering a listening ear and comforting responses. This interaction can be a source of solace and help alleviate feelings of isolation.
Related: Unlock the Powerful Gifts Your Disability Taught You About Being a Better Leader
AI-driven programs can play a vital role in helping students with disabilities access education. By answering questions, providing explanations and offering alternative learning materials, ChatGPT can serve as an invaluable resource for learners who may require additional support or accommodations. Everybody learns differently, and as someone who has personally faced the challenges of a learning disability, I can affirm that the way I process information is different.
AI can serve as a resource of information, and GPT-4 can break down information in different ways, depending on what is asked of it. For instance, you could ask ChatGPT to explain an idea or curriculum to somebody who has a learning disability in a specific way, or educators could use AI to come up with creative ways to help people with learning disabilities excel.
Artificial intelligence programs like ChatGPT can make a remarkable difference in the lives of individuals with disabilities (myself included). By empowering people with enhanced communication, greater independence and increased access to essential information and resources, AI is breaking down barriers and opening up new possibilities for those who may have previously faced significant challenges.
As we embrace AI technology and continue to explore its vast potential, we must remain committed to developing inclusive and accessible solutions that can help all members of our society thrive. By doing so, we can work together to create a more equitable and connected world where everyone has the opportunity to succeed, regardless of their abilities.
One great way to think about the current capabilities of AI is instead of comparing it to a search engine, think about something like ChatGPT as a thinking engine, where instead of it merely being a means to discover and search for data, it is also a technology that allows users to search for ways to think. AI isn’t perfect, but it certainly has positive use cases that have the potential to help those in need greatly.
Devan Leos

Advanced artificial intelligence is increasingly infiltrating our daily lives, including the working world, commerce and school. For those of us who are unsure how best to communicate with the new generation of AI-powered apps, there is help: you can purchase queries, or inputs, that effectively bridge the gap between humans and bots.
Online marketplace PromptBase lets users of so-called generative text and image tools, such as ChatGPT and DALL-E, buy and sell “prompts” that are aimed at getting AI to perform a highly specific task, such as improving one’s writing by correcting grammatical errors or writing catchy song lyrics.
For example, a ChatGPT user looking for help touching up a piece of text might tell it to “make my writing better.” But that instruction isn’t specific or nuanced enough to get the AI to do its best work.
So for $2.99, anyone can purchase a “Writing Assistant” prompt that promises to help buyers instantly improve their writing by making it more concise and using “well-crafted, grammatically sound sentences,” according to PromptBase.
“Whether you’re a student, professional, non-native speaker or just someone who wants to improve their writing – this prompt is perfect for you!” the prompt description reads.
The prompt itself is lengthy and contains a list of 16 separate instructions, including “Check the text for any spelling errors and correct them” and “Assess the word choice and find better or more compelling/suitable alternatives to overused, cliche or weak word choices.” It also asks the AI to make sure “the text does not waffle or ramble pointlessly” and that it be “very pleasing to read.”
More precise or better phrased prompts can help people use AI to get the results they want, according to experts. Want to learn how to market or sell a product? There are prompts that teach AI how to act like a marketing professional.
For $4.99, you can purchase detailed instructions to feed ChatGPT. They read in part: “You must provide me with the following information for this action plan: Who is my ideal customer, what are their needs and wants and where do they spend their time online or offline.”
“Create a compelling message that sets my product apart from the competition and appeals to my target audience,” the guidance continues.
AI tools deliver more effective responses if you tell them to “act like a marketing professional who knows today’s best sales strategies,” before describing the product you wish to sell, according to the prompt.
Other prompts for sale include instructions to get AI to serve as a job adviser, career counselor, thesis writer and more. PromptBase even sells humorous inputs that tell AI to come up with insults for family members, write jokes and standup comedy scripts, interpret dreams, and more.
Some companies are shelling out six figures to so-called prompt engineers to teach and train AI to perform tasks that can make them more profitable. This includes generating libraries of company-specific prompts. The role doesn’t require a background in coding, either.
One listing seeks a candidate with “a creative hacker spirit,” while another describes the role as an “art” that’s “a hybrid between programming, instructing and teaching.”

Opinions expressed by Entrepreneur contributors are their own.
In an era where technological advancements are accelerating at breakneck speed, it is crucial to ensure that artificial intelligence (AI) development remains in check. As AI-powered chatbots like ChatGPT become increasingly integrated into our daily lives, it is high time we address potential legal and ethical implications.
And some have done so. A recent letter signed by Elon Musk, who co-founded OpenAI, Steve Wozniak, the co-founder of Apple, and over 1,000 other AI experts and funders calls for a six-month pause in training new models. In turn, Time published an article by Eliezer Yudkowsky, the founder of the field of AI alignment, calling for a much more hard-line solution of a permanent global ban and international sanctions on any country pursuing AI research.
However, the problem with these proposals is that they require the coordination of numerous stakeholders from a wide variety of companies and government figures. Let me share a more modest proposal that’s much more in line with our existing methods of reining in potentially threatening developments: legal liability.
By leveraging legal liability, we can effectively slow AI development and make certain that these innovations align with our values and ethics. We can ensure that AI companies themselves promote safety and innovate in ways that minimize the threat they pose to society. We can ensure that AI tools are developed and used ethically and effectively, as I discuss in depth in my new book, ChatGPT for Thought Leaders and Content Creators: Unlocking the Potential of Generative AI for Innovative and Effective Content Creation.
Section 230 of the Communications Decency Act has long shielded internet platforms from liability for content created by users. However, as AI technology becomes more sophisticated, the line between content creators and content hosts blurs, raising questions about whether AI-powered platforms like ChatGPT should be held liable for the content they produce.
The introduction of legal liability for AI developers will compel companies to prioritize ethical considerations, ensuring that their AI products operate within the bounds of social norms and legal regulations. They will be forced to internalize what economists call negative externalities, meaning negative side effects of products or business activities that affect other parties. A negative externality might be loud music from a nightclub bothering neighbors. The threat of legal liability for negative externalities will effectively slow down AI development, providing ample time for reflection and the establishment of robust governance frameworks.
To curb the rapid, unchecked development of AI, it is essential to hold developers and companies accountable for the consequences of their creations. Legal liability encourages transparency and responsibility, pushing developers to prioritize the refinement of AI algorithms, reducing the risks of harmful outputs, and ensuring compliance with regulatory standards.
For example, an AI chatbot that perpetuates hate speech or misinformation could lead to significant social harm. A more advanced AI given the task of improving the stock of a company might – if not bound by ethical concerns – sabotage its competitors. By imposing legal liability on developers and companies, we create a potent incentive for them to invest in refining the technology to avoid such outcomes.
Legal liability, moreover, is much more doable than a six-month pause, let alone a permanent one. It’s aligned with how we do things in America: instead of having the government regulate business, we permit innovation but punish the negative consequences of harmful business activity.
Ensuring ethical AI: By slowing down AI development, we can take a deliberate approach to the integration of ethical principles in the design and deployment of AI systems. This will reduce the risk of bias, discrimination, and other ethical pitfalls that could have severe societal implications.
Avoiding technological unemployment: The rapid development of AI has the potential to disrupt labor markets, leading to widespread unemployment. By slowing down the pace of AI advancement, we provide time for labor markets to adapt and mitigate the risk of technological unemployment.
Strengthening regulations: Regulating AI is a complex task that requires a comprehensive understanding of the technology and its implications. Slowing down AI development allows for the establishment of robust regulatory frameworks that address the challenges posed by AI effectively.
Fostering public trust: Introducing legal liability in AI development can help build public trust in these technologies. By demonstrating a commitment to transparency, accountability, and ethical considerations, companies can foster a positive relationship with the public, paving the way for a responsible and sustainable AI-driven future.
Related: The Rise of AI: Why Legal Professionals Must Adapt or Risk Being Left Behind
Clarify Section 230: Section 230 does not appear to cover AI-generated content. The law outlines the term “information content provider” as referring to “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the internet or any other interactive computer service.” The definition of “development” of content “in part” remains somewhat ambiguous, but judicial rulings have determined that a platform cannot rely on Section 230 for protection if it supplies “pre-populated answers” so that it is “much more than a passive transmitter of information provided by others.” Thus, it’s highly likely that legal cases would find that AI-generated content would not be covered by Section 230: it would be helpful for those who want a slowdown of AI development to launch legal cases that would enable courts to clarify this matter. By clarifying that AI-generated content is not exempt from liability, we create a strong incentive for developers to exercise caution and ensure their creations meet ethical and legal standards.
Establish AI governance bodies: In the meantime, governments and private entities should collaborate to establish AI governance bodies that develop guidelines, regulations and best practices for AI developers. These bodies can help monitor AI development and ensure compliance with established standards. Doing so would help manage legal liability and facilitate innovation within ethical bounds.
Encourage collaboration: Fostering collaboration between AI developers, regulators and ethicists is vital for the creation of comprehensive regulatory frameworks. By working together, stakeholders can develop guidelines that strike a balance between innovation and responsible AI development.
Educate the public: Public awareness and understanding of AI technology are essential for effective regulation. By educating the public on the benefits and risks of AI, we can foster informed debates and discussions that drive the development of balanced and effective regulatory frameworks.
Develop liability insurance for AI developers: Insurance companies should offer liability insurance for AI developers, incentivizing them to adopt best practices and adhere to established guidelines. This approach will help reduce the financial risks associated with potential legal liabilities and promote responsible AI development.
Related: Elon Musk Questions Microsoft’s Decision to Layoff AI Ethics Team
The increasing prominence of AI technologies like ChatGPT highlights the urgent need to address the ethical and legal implications of AI development. By harnessing legal liability as a tool to slow down AI development, we can create an environment that fosters responsible innovation, prioritizes ethical considerations and minimizes the risks associated with these emerging technologies. It is essential that developers, companies, regulators and the public come together to chart a responsible course for AI development that safeguards humanity’s best interests and promotes a sustainable, equitable future.
Gleb Tsipursky

ChatGPT appears to be smart enough to recognize that artificial intelligence poses a threat to workers’ livelihoods.
Global outplacement and executive coaching firm Challenger, Gray & Christmas recently asked the “generative AI” tool, developed by research firm OpenAI, in plain English how many workers it expects to replace. The answer: 4.8 million American jobs.
That will do little to allay concerns among employees and policymakers about the potential for chatbots and large language models (LLMs) like ChatGPT and Dall-E to displace workers. That is no idle fear: Goldman Sachs estimated in a recent report that 18% of jobs globally could eventually be eliminated by AI technology.
Currently, ChatGPT is most widely used to support workers in a range of industries, helping them complete tasks that still require human judgment.
“Right now, artificial intelligence should be viewed as a tool to support workers and not as a replacement for their roles. Certainly, predictive language models can be used to automate tasks, giving workers more time to focus on those involving higher thinking,” senior vice president Andrew Challenger said in a statement.
The outplacement firm also asked the LLM another straightforward question: “What jobs can ChatGPT replace?” It answered that its capabilities make it best suited to take over roles that are centered on language and that are repetitive and predictable. ChatGPT predicted that LLMs could one day replace workers in the following roles:
Fields that ChatGPT sees itself as most capable of entering include:
To be sure, as with past cycles of innovation, tools like ChatGPT could potentially create more jobs than they eliminate. The technology has already created demand for people in at least one new area — prompt engineering. A range of companies in AI, health care and the legal field are hiring so-called prompt engineers to train LLMs to deliver more useful, tailored responses to their inputs.
One job listing for a prompt engineer describes the role as an “art” that’s “a hybrid between programming, instructing and teaching.” These kinds of roles tend to pay hundreds of thousands of dollars, and don’t require any coding experience.

Opinions expressed by Entrepreneur contributors are their own.
What if you could chat with your favorite luxury brand and get personalized recommendations, discounts, and insights? Sounds too good to be true, right? Well, not anymore. Thanks to ChatGPT and other AI chatbots, the luxury retail industry is undergoing a radical transformation that is creating new opportunities for brands and consumers alike.
ChatGPT is a state-of-the-art AI chatbot that can generate natural and engaging conversations with human users. It was developed by OpenAI, a research organization dedicated to creating artificial intelligence that can benefit humanity. ChatGPT can understand the context, tone and intent of the user’s messages and respond accordingly. It can also adapt to different domains, personalities and styles, making it a versatile and powerful tool for various applications.
ChatGPT and other AI chatbots are transforming the luxury retail industry and creating new opportunities for brands and consumers, though the technology brings challenges along with its benefits.
Related: What Does ChatGPT Mean for the Future of Business?
ChatGPT can help luxury brands enhance customer service, engagement, loyalty and sales. Here are some of the ways it can benefit them:
The market has embraced chatbots, and with good reason. According to a report by Juniper Research, AI chatbots were expected to save businesses $8 billion per year by 2022 and to generate $112 billion in retail sales by 2023. Moreover, a survey by Capgemini found that 74% of consumers who have used AI chatbots are satisfied with their experience, and 48% of consumers are willing to spend more when their queries are resolved by chatbots.
Related: Here’s How Your Business Can Use 3 Popular AI Content Creation Tools
Although ChatGPT offers numerous benefits for luxury brands, there are also some potential challenges to consider. Privacy and security are among the most significant concerns, as ChatGPT can collect and store sensitive customer data, such as personal information, preferences or feedback. This could expose the brand and customer to data breaches, hacking or misuse by third parties, putting both parties at risk.
In addition to privacy and security concerns, ChatGPT may raise ethical and social issues for luxury brands. It can potentially deceive or manipulate users into believing it is a human with genuine emotions and opinions, eroding the trust and authenticity of the brand and customer relationship. This can create ethical dilemmas and conflicts of interest that could damage the brand’s reputation in the long run.
Furthermore, ChatGPT may also compromise the quality and consistency of the brand and customer experience. It can generate inappropriate, inaccurate or offensive responses, or it may not be able to handle complex or specific queries, frustrating or alienating customers. This could damage the brand’s reputation and credibility, making it challenging to regain customer trust and loyalty.
To address these challenges, luxury brands can implement various solutions and best practices when using ChatGPT. First and foremost, regular testing, monitoring and updating of the chatbot are essential to ensure accuracy, reliability and relevance. Any bugs or errors must be identified and addressed promptly to maintain the chatbot’s performance and effectiveness.
Another critical consideration is data protection and security. Luxury brands must implement strict measures to safeguard customer data and privacy and ensure compliance with relevant laws and regulations. This may involve encrypting customer information, limiting access to authorized personnel and regularly auditing the system to identify vulnerabilities.
Being transparent with customers about using an AI chatbot is also essential. Brands should disclose that they are interacting with a chatbot and provide an option to switch to a human agent if needed or requested. This will help build trust and maintain customer satisfaction.
Finally, luxury brands must establish clear guidelines and policies for using ChatGPT. This will ensure that the chatbot’s responses are aligned with the brand’s values and goals and that customers receive consistent and high-quality interactions.
Related: ChatGPT Just Got a Game-Changing Update — Here’s What to Know
ChatGPT has become an increasingly popular tool for luxury brands to engage with customers in a more personalized and efficient manner. There are some successful and innovative cases of luxury brands utilizing this technology, demonstrating the value of ChatGPT in enhancing customer experience and driving business growth.
Elevate, a luxury fashion brand created with ChatGPT, and Midjourney, an AI image-generation platform, are two names that have emerged in this space. Elevate is a fictional brand showcasing how ChatGPT can help with brand creation, from the design brief to the visual elements to the marketing strategy. Its chatbot can interact with customers and provide them with information, recommendations and feedback on the brand's products and values.
China’s retailers are eyeing the potential of ChatGPT and other AI chatbots to improve their customer conversion rate and engagement. China’s tech giants, such as Alibaba, Tencent and Baidu, are racing to introduce similar services to ChatGPT, which is taking the internet by storm.
ChatGPT and other AI chatbots are transforming the luxury retail industry and creating new opportunities for brands and consumers. ChatGPT can help luxury brands enhance their customer service, engagement, loyalty and sales, as well as create a unique and memorable brand voice and personality. However, it remains to be seen how brands will hold customer attention and preserve loyalty now that the same AI tools are available to everyone.
Portia Antonia Alexis

We live in a world in constant evolution, one in which yesterday's rules will have changed by tomorrow. The arrival of new tools powered by artificial intelligence (AI) is already disrupting our workplaces, and the forecast is clear: professions and activities that humans perform today will soon be carried out by the new technology.
It would be futile to resist the change, and absurd to believe it will not affect us. Tools like ChatGPT, DALL-E 2 and MidJourney are here to stay, and we will surely see others emerge whose output grows ever more precise.
If you are finishing high school or considering a degree or graduate program, the obvious question is: what should I study? Although you should answer that question with your head, your heart and your intuition (remember, it is essential to choose something you enjoy), we asked ChatGPT to help with the process of choosing a career.
This is the question we asked ChatGPT:
I am a student in my last year of high school. I am worried about choosing a university degree that will remain useful and relevant in the age of artificial intelligence. Can you recommend which degrees I should consider? Also tell me which skills I should develop to work successfully in this new and complex environment. Be critical and analytical.
Here is the answer the tool gave us:
Artificial intelligence is currently transforming many fields of work at a rapid pace. It is important to choose a career that prepares you for the AI era. Below is a list of university degrees you could consider:
As for the skills you should develop to work successfully in an AI-driven environment, I recommend you consider the following:
I hope this information is useful to you as you choose a university degree in the age of AI.
Although the tool apparently interpreted our question as if we necessarily wanted to work in the AI field (which is not necessarily the case), its answer reveals the areas of knowledge tied to the development of this new technology, areas that will undoubtedly be valued by companies as they begin to incorporate it into their processes.
Remember that, regardless of what ChatGPT says, it is important to weigh your enjoyment of and passion for the career you choose as a key variable, without ignoring the impact the new technology has, and will have, on the job market.
Eduardo Scheffler Zawadzki

Artificial intelligence tools such as ChatGPT and DALL-E are sparking fears of the technology automating people out of a job. Yet, as with previous waves of innovation, the advent of so-called generative AI is also starting to create new kinds of work.
“The good news is that worker displacement from automation has historically been offset by creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth,” Goldman Sachs analysts said in a recent report that also forecast a sizable economic boost from AI.
One role now showing up in job listings: “prompt engineer.” The job’s main function is to help train the emerging crop of AI, also known as large language models (LLMs), to deliver more accurate and useful responses to the natural-language queries that people pose in using such bots. More generally, the goal is to make AI smarter and more capable of accomplishing a wide array of professional tasks.
Notably, and unlike many higher-level jobs in tech, working as a prompt engineer doesn’t necessarily require an engineering or coding background. One job listing for a prompt engineer describes the role as an “art” that’s “a hybrid between programming, instructing and teaching.”
Andrej Karpathy, a founding member of ChatGPT maker OpenAI and former senior director of AI at Tesla, recently tweeted that a prompt engineer can also be thought of “as a kind of LLM psychologist.”
“The hottest new programming language is English,” he tweeted in January after ChatGPT was publicly released, a reference to the fact that LLMs are trained based on prompts written in plain English, rather than specialized computer code.
The good news for job seekers? Some companies are willing to pay big bucks for such jobs, a role some prompt engineers colloquially call an "AI whisperer."
“We are all amateur prompt engineers, but there is definitely a nuanced understanding to these models,” said Edward Tian, a student at Princeton University who built GPTZero, an app that can detect whether a text was written by a human being or ChatGPT.
For example, LLMs are better at spitting out text in a certain style — say, in the voice of an elementary school student or a comedian — if they are shown an example, Tian explained.
“You’ll get better results if you say to ChatGPT: ‘Here is an example of elementary school writing and then you make the ask,’” he said. “It significantly improves results.”
Prompt engineering is also typically less structured than traditional research experiments, which begin with hypotheses.
“With prompt engineering, no one really knows what the results are going to be, so we try a bunch of things and hopefully the LLM responds in a positive way,” Tian said.
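Tian's few-shot tip boils down to stuffing a style example into the prompt before the actual request. A minimal sketch of that pattern follows; the helper name and sample text are hypothetical, shown only to illustrate the technique, and nothing here comes from a real API:

```python
# Few-shot prompting: show the model an example of the target style,
# then make the ask. The helper and sample text are illustrative only.

def build_few_shot_prompt(style_example: str, task: str) -> str:
    """Prepend a style example to the actual request."""
    return (
        "Here is an example of elementary school writing:\n"
        f'"{style_example}"\n\n'
        f"Now, in the same style: {task}"
    )

prompt = build_few_shot_prompt(
    "My dog is big and brown. He likes to run in the park.",
    "write three sentences about your summer vacation.",
)
print(prompt)
```

The resulting string would be sent to the chatbot as a single message; per Tian, including the example "significantly improves results."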
A range of companies and industries are recruiting prompt engineers.
Anthropic, an AI research company and maker of Claude, an AI assistant, is currently seeking a “prompt engineer and librarian,” according to a job posting on the company’s website. The role involves building a library of prompts that get LLMs to accomplish different tasks.
Requirements for the position at the San Francisco company include familiarity with how LLMs work, excellent communication skills and what Anthropic describes as “a creative hacker spirit,” among other qualifications. Basic programming skills and the ability to write small Python programs are also desirable. The pay: Between $175,000 and $335,000 a year.
British law firm Mishcon de Reya is hiring a “GPT legal prompt engineer.” The role will focus on helping the business “increase our understanding of how generative AI can be used within a law firm, including its application to legal practice tasks and wider law firm business tasks,” the job posting states.
Klarity, a company that helps automate contract review, is hiring its own AI whisperer, who will earn between $130,000 and $230,000 a year to fine-tune LLM applications within the company.
Boston Children's Hospital is hiring an AI prompt engineer to work on its digital health platform. The desired candidate will have a strong background in both AI and machine learning (a subset of AI), as well as health care research experience. The job entails designing AI prompts for LLMs "as they emerge for health care research studies and clinical practice."
To be sure, any job related to an AI chatbot requires a high level of familiarity and understanding of how LLMs work.
“They have to understand how to code, leverage AI models and understand how to talk to them,” Gabor Soter, founder of Generative Nation, a site that educates the public about generative AI, told CBS MoneyWatch.
That said, Soter expects to see a raft of new AI jobs.
"Some people underestimate what it takes, but these are prompt engineers getting hired for hefty salaries," he said. "I think it's a skill that's going to be super important for everyone, and I would highly encourage everybody who is not a data scientist to play around with these models."

A new type of job has become more prominent amid the AI boom, according to Bloomberg: a “prompt engineer.”
And one job listing puts the salary range from $175,000 to $335,000 — with only “basic” coding skills required.
The “prompt engineer” job requires someone to “speak” with a program such as Google’s Bard or OpenAI’s ChatGPT and try to convince it to respond in certain ways, such as convincing it not to say embarrassing things or reveal trade secrets about how it operates, per The Washington Post.
This job has become relevant with the rise of text-generating bots that have become available to the wider public.
The job "is not a science… It's 'let's poke the bear in different ways and see how it roars back,'" Shane Steinert-Threlkeld, an assistant professor at the University of Washington who studies the language technology behind tools like ChatGPT, told the Post.
In 2022, ChatGPT burst onto the scene as one of the first widely accessible generative AI text bots. It could write songs and essays; it stirred panic in academia. And it pushed other companies that had been working on the technology (Google, Meta) into action.
Related: ‘Things Will Go Wrong.’ Google Releases Its Chatbot Bard With Caution.
It also put photos generated from text prompts into the spotlight, which users can create with programs like Stable Diffusion. Those prompts are often highly specific and can run on for a couple of sentences.
But the text generators are supposed to be able to respond to what the average person might ask and, ideally, avoid gaffes. Riley Goodside, a prompt engineer at Scale AI, a machine learning company, told the Post that when GPT-3 makes an error, he talks to the bot and helps it recognize the mistake by walking it through its own "reasoning" step by step.
The work can also involve deliberately forcing such systems into making mistakes so the failures can be corrected within the system: figuring out what makes tools like ChatGPT begin to emit nonsense, known in the industry as "hallucinating."
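Goodside's step-by-step walkthrough can be sketched as a prompt pattern: feed the model its own wrong answer and ask it to re-derive the result one step at a time, so the faulty step surfaces. The function name and wording below are hypothetical, shown only to illustrate the pattern, not any vendor's API:

```python
# Error-walkthrough prompting: confront the model with its earlier
# answer and ask it to re-check its reasoning step by step.
# Names and wording here are illustrative only.

def make_verification_prompt(question: str, earlier_answer: str) -> str:
    return (
        f"Question: {question}\n"
        f"Your earlier answer: {earlier_answer}\n"
        "That answer may contain a mistake. Solve the question again, "
        "explaining your reasoning one step at a time, and point out "
        "where the earlier answer went wrong."
    )

prompt = make_verification_prompt("What is 17 * 24?", "398")
print(prompt)
```

Sending a prompt like this gives the model a chance to surface, and correct, its own error rather than simply repeating it.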
Related: We Asked Google’s AI Bard How To Start A Business. Here’s What It Said.
One job listing, for a "Prompt Engineer and Librarian" at the AI research company Anthropic, involves talking to a chatbot for long stretches. Besides a salary of up to $335,000 and relocation help for moving to the San Francisco area, the role requires only "basic programming skills," such as writing programs with Python.
Klarity, an AI document company, and Boston Children’s Hospital also have similar AI roles open — both are looking for people to help them figure out how to use AI in their businesses (and both ask for some engineering experience).
Gabrielle Bienasz

When ChatGPT seemingly burst on the scene late last year, everyone from college students to CEOs took notice. For financial institutions, ChatGPT is just one tool in their digital transformation toolkit. It can refine and complement existing digital banking solutions, but when it comes to improving digital customer service and engagement, it still has a long way to go.

To build out digital customer experience tools, it’s better to go with digital assistants or chatbots that are more refined and specifically trained for banking services. But first, let’s take a look at what everyone’s talking about.
ChatGPT is a chatbot trained on a massive amount of data. It models the person it is talking to and can engage in a contextual conversation, much like speaking with another person. In certain situations, it would be difficult to distinguish between ChatGPT and another person.
The technology is based on large language models (LLMs), specifically GPT-3.5. A large language model is a tool that can predictively compose text based on patterns it has learned from massive amounts of text data, usually drawn from publicly available sources, such as the internet. GPT, or generative pretrained transformer, is a framework for large language models based on the Transformer architecture for deep neural networks.
These neural networks can track sequence and relationship data — such as words in a sentence — to learn context and, ultimately, meaning. GPT itself has been making headlines since the release of GPT-3 in June 2020.
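The "predict the next word from learned patterns" idea described above can be illustrated with a toy model. Real LLMs use deep transformer networks over billions of parameters, not word counts; this sketch only demonstrates the predictive principle:

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows which in a tiny
# corpus, then always pick the most frequent follower. This is NOT how
# GPT works internally; it only illustrates "predict the next token."

def train_bigram(corpus: str) -> dict:
    words = corpus.split()
    followers = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        followers[a][b] += 1
    return followers

def predict_next(model: dict, word: str) -> str:
    """Return the most frequent word observed after `word`."""
    return model[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

A transformer does the same job of scoring likely continuations, but conditions on the whole preceding context rather than just the previous word, which is where the "context and, ultimately, meaning" comes from.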
ChatGPT is like nothing most of us have seen before. It seems knowledgeable and creative, it can write code and poetry, and even create games. More importantly, it is more likely to align its output to the user’s specific goals and much less likely to produce inappropriate or toxic output than previous LLMs.
But for today’s financial institutions, ChatGPT and other similar technologies are still in the early stages of development and, as such, come with a bit of baggage. Here’s why:
Current banking customers are comfortable with using a chatbot. According to a Cornerstone Advisors study: “Among consumers whose bank or credit union has deployed a chatbot, 70% have used it at least once, with about three in 10 having used it three or more times.”
The study also showed consumers’ satisfaction with their digital assistants’ interactions is strong, with half being “very” satisfied and 43% reported as “somewhat” satisfied.
Yet for banks looking to build out their digital banking solutions with a chatbot, or its more robust cousin, an intelligent digital assistant (IDA), ChatGPT is just not ready for prime time. It’s a more basic and generic chatbot that’s not exclusively built for customer service.
It’s better to look for chatbots or IDAs that can address member queries, provide personalized and detailed financial information, help consumers make smarter financial decisions and act as the first encounter with your brand.
There are a number of chatbots out there. But no matter which solution banks choose, a chatbot or digital assistant can be a forward-facing, bank-savvy digital solution that speaks the unique language of their financial institution.
Smart, personable chatbots can do that. ChatGPT? Not quite yet.
Sasha Caskey is the chief technology officer for Kasisto, which makes conversational AI-powered digital assistants for financial institutions.
[ad_2]
Sasha Caskey
Source link

[ad_1]
Billionaire Elon Musk, Apple co-founder Steve Wozniak and former presidential candidate Andrew Yang joined hundreds of others in signing an open letter calling for a six-month pause on AI experiments, warning that otherwise we could face "profound risks to society and humanity."
“Contemporary AI systems are now becoming human-competitive at general tasks,” reads the open letter, posted on the website of Future of Life Institute, a non-profit. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”
The futurist collective, backed by Musk, previously stated in a 2015 open letter that they supported development of AI to benefit society, but were wary of the potential dangers.
The latest open letter, signed by 1,124 people as of Wednesday afternoon, points to OpenAI’s GPT-4 as a warning sign. The current version, the company boasts, is more accurate, human-like, and has the ability to analyze and respond to images. It even passed a simulated bar exam.
“At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models,” a recent post from OpenAI states.
“We agree. That point is now,” the futurists write. “This does not mean a pause on AI development in general, merely a stepping back.”
The letter may seem at odds with the billionaire's own artificial intelligence efforts.
"I'm a little worried about the AI stuff," Musk said onstage earlier this month, surrounded by Tesla executives, Reuters reported.
But researchers who have spoken to CBS News have said the same, albeit a little more directly.
“I think that we should be really terrified of this whole thing,” Timnit Gebru, an AI researcher, told CBS Sunday Morning earlier this year.
Some are worried that people will use ChatGPT to flood social media with phony articles that sound professional, or bury Congress with “grassroots” letters that sound authentic.
“We should understand the harms before we proliferate something everywhere, and mitigate those risks before we put something like this out there,” Gebru told Sunday Morning.
The letter frames the proposed pause as a chance to make AI development "more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal," while working alongside lawmakers to create AI governance systems.
“Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here,” the letter reads. “Let’s enjoy a long AI summer, not rush unprepared into a fall.”

Should you ask a bot for business advice?
Google soft-launched Bard last week, an AI product that gives text-based answers to queries a la ChatGPT. After gaining access, we wanted to put Bard to the test.
Bard is Google's answer to ChatGPT, which exploded onto the market in 2022 as one of the first publicly available tools of its kind. Both are bots that provide (theoretically) coherent answers to questions; ChatGPT can also pass a high school exam and write poems.
Related: ChatGPT: What Is It and How Does It Work?
Google's work on large AI language models, the technology behind ChatGPT and similar tools, had been underway for some time under internal names such as BERT, MUM and LaMDA. In December, responding to an employee question about whether Google had "missed" the opportunity given ChatGPT's popularity, Jeff Dean, Google's head of AI, reportedly said the company was comporting itself "more conservatively than a small startup."
But it quickly became a huge focus for the company, leading to a “code red” within Google and renewed attention for AI. CEO Sundar Pichai mentioned AI three times in his note about layoffs that affected about 12,000 employees.
The first public showing of Bard, however, did not go smoothly: the AI bot made a factual error in an example response.
The company seemed to ask for understanding in Bard’s early days: “Things will go wrong. But the user feedback is critical to improving the product and the underlying technology,” Pichai said in a letter to employees around the time of Bard’s launch.
Related: ‘Things Will Go Wrong.’ Google Releases Its Chatbot Bard With Caution.
Still, we wanted to put Bard to the test and see what it knows about business. So we asked it four questions about getting a business off the ground. The takeaway? Bard is kind of boring (at least with these topics).
How do I start a business?
Bard gave a generic 10-step answer to this question, including the riveting, “A business plan is a roadmap for your business.” However, the reminders to make sure to provide good customer service (especially considering the reported customer service quality of a few very well-known ventures) and to make sure to get your permits were helpful.

What makes a great idea for a startup or business?
Bard advised would-be entrepreneurs to think about an idea that solves a problem, has a target market, and is different from the competition. And, get a group with you. “The business should have a team of people who are passionate about the idea and who have the skills and experience to make it a reality,” Bard said.

What’s the best way to go about raising venture capital funding?
“Venture capitalists will ask you tough questions about your company and your business plan,” Bard advises. The bot also counseled patience with a “long and frustrating” process. All valid points, if a bit basic.

What’s your best business advice?
Bard went with the combo: be persistent, but always adapt. It feels a bit like a motivational speaker who is — well, a robot.
And, don’t forget validation. “Celebrating your successes will help you stay motivated and keep working hard,” Bard advised.

Gabrielle Bienasz
