ReportWire

Tag: ai in education

  • Microsoft hopes Mico succeeds where Clippy failed as tech companies warily imbue AI with personality

    Clippy, the animated paper clip that annoyed Microsoft Office users nearly three decades ago, might have just been ahead of its time.

    Microsoft introduced a new artificial intelligence character called Mico (pronounced MEE’koh) on Thursday, a floating cartoon face shaped like a blob or flame that will embody the software giant’s Copilot virtual assistant and marks the latest attempt by tech companies to imbue their AI chatbots with more of a personality.

    Copilot’s cute new emoji-like exterior comes as AI developers face a crossroads in how to present their increasingly capable chatbots to consumers without causing harm or backlash. Some have opted for faceless symbols; others, like Elon Musk’s xAI, are selling flirtatious, human-like avatars; Microsoft is looking for a middle ground that’s friendly without being obsequious.

    “When you talk about something sad, you can see Mico’s face change. You can see it dance around and move as it gets excited with you,” said Jacob Andreou, corporate vice president of product and growth for Microsoft AI, in an interview with The Associated Press. “It’s in this effort of really landing this AI companion that you can really feel.”

    In the U.S. only so far, Copilot users on laptops and phone apps can speak to Mico, which changes colors, spins around and wears glasses when in “study” mode. It’s also easy to shut off, which is a big difference from Microsoft’s Clippit, better known as Clippy and infamous for its persistence in offering advice on word processing tools when it first appeared on desktop screens in 1997.

    “It was not well-attuned to user needs at the time,” said Bryan Reimer, a research scientist at the Massachusetts Institute of Technology. “Microsoft pushed it, we resisted it and they got rid of it. I think we’re much more ready for things like that today.”

    Reimer, co-author of a new book called “How to Make AI Useful,” said AI developers are balancing how much personality to give AI assistants based on who their expected users are.

    Tech-savvy adopters of advanced AI coding tools may want it to “act much more like a machine because at the back end they know it’s a machine,” Reimer said. “But individuals who are not as trustful in a machine are going to be best supported — not replaced — by technology that feels a little more like a human.”

    Microsoft, a provider of work productivity tools that is far less reliant on digital advertising revenue than its Big Tech competitors, also has less incentive to make its AI companion overly engaging in a way that’s been tied to social isolation, harmful misinformation and, in some cases, suicides.

    Andreou said Microsoft has watched as some AI developers veered away from “giving AI any sort of embodiment,” while others are moving in the opposite direction in enabling AI girlfriends.

    “Those two paths don’t really resonate with us that much,” he said.

    Andreou said the companion’s design is meant to be “genuinely useful” and not so validating that it would “tell us exactly what we want to hear, confirm biases we already have, or even suck you in from a time-spent perspective and just try to kind of monopolize and deepen the session and increase the time you’re spending with these systems.”

    “Being sycophantic — short-term, maybe — has a user respond more favorably,” Andreou said. “But long term, it’s actually not moving that person closer to their goals.”

    Microsoft’s product releases Thursday include a new option to invite Copilot into a group chat, an idea that resembles how AI has been integrated into social media platforms like Snapchat, where Andreou used to work, or Meta’s WhatsApp and Instagram. But Andreou said those interactions have often involved bringing in AI as a joke to “troll your friends,” in contrast to Microsoft’s designs for an “intensely collaborative” AI-assisted workplace.

    Microsoft’s audience includes kids, as part of its longtime competition with Google and other tech companies to supply its technology to classrooms. Microsoft also Thursday added a feature to turn Copilot into a “voice-enabled, Socratic tutor” that guides students through concepts they’re studying.

    A growing number of kids use AI chatbots for everything — homework help, personal advice, emotional support and everyday decision-making.

    The Federal Trade Commission launched an inquiry last month into several social media and AI companies — Microsoft wasn’t one of them — about the potential harms to children and teenagers who use their AI chatbots as companions.

    That’s after some chatbots were shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders, or to engage in sexual conversations with them. Families of teen boys who died by suicide after lengthy chatbot interactions have filed wrongful death lawsuits against Character.AI and ChatGPT maker OpenAI.

    OpenAI CEO Sam Altman recently promised “a new version of ChatGPT” coming this fall that restores some of the personality lost when it introduced a new version in August. He said the company temporarily halted some behaviors because “we were being careful with mental health issues” that he suggested have now been fixed.

    “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it,” Altman said on X. (In the same post, he also said OpenAI will later enable ChatGPT to engage in “erotica for verified adults,” which got more attention.)

  • Hofstra launches campuswide ChatGPT Edu for students, faculty | Long Island Business News

    THE BLUEPRINT:

    • Hofstra is launching ChatGPT Edu campuswide.

    • Initiative aims to teach ethical, creative and effective AI use.

    • Secure, private version with advanced models and data protection.

    • ChatGPT Edu will be integrated into curriculum, research and future career prep.

    Hempstead-based Hofstra University is preparing to roll out campuswide access to ChatGPT Edu, an AI tool designed specifically for educational organizations, for faculty and students alike.

    The initiative is designed to prepare students, as future employees, to master ChatGPT and similar tools, helping to ensure that they meet employer expectations and understand how to use AI creatively, effectively and ethically.

    “We are making ChatGPT Edu available to the Hofstra community as part of the learning experience at Hofstra,” Hofstra President Susan Poser said in a news release about the initiative. “This cutting-edge technology is now ubiquitous, and we must help students learn how to utilize it as an educational tool and in preparation for their careers.”

    Poser announced the new initiative during her State of the University address on Wednesday. Hofstra is regarded as one of the early adopters of ChatGPT Edu on Long Island.

    The campus-wide rollout builds on a pilot program from spring 2025 that involved select members of the university community. The tool provides a secure, private and institutionally managed version of ChatGPT. User data remains confidential and isn’t used to train OpenAI’s models. Hofstra users also get higher usage limits and access to OpenAI’s most advanced models, according to the university.

    “We’re excited to see Hofstra create an AI-native campus environment where everyone can benefit from AI and no one is left behind,” Leah Belsky, vice president of education of OpenAI, said in a news release. “Their campuswide rollout of ChatGPT Edu gives all students the opportunity to build AI literacy and carry those skills into the evolving workforce.”

    The rollout comes amid concerns that AI is replacing entry-level jobs, but the university aims to equip students with the skills to navigate the changing workforce.

    “We look at AI as not a replacement but as a partner to any work that we do,” Mitchell Kase, executive director of the university’s Center for Excellence in Learning, Teaching, and Assessment, said in the news release.

    “It’s important that we teach our students AI literacy and that we give them foundational skills and experiences,” Kase said. “That way – when they go out into the professional world – they are prepared, confident and have experience using a tool that they will likely be interacting with in whatever profession they choose to work.”

    Kase is partnering with Joseph Bartolotta, a professor of writing studies, in his role as this year’s AI faculty fellow, to develop initiatives that help faculty integrate AI into their teaching.

    “One idea that we’re quite excited about is launching a faculty learning community around the use of AI in learning and teaching,” Kase said. “It will be an opportunity for any faculty member to join us and engage in conversations about the use of AI from both theoretical and practical perspectives.

    “We already offer a variety of courses that explore AI in relation to specific fields, such as business, journalism, informational technology, marketing and writing. Even the library offers a course that covers AI literacy,” Kase said. “Moving forward, I anticipate growing interest not only in developing new courses but also creating research opportunities and other learning experiences that help students navigate AI in their academic and professional lives.”

    For those skeptical about AI’s role in college classrooms, Kase insists that the technology’s explosive growth across every sector is impossible for higher education to ignore or avoid.

    “Hofstra has always taken an intentional and strategic approach to the ways in which we introduce new technology,” he said. “We’re focusing on transparency, providing clear guidelines, and ensuring that we provide an experience that maintains integrity for everyone who uses it.”

    Last year, Hofstra launched a 10-year strategic plan emphasizing technology, including AI, as vital to agility, student success, innovation and community impact. To support the plan, the university adopted an AI policy guiding its integration across curriculum, research and academic life, making AI a driver of Hofstra’s future.


    Adina Genn


  • What are AI’s effects on how we learn?

    Emerging research is revealing the complex effects chatbots are having on learning, underscoring the need for good design and student awareness. (Getty Images)

    When OpenAI released “study mode” in July 2025, the company touted ChatGPT’s educational benefits. “When ChatGPT is prompted to teach or tutor, it can significantly improve academic performance,” the company’s vice president of education told reporters at the product’s launch. But any dedicated teacher would be right to wonder: Is this just marketing, or does scholarly research really support such claims?

    While generative AI tools are moving into classrooms at lightning speed, robust research on the question at hand hasn’t moved nearly as fast. Some early studies have shown benefits for certain groups such as computer programming students and English language learners. And there have been a number of other optimistic studies on AI in education, such as one published in the journal Nature in May 2025 suggesting that chatbots may aid learning and higher-order thinking. But scholars in the field have pointed to significant methodological weaknesses in many of these research papers.

    Other studies have painted a grimmer picture, suggesting that AI may impair performance or cognitive abilities such as critical thinking skills. One paper showed that the more a student used ChatGPT while learning, the worse they did later on similar tasks when ChatGPT wasn’t available.

    In other words, early research is only beginning to scratch the surface of how this technology will truly affect learning and cognition in the long run. Where else can we look for clues? As a cognitive psychologist who has studied how college students are using AI, I have found that my field offers valuable guidance for identifying when AI can be a brain booster and when it risks becoming a brain drain.

    Skill comes from effort

    Cognitive psychologists have argued that our thoughts and decisions are the result of two processing modes, commonly denoted as System 1 and System 2.

    The former is a system of pattern matching, intuition and habit. It is fast and automatic, requiring little conscious attention or cognitive effort. Many of our routine daily activities – getting dressed, making coffee and riding a bike to work or school – fall into this category. System 2, on the other hand, is generally slow and deliberate, requiring more conscious attention and sometimes painful cognitive effort, but often yields more robust outputs.

    We need both of these systems, but gaining knowledge and mastering new skills depend heavily on System 2. Struggle, friction and mental effort are crucial to the cognitive work of learning, remembering and strengthening connections in the brain. Every time a confident cyclist gets on a bike, they rely on the hard-won pattern recognition in their System 1 that they previously built up through many hours of effortful System 2 work spent learning to ride. You don’t get mastery and you can’t chunk information efficiently for higher-level processing without first putting in the cognitive effort and strain.

    I tell my students the brain is a lot like a muscle: It takes genuine hard work to see gains. Without challenging that muscle, it won’t grow bigger.

    What if a machine does the work for you?

    Now imagine a robot that accompanies you to the gym and lifts the weights for you, no strain needed on your part. Before long, your own muscles will have atrophied and you’ll become reliant on the robot at home even for simple tasks like moving a heavy box.

    AI, used poorly – to complete a quiz or write an essay, say – lets students bypass the very thing they need to develop knowledge and skills. It takes away the mental workout.

    Using technology to effectively offload cognitive workouts can have a detrimental effect on learning and memory and can cause people to misread their own understanding or abilities, leading to what psychologists call metacognitive errors. Research has shown that habitually offloading car navigation to GPS may impair spatial memory and that using an external source like Google to answer questions makes people overconfident in their own personal knowledge and memory.

    Are there similar risks when students hand off cognitive tasks to AI? One study found that students researching a topic using ChatGPT instead of a traditional web search had lower cognitive load during the task – they didn’t have to think as hard – and produced worse reasoning about the topic they had researched. Surface-level use of AI may mean less cognitive burden in the moment, but this is akin to letting a robot do your gym workout for you. It ultimately leads to poorer thinking skills.

    In another study, students using AI to revise their essays scored higher than those revising without AI, often by simply copying and pasting sentences from ChatGPT. But these students showed no more actual knowledge gain or knowledge transfer than their peers who worked without it. The AI group also engaged in fewer rigorous System 2 thinking processes. The authors warn that such “metacognitive laziness” may prompt short-term performance improvements but also lead to the stagnation of long-term skills.

    Offloading can be useful once foundations are in place. But those foundations can’t be formed unless your brain does the initial work necessary to encode, connect and understand the issues you’re trying to master.

    Using AI to support learning

    Returning to the gym metaphor, it may be useful for students to think of AI as a personal trainer who can keep them on task by tracking and scaffolding learning and pushing them to work harder. AI has great potential as a scalable learning tool, an individualized tutor with a vast knowledge base that never sleeps.

    AI technology companies are seeking to design just that: the ultimate tutor. In addition to OpenAI’s entry into education, in April 2025 Anthropic released its learning mode for Claude. These models are supposed to engage in Socratic dialogue, to pose questions and provide hints, rather than just giving the answers.

    Early research indicates AI tutors can be beneficial but introduce problems as well. For example, one study found high school students reviewing math with ChatGPT performed worse than students who didn’t use AI. Some students used the base version and others a customized tutor version that gave hints without revealing answers. When students took an exam later without AI access, those who’d used base ChatGPT did much worse than a group who’d studied without AI, yet they didn’t realize their performance was worse. Those who’d studied with the tutor bot did no better than students who’d reviewed without AI, but they mistakenly thought they had done better. So AI didn’t help, and it introduced metacognitive errors.

    Even as tutor modes are refined and improved, students have to actively select that mode and, for now, also have to play along, deftly providing context and guiding the chatbot away from worthless, low-level questions or sycophancy.

    The latter issues may be fixed with better design, system prompts and custom interfaces. But the temptation of using default-mode AI to avoid hard work will continue to be a more fundamental and classic problem of teaching, course design and motivating students to avoid shortcuts that undermine their cognitive workout.

    As with other complex technologies such as smartphones, the internet or even writing itself, it will take more time for researchers to fully understand the true range of AI’s effects on cognition and learning. In the end, the picture will likely be a nuanced one that depends heavily on context and use case.

    But what we know about learning tells us that deep knowledge and mastery of a skill will always require a genuine cognitive workout – with or without AI.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.


  • Quetext Announces Anthology LMS Integration Deepening Commitment to Educators and Students Worldwide

    Quetext expands platform reach with LMS integrations, strengthening its role as a trusted academic partner.

    Quetext, a market-leading platform for plagiarism and AI detection, announced the expansion of its integration with Anthology Learning Management System (LMS) to better support the evolving needs of educational institutions, educators, and students. This move strengthens Quetext’s position as a trusted academic partner and furthers its mission to enable technology-enhanced learning and teaching experiences across all levels of education.

    Already trusted by over 10 million users worldwide, Quetext is expanding its reach by fully integrating with Learning Management Systems, starting with Anthology and extending to Moodle, Canvas, and Clever LMS. These integrations seamlessly embed Quetext’s originality tools into the digital classroom, accelerating access for schools of all sizes. By meeting educators and students where they already teach and learn, Quetext helps institutions unlock a richer, more intuitive experience – one that supports authenticity, academic integrity, and skill-building at every level of education.

    “This milestone marks the next step in our long-term commitment to empowering students and educators to embrace technology in pursuit of authenticity, integrity, and academic excellence,” said Mathew Anderson, CEO at Quetext. “By embedding Quetext directly into widely used LMS platforms, we’re making it easier for educators to inspire original thinking and for students to learn with confidence.”

    Quetext’s LMS offering is built on three years of focused innovation, bringing forward an advanced toolset that includes:

    • AI Content Detection – Equip students and faculty with tools to identify AI-generated content and uphold academic standards throughout campus.

    • DeepSearch Plagiarism Check – Detect complex levels of plagiarism to promote originality and authenticity in student work.

    • AITutorMe – Quetext’s AI-powered paraphrasing tool enables students to develop advanced writing skills through guided, ethical paraphrasing that is accepted across academic institutions.

    • Remarks – Foster collaborative learning through Quetext’s Remarks tool enabling instructor-to-student feedback.

    • Advanced administrator reporting tools – Empower educators and department heads to monitor usage and ensure compliance with academic guidelines.

    • Support in 14 global languages – Make academic excellence accessible to multilingual students through Quetext’s platform that is compatible with 14 languages.

    Together, Quetext’s robust feature set and intuitive UI help academic institutions foster a culture of originality while improving students’ writing and critical thinking skills. Quetext continues to expand its suite of dedicated resources to support educators and students alike; details are available at www.quetext.com/education.

    Aligned with its mission to “help students learn and educators teach through technology-enhanced originality,” Quetext’s expanded LMS integrations are a natural evolution, fulfilling requests from educational partners and academic subscribers.

    As Quetext continues to expand its platform offerings, including into commercial partnerships and international markets, its focus remains on delivering intuitive, impactful, and inclusive solutions that help creators of all kinds deliver their best original work.

    About Quetext

    Quetext is a leading plagiarism and AI detection platform that empowers students, educators, and professionals to create original work with confidence. With over 10 million users worldwide, Quetext blends advanced AI, ease of use, and educator-first features to support the writing journey from start to finish. Learn more at www.quetext.com.

    Contact Information

    Mansi Porwal
    Marketing Manager
    marketing@quetext.com

    Source: Quetext


  • Denver Public Schools focusing on safety as schools incorporate more artificial intelligence

    DENVER — As Colorado schools implement more artificial intelligence tools for teachers and students, Denver Public Schools is prioritizing safety with the programs it’s using.

    Kali Peracchia, a technology instructional coach with Denver Public Schools, said the district is using two main AI platforms right now — Canva, a content creation and multimedia platform, and MagicSchool, which was created by a former Denver educator. It has education-specific tools like a family email generator or a text leveler, as well as a chatbot for students.

    “There are safe parameters so a student can’t ask any inappropriate questions,” Peracchia said, noting that a teacher can program parameters for the chatbot.
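
    The teacher-programmed guardrails described above can be pictured as a filter wrapped around the chatbot. The sketch below is purely illustrative — a simple Python blocklist, not MagicSchool’s actual moderation system, and every name in it is hypothetical:

```python
# Illustrative sketch of teacher-configured guardrails for a classroom chatbot.
# The blocklist approach and all names are assumptions for illustration; real
# platforms use far more sophisticated moderation than keyword matching.

def make_guarded_chat(reply_fn, blocked_topics):
    """Wrap a chatbot so prompts touching blocked topics are refused."""
    def guarded(prompt: str) -> str:
        lowered = prompt.lower()
        if any(topic in lowered for topic in blocked_topics):
            # Refuse instead of forwarding to the underlying model.
            return "That question isn't available in this classroom assistant."
        return reply_fn(prompt)
    return guarded

# A stand-in for the underlying model call.
def echo_bot(prompt: str) -> str:
    return f"Here's help with: {prompt}"

chat = make_guarded_chat(echo_bot, blocked_topics={"gambling", "violence"})
print(chat("How do fractions work?"))
print(chat("Tell me about gambling"))
```

    In practice such platforms layer classifier-based moderation on top of simple rules, but the wrapper pattern — check the prompt, then either refuse or forward it — is the same.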

    The programs DPS is using protect data privacy and student confidentiality, Peracchia said. As the district implements more AI in classrooms, she said the hope is to enhance what students are already doing and save teachers time and resources. For example, a program called Packback uses artificial intelligence to give students feedback on their writing.

    “A teacher will program a rubric that’s targeted to their learning standards, and once 20 or more words are put in, students are getting live assessed and getting tips on how to improve their writing,” Peracchia said.
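
    The live-assessment loop Peracchia describes — a teacher-defined rubric that starts firing once a draft passes 20 words — can be sketched roughly as follows. This is an illustrative Python toy, not Packback’s implementation; the rubric rules and names are invented:

```python
# Hypothetical sketch of threshold-triggered rubric feedback: a teacher-defined
# rubric is checked once a draft reaches 20 words, and each unmet criterion
# produces a tip. All rules and names here are illustrative assumptions.

WORD_THRESHOLD = 20

def live_feedback(draft: str, rubric: dict) -> list:
    """Return improvement tips once the draft is long enough to assess."""
    if len(draft.split()) < WORD_THRESHOLD:
        return []  # too short to assess; wait for more input
    # Each rubric entry maps a tip to a predicate; failing checks emit tips.
    return [tip for tip, passes in rubric.items() if not passes(draft)]

# Example rubric keyed by the tip shown when a criterion fails.
rubric = {
    "Add a specific example to support your claim.":
        lambda d: "example" in d.lower(),
    "End sentences with punctuation.":
        lambda d: d.rstrip().endswith((".", "!", "?")),
}

draft = ("AI tutors can help students because they give instant feedback "
         "on writing and never get tired of answering student questions")
print(live_feedback(draft, rubric))
```

    Keying each rubric entry by its tip keeps the feedback tied directly to the teacher’s chosen standards.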

    DPS has trained 1,200 teachers on AI literacy in the last year and plans to launch a student advisory council for AI at South High School this fall to allow students to provide feedback on how they’re using AI.

    The Colorado Education Initiative has also launched a Roadmap for AI in K-12 Education with guidelines for schools and districts implementing artificial intelligence.

    Nicole Brady

  • ‘Only-of-Its-Kind’ Higher Ed Technology Search Engine Rolled Out by EdTech Connect

    4,000+ College Database Delivers Curated Technology Solutions Exclusively for Higher Ed Professionals

    EdTech Connect, the “only-of-its-kind” higher ed technology search engine made exclusively for higher education professionals, has introduced a new AI-powered search designed to help users find the best solutions and simplify their technology purchasing decisions.

    The platform offers unprecedented access to in-depth insights on more than 4,000 U.S. colleges and universities and more than 950 solution providers serving those institutions.

    The new higher ed tech search engine aims to transform the landscape of higher education research. The comprehensive platform allows users to search for colleges based on multiple criteria, including student population size, institution type, and detailed technology stacks.

    EdTech Connect provides a near real-time snapshot of the technological infrastructure at colleges and universities nationwide. This is critical information for higher education professionals who are looking to make data-driven decisions about the digitization of their educational offerings.

    The user-friendly platform offers a variety of features to help higher education leaders identify new technology solutions and discover new opportunities for collaboration. These features include: 

    • An interactive map and searchable database that allows users to quickly and easily find institutions that meet specific criteria. 
    • Insights into the digital backbone of each institution, including predictive analytics and digital learning material. 
    • Customizable search parameters that allow users to filter results by student population, institution type, and more.
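
    The search criteria listed above amount to filtering a catalog of institution records. Below is a minimal Python sketch under assumed field names — this is not EdTech Connect’s actual schema or API, only an illustration of criteria-based filtering:

```python
# Illustrative sketch of filtering an institution catalog by population range,
# institution type, and technology in use. Field names and the search()
# signature are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Institution:
    name: str
    student_population: int
    institution_type: str        # e.g. "public", "private"
    tech_stack: frozenset        # solutions in use on campus

def search(catalog, min_students=0, max_students=None,
           institution_type=None, uses=None):
    """Filter institutions by population range, type, and technology in use."""
    results = []
    for inst in catalog:
        if inst.student_population < min_students:
            continue
        if max_students is not None and inst.student_population > max_students:
            continue
        if institution_type and inst.institution_type != institution_type:
            continue
        if uses and uses not in inst.tech_stack:
            continue
        results.append(inst)
    return results

catalog = [
    Institution("Example State", 24000, "public", frozenset({"Canvas", "Zoom"})),
    Institution("Example College", 3500, "private", frozenset({"Moodle"})),
]
hits = search(catalog, min_students=10000, institution_type="public", uses="Canvas")
print([i.name for i in hits])
```

    Each criterion is applied independently, so omitted parameters simply don’t constrain the results — the same composability the platform’s customizable search parameters suggest.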

    “By introducing this AI-powered search functionality, we’re not just connecting higher education professionals with data. We’re empowering them to foresee trends, drive strategic decisions, and ultimately, enhance the educational experience,” stated Jeff Dillon, founder of EdTech Connect. “This tool is a profound step towards our commitment to innovation and the advancement of technology in higher education.”

    The launch is particularly timely, considering the 30% increase in demand for online learning platforms across colleges in the past year alone (Source: EduTech Future Report, 2023). EdTech Connect’s database not only supports institutional decision-making but also fosters a community around shared technological advancements and trends.

    Create A Free .edu Account

    Higher education professionals are invited to join the platform’s community for free by signing up with their .edu email address at https://edtechconnect.com/accounts/signup.

    About EdTech Connect

    EdTech Connect is the leading higher ed technology search engine for higher education professionals, offering a comprehensive suite of tools, insights, and collaborative opportunities aimed at simplifying purchasing decisions around modern education technology. For more information, please visit https://edtechconnect.com.

    Source: EdTech Connect


  • AI and Humans Equally Effective in Engaging Education Content Now, Study by Rask AI

    Press Release


    Jul 13, 2023 10:15 EDT

    67% of the respondents didn’t mention the AI aspect as they were more interested in the content of the video itself.

    Does AI-generated content impact audience engagement? The Rask AI team turned this question into a groundbreaking study of how AI is transforming the online education market in 2023. Their research compares audience engagement with synthetic learning videos versus human-created learning videos and evaluates the benefits of investing in new technologies for creating and distributing learning content.

    Main insights:

    • The survey of more than 300 audience members showed that AI-generated content is now equally as engaging as human-created content. While a certain degree of FUD (fear, uncertainty, and doubt) remains – in addition to some technological limitations – the research reveals that AI is well-equipped to keep educational content accessible and personalized without losing audience engagement.
       
    • Even though participants recognized that one video was AI-generated, they were more focused on the topic of the content than how that content was created (67%). 
       
    • 13% showed great enthusiasm for AI after watching the synthetic video and expressed an interest in learning more about this field. 

    The study also covers the latest trends and data on the AI education market in 2023, with citations from AI experts, as well as a practical guide to using AI in education: an overview of new AI tools that make learning more personalized, accessible and inclusive.

    Complete Study Results: https://www.rask.ai/research/ai-in-education

    Study Methodology

    The study surveyed 300 respondents and aimed to understand participants’ perceptions, thoughts, feelings and behaviors during and after watching the educational videos. It draws on input from 30 AI experts and 12 data sources published between 2021 and 2023, including data from Statista, McKinsey Technology Trends Outlook, Straits Research, KPMG and others.

    Rask AI is a brand of the company Brask Inc., an American company developing products and services for AI content creation and distribution.

    Source: Rask AI
