ReportWire

Tag: Data and research

  • A decade of data in one state shows an unexpected result when colleges drop remedial courses

    Fifteen years ago, the Obama administration and philanthropic foundations encouraged more Americans to get a college degree. Remedial classes were a big barrier. Two-thirds of community college students and 40 percent of four-year college students weren’t academically prepared for college-level work and were forced to take prerequisite “developmental” courses that didn’t earn them college credits. Many of these college students never progressed to college-level courses. They racked up student loan debts and dropped out. Press reports, including my own, called it a “remedial ed trap.”

    One controversial but popular solution was to eliminate these prerequisite classes and let weaker students proceed straight to college-level courses. These are called “corequisite courses” because they include some remedial support at the same time. In recent years, more than 20 states, from California to Florida, have either replaced remedial classes at their public colleges with corequisites or given students a choice between the two.

    In 2015, Tennessee’s public colleges were among the first higher education institutions to eliminate stand-alone remedial courses. Researchers at the University of Delaware conducted a 10-year analysis of how almost 100,000 students fared before and after the new policy, and their draft paper was made public earlier this year. It has not yet been published in a peer-reviewed journal and may still be revised, but it is the first longer-term study to examine college degree completion for tens of thousands of students who have taken corequisites, and it found that the new supports haven’t worked as well as many hoped, especially for lower-achieving students.

    First, the good news. Like earlier research, this study of Tennessee’s two-year community colleges found that after the elimination of remedial classes, students passed more college courses: both introductory courses in English and math and more advanced courses in those subjects.

    However, the extra credit accumulation effect quickly faded. Researchers tracked each student for three years, and by the end of their third year, students had racked up about the same number of total credits as earlier students had under the old remedial education regime. The proportion of students earning either two-year associate degrees or four-year bachelor’s degrees did not increase after the corequisite reform. Lower-achieving college students, defined as those with very low ACT exam scores in high school, were more likely to drop out of college and less likely to earn a short-term certificate after the switch to corequisites.

    “The evidence is showing that these reforms are not increasing graduation rates,” said Alex Goudas, a higher education researcher and a community college professor at Delta College in Michigan, who was not involved in this study. “Some students are benefiting a little bit – only temporarily – and other students are harmed permanently.”

    It seems like a paradox. Students are initially passing more courses, but are also more likely to drop out and less likely to earn credentials. Florence Xiaotao Ran, an assistant professor at the University of Delaware and the lead researcher on the Tennessee study, explained to me that the dropouts appear to be different types of students than the ones earning more credits. Students with somewhat higher ACT test scores in high school, who were close to the old remedial ed cutoff of 19 points (out of 36) and scoring near the 50th percentile nationally, were more likely to succeed in passing the new corequisite courses straight away. Some students who were far below this threshold also passed the corequisite courses, but many more failed. Students below the 10th percentile (13 and below on the ACT) dropped out in greater numbers and were less likely to earn a short-term certificate. 

    Data from other states shows a similar pattern. In California, which largely eliminated remedial education in 2019, failure rates in introductory college-level math courses soared, even as more students also succeeded in passing these courses, according to a study of a Hispanic-serving two-year college in Southern California.

    Ran’s Tennessee analysis has two important implications. The new corequisite courses – as they currently operate – aren’t working well for the lowest achieving students. And the change isn’t even helping students who are now able to earn more college credits during the first year or two of college. They’re still struggling to graduate and are not earning a college degree any faster.

    Some critics of corequisite reforms, such as Delta College’s Goudas, argue that some form of remedial education needs to be reintroduced for students who lack basic math, reading and writing skills. 

    Meanwhile, supporters of the reforms believe that corequisite courses need to be improved. Thomas Brock, director of the Community College Research Center (CCRC) at Teachers College, Columbia University, described the higher dropout rates and falling number of credentials in the Tennessee study as “troubling.” But he says that the old remedial ed system failed too many students. (The Hechinger Report is an independent news organization also based at Teachers College, but it is unaffiliated with CCRC.)

    “The answer is not to go back,” said Brock, “but to double down on corequisites and offer students more support,” acknowledging that some students need more time to build the skills they lack. Brock believes this skill-building can happen simultaneously as students earn college credits and not as a preliminary stepping stone. “No student comes to college to take remedial courses,” he added.

    One confounding issue is that corequisite classes come in so many different forms. In some cases, students get a double dose of math or English with three credit hours of a remedial class taken concurrently with three credit hours of a college-level course. A more common approach is to tack on an extra hour or so to the college class. In her analysis, Ran discovered that instructional time was cut in half for the weakest students, who received many more hours of math or writing instruction under the old remedial system.

    “In the new scenario, everyone gets the same amount of instruction or developmental material, regardless if you are just one point below the cutoff or 10 points below the cutoff,” said Ran.

    There are also big differences in what takes place during the extra support time that’s built into a corequisite course. Some colleges offer tutoring centers to help students fill in their knowledge gaps. Others schedule computer lab time where students practice math problems on educational software. Another option is extended class time, where the main professor teaches the same material that’s in the college-level course, only more slowly, spread across four hours a week instead of the usual three.

    Overcoming weak foundational skills is not the only obstacle that community college students face. The researchers I interviewed emphasized that these students are struggling to juggle work and family responsibilities along with their classes, and they need more support – academic advising, career counseling and sometimes therapy and financial help.  Without additional support, students get derailed.  This may explain why the benefits of early credit accumulation fade out and are not yet translating into higher graduation rates. 

    Even before the pandemic, the vast majority of community college students arrived on campus without a strong enough foundation for regular credit-bearing college classes and were steered to either remedial or new corequisite classes. High school achievement levels have deteriorated further since 2020, when the data in Ran’s study ended. “It’s not their fault,” said Ran. “It’s the K-12 system that failed them.”

    That’s why it’s more important now than ever to figure out how to help under-prepared college students if we want to improve post-secondary education. 

    Contact staff writer Jill Barshay at (212) 678-3595 or barshay@hechingerreport.org.

    This story about corequisite courses was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters. 

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.


  • An AI tutor helped Harvard students learn more physics in less time

    A student’s view of PS2 Pal, the AI tutor used in a learning experiment inside Harvard’s physics department. (Screenshot courtesy of Gregory Kestin)

    We are still in the early days of understanding the promise and peril of using generative AI in education. Very few researchers have evaluated whether students are benefiting, and one well-designed study showed that using ChatGPT for math actually harmed student achievement.

    The first scientific proof I’ve seen that ChatGPT can actually help students learn more was posted online earlier this year. It’s a small experiment, involving fewer than 200 undergraduates.  All were Harvard students taking an introductory physics class in the fall of 2023, so the findings may not be widely applicable. But students learned more than twice as much in less time when they used an AI tutor in their dorm compared with attending their usual physics class in person. Students also reported that they felt more engaged and motivated. They learned more and they liked it. 

    A paper about the experiment has not yet been published in a peer-reviewed journal, but other physicists at Harvard University praised it as a well-designed experiment. Students were randomly assigned to learn a topic as usual in class, or stay “home” in their dorm and learn it through an AI tutor powered by ChatGPT. Students took brief tests at the beginning and the end of class, or their AI sessions, to measure how much they learned. The following week, the in-class students learned the next topic through the AI tutor in their dorms, and the AI-tutored students went back to class. Each student learned both ways, and for both lessons – one on surface tension and one on fluid flow –  the AI-tutored students learned a lot more. 

    To avoid AI “hallucinations,” the tendency of chatbots to make up stuff that isn’t true, the AI tutor was given all the correct solutions. But other developers of AI tutors have also supplied their bots with answer keys. Gregory Kestin, a physics lecturer at Harvard and developer of the AI tutor used in this study, argues that his effort succeeded while others have failed because he and his colleagues fine-tuned it with pedagogical best practices. For example, the Harvard scientists instructed this AI tutor to be brief, using no more than a few sentences, to avoid cognitive overload. Otherwise, he explained, ChatGPT has a tendency to be “long-winded.”

    The tutor, which Kestin calls “PS2 Pal,” after the Physical Sciences 2 class he teaches, was told to only give away one step at a time and not to divulge the full solution in a single message. PS2 Pal was also instructed to encourage students to think and give it a try themselves before revealing the answer. 
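    Pedagogical constraints like these are typically enforced through a system prompt sent with every request. Here is a minimal sketch in Python that assembles such a payload; the prompt wording, function names, and the commented-out API call are illustrative assumptions, not the study’s actual configuration.

```python
# Hypothetical sketch: encoding tutoring guardrails (brevity, one step at a
# time, encourage attempts first) as a system prompt. The wording is
# illustrative; the study's real prompt was not published in this article.
TUTOR_SYSTEM_PROMPT = (
    "You are a physics tutor. Keep every reply to a few sentences to avoid "
    "cognitive overload. Reveal only one step of a solution per message, and "
    "never give away the full solution at once. Before revealing any step, "
    "encourage the student to attempt it themselves."
)

def build_tutor_messages(student_question, history=None):
    """Assemble the message list for a chat-completion style API call."""
    messages = [{"role": "system", "content": TUTOR_SYSTEM_PROMPT}]
    messages.extend(history or [])  # prior turns of the conversation, if any
    messages.append({"role": "user", "content": student_question})
    return messages

# The resulting list would then be passed to a chat endpoint, e.g.:
# client.chat.completions.create(model="...", messages=build_tutor_messages(q))
```

    The point of the sketch is that the guardrails live entirely in the prompt: the same underlying model behaves differently depending on these standing instructions, which is how the Harvard team reports shaping ChatGPT’s default “long-winded” style into short, stepwise tutoring.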

    Unguided use of ChatGPT, the Harvard scientists argue, lets students complete assignments without engaging in critical thinking. 

    Kestin doesn’t deliver traditional lectures. Like many physicists at Harvard, he teaches through a method called “active learning,” where students first work with peers on in-class problem sets as the lecturer gives feedback. Direct explanations or mini-lectures come after a bit of trial, error and struggle. Kestin sought to reproduce aspects of this teaching style with the AI tutor. Students toiled on the same set of activities and Kestin fed the AI tutor the same feedback notes that he planned to deliver in class.

    Kestin provocatively titled his paper about the experiment, “AI Tutoring Outperforms Active Learning,” but in an interview he told me that he doesn’t mean to suggest that AI should replace professors or traditional in-person classes. 

    “I don’t think that this is an argument for replacing any human interaction,” said Kestin. “This allows for the human interaction to be much richer.”

    Kestin says he intends to continue teaching through in-person classes, and he remains convinced that students learn a lot from each other by discussing how to solve problems in groups. He believes the best use of this AI tutor would be to introduce a new topic ahead of class – much like professors assign reading in advance. That way students with less background knowledge won’t be as behind and can participate more fully in class activities. Kestin hopes his AI tutor will allow him to spend less time on vocabulary and basics and devote more time to creative activities and advanced problems during class.

    Of course, the benefits of an AI tutor depend on students actually using it. In other efforts, students often didn’t want to use earlier versions of education technology and computerized tutors. In this experiment, the “at-home” sessions with PS2 Pal were scheduled and proctored over Zoom. It’s not clear that even highly motivated Harvard students will find it engaging enough to use regularly on their own initiative. Cute emojis – another element that the Harvard scientists prompted their AI tutor to use – may not be enough to sustain long-term interest. 

    Kestin’s next step is to test the tutor bot for an entire semester. He’s also been testing PS2 Pal as a study assistant with homework. Kestin said he’s seeing promising signs that it’s helpful for basic but not advanced problems. 

    The irony is that AI tutors may not be that effective at what we generally think of as tutoring. Kestin doesn’t think that current AI technology is good at anything that requires knowing a lot about a person, such as what the student already learned in class or what kind of explanatory metaphor might work.

    “Humans have a lot of context that you can use along with your judgment in order to guide a student better than an AI can,” he said. In contrast, AI is good at introducing students to new material because you only need “limited context” about someone and “minimal judgment” for how best to teach it. 

    This story about an AI tutor was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.


  • Kids who use ChatGPT as a study assistant do worse on tests

    Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale. 

    Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning. 

    A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterward, these AI-tutored students did no better than students who did their practice problems the old-fashioned way, on their own.

    The researchers titled their paper, “Generative AI Can Harm Learning,” to make clear to parents and educators that the current crop of freely available AI chatbots can “substantially inhibit learning.” Even a fine-tuned version of ChatGPT designed to mimic a tutor doesn’t necessarily help.

    The researchers believe the problem is that students are using the chatbot as a “crutch.” When they analyzed the questions that students typed into ChatGPT, students often simply asked for the answer. Students were not building the skills that come from solving the problems themselves. 

    ChatGPT’s errors also may have been a contributing factor. The chatbot only answered the math problems correctly half of the time. Its arithmetic computations were wrong 8 percent of the time, but the bigger problem was that its step-by-step approach for how to solve a problem was wrong 42 percent of the time. The tutoring version of ChatGPT was directly fed the correct solutions and these errors were minimized.

    A draft paper about the experiment was posted on the website of SSRN, formerly known as the Social Science Research Network, in July 2024. The paper has not yet been published in a peer-reviewed journal and could still be revised. 

    This is just one experiment in another country, and more studies will be needed to confirm its findings. But this experiment was a large one, involving nearly a thousand students in grades nine through 11 during the fall of 2023. Teachers first reviewed a previously taught lesson with the whole classroom, and then their classrooms were randomly assigned to practice the math in one of three ways: with access to ChatGPT, with access to an AI tutor powered by ChatGPT or with no high-tech aids at all. Students in each grade were assigned the same practice problems with or without AI. Afterwards, they took a test to see how well they learned the concept. Researchers conducted four cycles of this, giving students four 90-minute sessions of practice time in four different math topics to understand whether AI tends to help, harm or do nothing.

    ChatGPT also seems to produce overconfidence. In surveys that accompanied the experiment, students said they did not think that ChatGPT caused them to learn less even though they had. Students with the AI tutor thought they had done significantly better on the test even though they did not. (It’s also another good reminder to all of us that our perceptions of how much we’ve learned are often wrong.)

    The authors likened the problem of learning with ChatGPT to autopilot. They recounted how an overreliance on autopilot led the Federal Aviation Administration to recommend that pilots minimize their use of this technology. Regulators wanted to make sure that pilots still know how to fly when autopilot fails to function correctly. 

    ChatGPT is not the first technology to present a tradeoff in education. Typewriters and computers reduce the need for handwriting. Calculators reduce the need for arithmetic. When students have access to ChatGPT, they might answer more problems correctly, but learn less. Getting the right result to one problem won’t help them with the next one.

    This story about using ChatGPT to practice math was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.


  • Researchers combat AI hallucinations in math

    Two University of California, Berkeley, researchers documented how they tamed AI hallucinations in math by asking ChatGPT to solve the same problem 10 times. Credit: Eugene Mymrin/Moment via Getty Images

    One of the biggest problems with using AI in education is that the technology hallucinates. That’s the word the artificial intelligence community uses to describe how its newest large language models make up stuff that doesn’t exist or isn’t true. Math is a particular land of make-believe for AI chatbots. Several months ago, I tested Khan Academy’s chatbot, which is powered by ChatGPT. The bot, called Khanmigo, told me I had answered a basic high school Algebra 2 problem involving negative exponents wrong. I knew my answer was right. After typing in the same correct answer three times, Khanmigo finally agreed with me. It was frustrating.

    Errors matter. Kids could memorize incorrect solutions that are hard to unlearn, or become more confused about a topic. I also worry about teachers using ChatGPT and other generative AI models to write quizzes or lesson plans. At least a teacher has the opportunity to vet what AI spits out before giving or teaching it to students. It’s riskier when you’re asking students to learn directly from AI. 

    Computer scientists are attempting to combat these errors in a process they call “mitigating AI hallucinations.” Two researchers from the University of California, Berkeley, recently documented how they successfully reduced ChatGPT’s instructional errors to near zero in algebra. They were not as successful with statistics, where their techniques still left errors 13 percent of the time. Their paper was published in May 2024 in the peer-reviewed journal PLOS One.

    In the experiment, Zachary Pardos, a computer scientist at the Berkeley School of Education, and one of his students, Shreya Bhandari, first asked ChatGPT to show how it would solve an algebra or statistics problem. They discovered that ChatGPT was “naturally verbose” and they did not have to prompt the large language model to explain its steps. But all those words didn’t help with accuracy. On average, ChatGPT’s methods and answers were wrong a third of the time. In other words, ChatGPT would earn a grade of D if it were a student.

    Current AI models are bad at math because they’re programmed to figure out probabilities, not follow rules. Math calculations are all about rules. It’s ironic because earlier versions of AI were able to follow rules, but unable to write or summarize. Now we have the opposite.

    The Berkeley researchers took advantage of the fact that ChatGPT, like humans, is erratic. They asked ChatGPT to answer the same math problem 10 times in a row. I was surprised that a machine might answer the same question differently, but that is what these large language models do.  Often the step-by-step process and the answer were the same, but the exact wording differed. Sometimes the methods were bizarre and the results were dead wrong. (See an example in the illustration below.)

    Researchers grouped similar answers together. When they assessed the accuracy of the most common answer among the 10 solutions, ChatGPT was astonishingly good. For basic high-school algebra, AI’s error rate fell from 25 percent to zero. For intermediate algebra, the error rate fell from 47 percent to 2 percent. For college algebra, it fell from 27 percent to 2 percent. 

    ChatGPT answered the same algebra question three different ways, but it landed on the right response seven out of 10 times in this example.

    Source: Pardos and Bhandari, “ChatGPT-generated help produces learning gains equivalent to human tutor-authored help on mathematics skills,” PLOS ONE, May 2024

    However, when the scientists applied this method, which they call “self-consistency,” to statistics, it did not work as well. ChatGPT’s error rate fell from 29 percent to 13 percent, but still more than one out of 10 answers was wrong. I think that’s too many errors for students who are learning math. 
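    At its core, the “self-consistency” procedure is majority voting over repeated samples. A minimal sketch in Python, where `ask_llm` is a hypothetical stand-in for a call to a chat model (this is not the authors’ code):

```python
from collections import Counter

def self_consistent_answer(ask_llm, problem, n_samples=10):
    """Sample several independent solutions to the same problem and
    return the most common final answer plus its agreement rate.

    `ask_llm` is any callable that sends `problem` to a language model
    and returns that run's final answer as a comparable value.
    """
    answers = [ask_llm(problem) for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / n_samples  # winning answer and its vote share
```

    One caveat: in the study, the researchers first had to group similar answers together, since the same result can be worded differently across runs; this sketch assumes each run’s answer has already been normalized into a comparable form.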

    The big question, of course, is whether these ChatGPT solutions help students learn math better than traditional teaching. In a second part of this study, researchers recruited 274 adults online to solve math problems and randomly assigned a third of them to see the ChatGPT solutions as a “hint” if they needed one. (ChatGPT’s wrong answers were removed first.) On a short test afterwards, these adults improved 17 percent, compared to less than 12 percent learning gains for the adults who could see a different group of hints written by undergraduate math tutors. Those who weren’t offered any hints scored about the same on a post-test as they did on a pre-test.

    Those impressive learning results for ChatGPT prompted the study authors to boldly predict that “completely autonomous generation” of an effective computerized tutoring system is “around the corner.” In theory, ChatGPT could instantly digest a book chapter or a video lecture and then immediately turn around and tutor a student on it.

    Before I embrace that optimism, I’d like to see how much real students – not just adults recruited online – use these automated tutoring systems. Even in this study, where adults were paid to do math problems, 120 of the roughly 400 participants didn’t complete the work and so their results had to be thrown out. For many kids, and especially students who are struggling in a subject, learning from a computer just isn’t engaging.

    This story about AI hallucinations was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.


  • PROOF POINTS: A little parent math talk with kids might really add up, a new body of education research suggests

    Parents know they should talk and read to their young children. Dozens of nonprofit organizations have promoted the research evidence that it will help their children do better in school.

    But the focus has been on improving literacy. Are there similar things that parents can do with their children to lay the foundation for success in math? 

    That’s important because Americans struggle with math, ranking toward the bottom on international assessments. Weak math skills impede a child’s progress later in life, preventing them from getting through college, a vocational program or even high school. Math skills, or the lack of them, can open or close the doors to lucrative science and technology fields.

    A new wave of research over the past decade has looked at how much parents talk about numbers and shapes with their children, and whether these spontaneous and natural conversations help children learn the subject. Encouraging parents to talk about numbers could be a cheap and easy way to improve the nation’s dismal math performance. 

    A team of researchers from the University of Pittsburgh and the University of California, Irvine, teamed up to summarize the evidence from 22 studies conducted between 2010 and 2022. Their meta-analysis was published in the July 2024 issue of the Journal of Experimental Child Psychology. 

    Here are four takeaways:

    There’s a link between parent math talk and higher math skills

    After looking at 22 studies, researchers found that the more parents talked about math with their children, the stronger their children’s math skills. In these studies, researchers typically observed parents and children interacting in a university lab, a school, a museum or at home and kept track of how often parents mentioned numbers or shapes. Ordinary sentences that included numbers counted. An example could be: “Hand me three potato chips.” Researchers also gave children a math test and found that children who scored higher tended to have parents who talked about math more during the observation period. 

    The link between parents’ math talk and a child’s math skills was strongest between ages three and five. During these preschool years, parents who talked more about numbers and shapes tended to have children with higher math achievement. Parents who didn’t talk as much about numbers and shapes tended to have children with lower math achievement. 

    With older children, the amount of time that parents spent talking about math was not as closely related to their math achievement. Researchers speculated that this was because once children start school, their math abilities are influenced more by the instruction they receive from their teachers. 

    None of these studies proves that talking to your preschooler about math causes their math skills to improve. Parents who talk more about math may also have higher incomes and more education. Stronger math skills could be the result of all the other things that wealthier and more educated parents are giving their kids – nutritious meals, a good night’s sleep, visits to museums and vacations – and not the math talk per se. So far, studies haven’t been able to disentangle math talk from everything else that parents do for their children.

    “What the research is showing at this point is that talking more about math tends to be associated with better outcomes for children,” said Alex Silver, a psychologist at the University of Pittsburgh who led the meta-analysis. “It’s an easy way to bring math concepts into your day to day life that doesn’t require buying special equipment, or setting aside time to tutor your child and try to teach them arithmetic.” 

    Keep it natural

    The strongest link between parent talk about math and a child’s math performance was detected when researchers didn’t tell parents to do a math activity. Parents who naturally brought up numbers or shapes in a normal conversation had children who scored higher on math assessments. When researchers had parents do a math exercise with children, the amount of math-related words that a parent used wasn’t as strongly associated with better math performance for their children. 

    Silver, a postdoctoral research associate at the University of Pittsburgh’s Learning Research & Development Center, recommends bringing math into something that the child is paying attention to, rather than doing flashcards or workbooks. It could be as simple as asking  “How many?” Here’s an example Silver gave me:  “Oh, look, you have a whole lot of cars. How many cars do you have? Let’s count them. You have one, two, three. There’s three cars there.”

    When you’re doing a puzzle together, turn the shape in a different direction and talk about what it looks like. Setting the dinner table, grocery shopping and keeping track of money are opportunities to talk about numbers or shapes.

    “The idea is to make it fun and playful,” said Silver. “As you’re cooking, say, ‘We need to add two eggs. Oh wait, we’re doubling the recipe, so we need two more eggs. How many is that all together?’ ”

    I asked Silver about the many early childhood math apps and exercises on the market, and whether parents should be spending time doing them with their children. Silver said they can be helpful for parents who don’t know where to start, but she said parents shouldn’t feel guilty if they’re not doing math drills with their kids. “It’s enough to just talk about it naturally, to find ways to bring up numbers and shapes in the context of what you’re already doing.”

    Quality may matter more than quantity

    In the 22 studies, more math talk was associated with higher math achievement. But researchers are unable to advise parents on exactly how much or how often to talk about math during the day. Silver said 10 utterances a day about math are probably more beneficial than just one mention a day. “Right now the evidence is that more is better, but at some point it’s so much math, you need to talk about something else now,” she said. The point of diminishing returns is unknown.

    Ultimately, the quantity of math talk may not be as important as how parents talk about math, Silver said. Reading a math textbook to your child probably wouldn’t be helpful — it’s not just about saying a bunch of math words. Still, researchers don’t know if asking questions or just talking about numbers is what makes a difference. It’s also not clear how important it is to tailor the number talk to where a child is in their math development. These are important areas of future research.

    Technology may help. The latest studies are using wearable audio recorders, enabling researchers to “listen” to hours of conversations inside homes and analyze those conversations with natural language processing algorithms to get a more accurate understanding of parents’ math talk. The 22 studies in this meta-analysis captured as little as three minutes and as much as almost 14 hours of parent-child interactions, and these snippets of life, often recorded in a lab setting, may not reflect how parents and children talk about math in a typical week.
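    The kind of coding these studies rely on can be pictured with a toy sketch: flag each utterance in a transcript that contains a number word, a counting prompt or a shape word. The keyword list and matching rule below are simplifications invented for illustration; the actual studies use far richer coding schemes and language-processing pipelines.

```python
import re

# Illustrative keyword list -- a real coding scheme would be much larger.
MATH_WORDS = {
    "one", "two", "three", "four", "five", "count", "counting",
    "how many", "more", "less", "half", "double",
    "circle", "square", "triangle", "shape", "add", "plus", "minus",
}

def count_math_utterances(utterances):
    """Return how many utterances contain at least one math-related term."""
    hits = 0
    for line in utterances:
        text = line.lower()
        # Substring match for multi-word phrases, word-boundary match otherwise.
        if any(
            term in text if " " in term
            else re.search(rf"\b{re.escape(term)}\b", text)
            for term in MATH_WORDS
        ):
            hits += 1
    return hits

transcript = [
    "Look, you have a whole lot of cars.",
    "How many cars do you have?",
    "Let's count them: one, two, three.",
    "Time to put your shoes on.",
]
print(count_math_utterances(transcript))  # → 2
```

Dividing that count by total utterances (or recording time) gives a rough per-family measure of math talk, which is the kind of quantity the meta-analysis correlates with children’s test scores.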

    Low-income kids appear to benefit as much from math talk as high-income kids

    Perhaps the most inspiring conclusion from this meta-analysis is that the association between a parent’s math talk and a child’s math performance was as strong for a low-income child as it was for a high-income child. 

    “That’s a happy thing to see that this transcends other circumstances,” said Silver. “Targeting the amount of math input that a child receives is hopefully going to be easier, and more malleable than changing broader, systemic challenges.”

    While there are many questions left to answer, Silver is already putting her research into practice with her own three-year-old son. She’s asked counting questions so many times that her little one has begun to tease her. Every time he sees a group of things, he pretends to be Mommy and asks, “How many? Let’s count them!”

     “It’s very funny,” Silver said. “I’m like, ‘Wow, Mommy really drilled that one into you, huh?’ Buddy knows what you’re up to.”

    This story about math with preschoolers was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.

    Jill Barshay

  • OPINION: Urban school districts must make dramatic changes to survive

    OPINION: Urban school districts must make dramatic changes to survive

    Urban school districts are in crisis. Student and teacher absenteeism, special education referrals, mental health complications and violence within and outside schools are all on the rise as student enrollment and state funding are in free fall. Morale is low for teachers, principals and district leaders. 

    Compounding these challenges, federal pandemic relief education funding (known as ESSER) ends in September 2024. Recent in-depth case studies of Chicago and Baltimore City Public Schools and my own research, including candid conversations with current and former big-city superintendents, have convinced me of a stark reality: States and cities must either empower bold leaders to make dramatic changes or step in to make those changes themselves. 

    It was impossible not to be moved by the courage the school leaders I spoke with displayed. Yet it was also obvious that the powers these district leaders possess are narrower than the challenges they face — and that they will need support from governors, state school chiefs, mayors and other leaders. 

    One superintendent lamented the incessant political scrutiny and media criticism he’s encountered, noting, “You can’t make an error without it being spread all over social media.”

    Meanwhile, principals are also under pressure; many are now serving not only as instructional leaders but also as food bank organizers and mental health crisis counselors. “This job is becoming unsustainable for people to be able to have a healthy life,” one superintendent said. 

    Another superintendent emphasized the challenge of finding math teachers proficient enough to teach their subject, a problem exacerbated by state hiring regulations and union rules that prevent the assessment of candidates’ knowledge. “Most teachers are not even two grade levels above students in their math content knowledge,” she said.

    Related: Become a lifelong learner. Subscribe to our free weekly newsletter to receive our comprehensive reporting directly in your inbox.

    The best big-city district leaders know that their jobs now include resetting how public education operates. “What’s happening in schools is not just incompatible with what we want kids to do but also with the outside workforce,” a former superintendent said. “Everything outside of schools is getting more modern, hybrid, etc. Yet schools are still the same.”

    These district leaders believe that learning must now be a 12-month enterprise, especially for the kids who fell behind during the pandemic.

    Several leaders pointed to data showing that advances in teaching strategies are starting to work and noted that innovations in generative AI and team-based staffing could make teachers’ jobs easier, and partnerships with community services could help students with mental health challenges. 

    But superintendents cannot make these changes alone: Their only route to survival is with support from their cities and states. 

    When the fiscal cliff collides with enrollment declines, many states may be forced to put urban districts into receivership. Here are five ways state and city leaders can help urban superintendents and students now:

    1. Provide political protection and regulatory relief for bold leaders.

    States should provide financial relief, political cover and regulatory flexibility for districts that demonstrate solid plans and strong leadership. Superintendents must not be hamstrung by local rules preventing them from, for example, screening new teachers for math knowledge or insisting that teachers use evidence-based instructional materials. 

    2. Update old policies to meet new challenges.

    States can help by updating their assessment and accountability systems so they better measure and incentivize career-linked skills and credentials. As one leader said, “I do see a lot of potential” for more “paid apprenticeships, etc., but none of them fit in the state and federal accountability systems.”

    3. Stay in the game.

    State leaders cannot expect to intervene briefly and then return to serene detachment. Improving urban districts takes fortitude, vision and a willingness to persist through objections from entrenched interest groups. New York City and New Orleans demonstrated significant gains under state and city intervention, but status quo forces and flagging state support upended their progress. 

    4. Help districts forge new alliances to adopt new strategies.

    States can facilitate partnerships with employers, social services and higher education institutions by providing tax incentives and grants. They can encourage new, more sustainable staffing models, such as working in teams, and the use of AI to ease teacher workloads. They can bring in nonprofit transformation experts. 

    5. Have a Plan B.

    Not all urban school districts have bold leadership that can help them overcome the odds, even with strong state-level support. State leaders must be willing to make alternative provisions for students, such as authorizing the establishment of high-performing public charter schools, mandating tutoring and supporting community-led initiatives to address student needs.

    Related: New superintendents need ‘a fighting chance for success’

    Millions of young people are leaving high school without being ready for college. Generational poverty and its accompanying social ills are being hardwired into our cities. Inaction is not an option. State and city leaders must recognize that urban districts can and must be transformed — and it will not happen without their help. 

    Governors, mayors, state legislators and state school chiefs must back courageous urban district leadership. And they must prepare to intervene when urban district leaders cannot overcome the overwhelming odds stacked against them. 

    Robin J. Lake is director of the Center on Reinventing Public Education, a nonpartisan research and policy center at Arizona State University’s Mary Lou Fulton Teachers College. 

    This story about urban school districts was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.  

    Robin J. Lake

  • OPINION: What teachers call AI cheating, leaders in the workforce might call progress – The Hechinger Report

    OPINION: What teachers call AI cheating, leaders in the workforce might call progress – The Hechinger Report

    As the use of artificial intelligence grows, teachers are trying to protect the integrity of their educational practices and systems. When we see what AI can do in the hands of our students, it’s hard to stay neutral about how and if to use it.

    Of course, we worry about cheating; AI can be used to write essays and solve math problems.

    But we also have deeper concerns regarding learning. When our students use AI, they may not be engaging as deeply with our assignments and coursework.

    They have discovered ways AI can be used to create essay outlines and help with project organization and other such tasks that are key components of the learning process.

    Some of this could be good. AI is a fabulous tool for getting started or unstuck. AI puts together old ideas in new ways and can do this at scale: It will make creativity easier for everyone.

    But this very ease has teachers wondering how we can keep our students motivated to do the hard work when there are so many new shortcuts. Learning goals, curriculums, courses and the way we grade assignments will all need to be reevaluated.

    Related: Interested in innovations in the field of higher education? Subscribe to our free biweekly Higher Education newsletter.

    The new realities of work also must be considered. Employers’ job postings increasingly reward candidates with AI skills. Many companies report already adopting generative AI tools or anticipate incorporating them into their workflow in the near future.

    A core tension has emerged: Many teachers want to keep AI out of our classrooms, but also know that future workplaces may demand AI literacy.

    What we call cheating, business could see as efficiency and progress.

    The complexities, opportunities and decisions that lie between banning AI and teaching AI are significant.

    It is increasingly likely that using AI will emerge as an essential skill for students, regardless of their career ambitions, and that action is required of educational institutions as a result.

    Integrating AI into the curriculum will require change. The best starting point is a better understanding of what AI literacy looks like in our current landscape.

    In our new book, we make it clear that the specifics of AI literacy will vary somewhat from one subject to the next, but there are some AI capacities that everyone will now need.

    Before even writing a prompt, the AI user should develop an understanding of the following:

    • the role of human / AI collaborations
    • how to navigate the ethical implications of using AI for a given purpose
    • which AI tool to use (when and why)
    • how to use their selected AI tool fully and successfully
    • the limitations of generative AI systems and how to work around them
    • prompt engineering and all of its nuances

    This knowledge will help our students write successful prompts, but additional skills and AI literacy will be required once AI returns a response. These include the abilities to:

    • review and evaluate AI-produced content, including how to determine its accuracy and recognize bias
    • edit AI content for its intended audience and purpose
    • follow up with AI to refine the output
    • take responsibility for the quality of the final work

    The development of AI literacy mirrors the development of other key skills, such as critical thinking. Teaching AI literacy begins by teaching the capacities above, as well as others specific to your own subject.

    While the inclination may be to start teaching AI literacy by opening a browser, faculty should begin by providing an ethical and environmental context regarding the use of AI and the responsibilities each of us has when working with AI.

    Amazon Web Services recently surveyed employers from all business sectors about what skills employees need to use AI well. In ranked order, their answers included the following:

    1. critical thinking and problem solving
    2. creative thinking and design competence
    3. technical proficiency
    4. ethics and risk management
    5. communication
    6. math
    7. teamwork
    8. management
    9. writing

    Higher education is quite adept at teaching such skills, and many of those noted are among the American Association of Colleges and Universities’ (AAC&U) list of “essential learning outcomes” for higher education.

    Related: TEACHER VOICE: My students are afraid of AI

    Faculty will need to improve their own AI literacy and explore the most advanced generative AI tools (currently ChatGPT 4o, Gemini 1.5 and Claude 3.5). A good way to begin is to ask AI to perform assignments and projects that you typically ask your students to complete — and then try to improve the AI’s response.

    Understanding what AI can and cannot do well within the context of your course will be key as you contemplate revising your assignments and teaching.

    Faculty should also find out if their college has an advisory board made up of past students and/or employers. Reach out to them for firsthand insight on how AI is shifting the landscape — and keep that conversation going over time. That information will be essential as you think about AI literacy within your subjects and courses.

    These actions will ultimately position you to be able to navigate the complexities and decisions that lie between ban and teach.

    C. Edward Watson is vice president for digital innovation with the American Association of Colleges and Universities (AAC&U). José Antonio Bowen is a former president of Goucher College and co-author with Watson of “Teaching with AI: A Practical Guide to a New Era of Human Learning.”

    This story about AI literacy was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for our Higher Education newsletter.

    C. Edward Watson and José Antonio Bowen

  • PROOF POINTS: New studies of online tutoring highlight troubles with attendance and larger tutoring groups

    PROOF POINTS: New studies of online tutoring highlight troubles with attendance and larger tutoring groups

    Ever since the pandemic shut down schools in the spring of 2020, education researchers have pointed to tutoring as the most promising way to help kids catch up academically. Evidence from almost 100 studies overwhelmingly supported a particular kind of tutoring, called high-dosage tutoring, in which students focus on either reading or math three to five times a week.

    But until recently, there has been little good evidence for the effectiveness of online tutoring, where students and tutors interact via video, text chat and whiteboards. The virtual version has boomed since the federal government handed schools nearly $190 billion of pandemic recovery aid and specifically encouraged them to spend it on tutoring. Now, some new U.S. studies could offer useful guidance to educators.

    Online attendance is a struggle

    In the spring of 2023, almost 1,000 Northern California elementary school children in grades 1 to 4 were randomly assigned to receive online reading tutoring during the school day. Students were supposed to get 20 to 30 sessions each, but only one in five students received that much. The 80 percent who got fewer sessions didn’t do much better than the 800 students in the comparison group who didn’t get tutoring, according to a draft paper by researchers from Teachers College, Columbia University, which was posted to the Annenberg Institute website at Brown University in April 2024. (The Hechinger Report is an independent news organization based at Teachers College, Columbia University.)

    Researchers have previously found that it is important to schedule in-person tutoring sessions during the school day, when attendance is mandatory. The lesson here with online tutoring is that attendance can be rocky even during the school day. Often, students end up with a low dose of tutoring instead of the high dose that schools have paid for.

    However, online tutoring can be effective when students participate regularly. In this Northern California study, reading achievement increased substantially, in line with in-person tutoring, for the roughly 200 students who got at least 20 sessions across 10 weeks.

    The students who logged in regularly might have been more motivated students in the first place, the researchers warned, indicating that it could be hard to reproduce such large academic benefits for all. During the periods when children were supposed to receive tutoring, researchers observed that some children – often ones who were slightly higher achieving – regularly logged on as scheduled while others didn’t. The researchers did not explain the difference in student behavior or what the students were doing instead. Students also seemed to log in more frequently when certain staff members were overseeing the tutoring and less frequently with others.

    Small group tutoring doesn’t work as well online

    The large math and reading gains that researchers documented in small groups of students with in-person tutors aren’t always translating to the virtual world. 

    Another study of more than 2,000 elementary school children in Texas tested the difference between one-to-one and two-to-one online tutoring during the 2022-23 school year. These were young, low-income children, in kindergarten through 2nd grade, who were just learning to read. Children who were randomly assigned to get one-to-one tutoring four times a week posted small gains on one test, but not on another, compared to students in a comparison group who didn’t get tutoring. First graders assigned to one-to-one tutoring gained the equivalent of 30 additional days of school. By contrast, children who had been tutored in pairs were statistically no different in reading than the comparison group of untutored children. A draft paper about this study, led by researchers from Stanford University, was posted to the Annenberg website in May 2024. 

    Another small study, in Grand Forks, North Dakota, confirmed the downside of larger groups with online tutoring. Researchers from Brown University directly compared the math progress of middle school students when they received one-to-one tutoring versus small groups of three students. The study was too small, only 180 students, to get statistically strong results, but the half who were randomly assigned to receive individual tutoring appeared to gain eight extra percentile points, compared to the students who were assigned to small group tutoring. The researchers estimated that students in the small groups may have learned only a third as much math, and possibly much less. A draft of this paper was posted to the Annenberg website in June 2024.

    In surveys, tutors said it was hard to keep all three kids engaged online at once. Students were more frequently distracted and off-task, they said. Shy students were less likely to speak up and participate. With one student at a time, tutors said they could move at a faster pace and students “weren’t afraid to ask questions” or “afraid of being wrong.” (On the plus side, tutors said groups of three allowed them to organize group activities or encourage a student to help a peer.)

    Behavior problems happen in person too. However, when I have observed in-person small group tutoring in schools, each student is often working independently with the tutor, almost like three simultaneous sessions of one-to-one help. In-person tutors can encourage a student to keep practicing through a silent glance, a smile or hand signal even as they are explaining something to another student. Online, each child’s work and mistakes are publicly exposed on the screen to the whole group. Private asides aren’t as easy; some platforms allow the tutor to text a child privately in a chat window, but that takes time. Tutors have told me that many teens don’t like seeing their face on screen, but turning the camera off makes it harder for them to sense if a student is following along or confused.

    Matt Kraft, one of the Brown researchers on the Grand Forks study, suggests that bigger changes need to be made to online tutoring lessons in order to expand from one-to-one to small group tutoring, and he notes that school staff are needed in the classroom to keep students on-task. 

    School leaders have until March 2026 to spend the remainder of their $190 billion in pandemic recovery funds, but contracts with tutoring vendors must be signed by September 2024. Both options — in person and virtual — involve tradeoffs. New research evidence is showing that virtual tutoring can work well, especially when motivated students want the tutoring and log in regularly. But many of the students who are significantly behind grade level and in need of extra help may not be so motivated. Keeping the online tutoring small, ideally one-to-one, improves the chances that it will be effective. But that means serving many fewer students, leaving millions of children behind. It’s a tough choice. 

    This story about online tutoring was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    Jill Barshay

  • OPINION: School counselors are scarce, but AI could play an important role in helping them reach more students – The Hechinger Report

    OPINION: School counselors are scarce, but AI could play an important role in helping them reach more students – The Hechinger Report

    If we are to believe the current rapturous cheerleading around artificial intelligence, education is about to be transformed. Digital educators, alert and available at all times, will soon replace their human counterparts and feed students with concentrated personalized content.

    It’s reminiscent of a troubling experiment from the 1960s, immortalized in one touching image: an infant monkey, clearly scared, clutching a crude cloth replica of the real mother it has been deprived of. Next to it is a roll of metal mesh with a feeding bottle attached. The metal mom supplies milk, while the cloth mom sits inert. And yet, in moments of stress, it is the latter the infant seeks succor from.

    Notwithstanding its distressing provenance, this image has bearing on a topical question: What role should AI play in our children’s education? And in school counseling? Here’s one way to think about these questions.

    With its detached efficiency, an AI system is like the metal mesh mother — capable of delivering information, but little else. Human educators — the teachers and the school counselors with whom students build emotional bonds and relationships of trust — are like the cloth mom.

    It would be folly to replace these educators with digital counterparts. We don’t need to look very far back to validate this claim. Just over a decade ago, we were gripped by the euphoria around MOOCs — educational videos accessible to all via the Internet.

    “The end of classroom education!” “An inflection point!” screamed breathless headlines. The reality turned out to be a lot less impressive.

    MOOCs wound up playing a helpful supporting role in education, but the stars of the show remained the human teachers; in-person learning environments turned out to be essential. The failures of remote learning during Covid support the same conclusion. A similar narrative likely will (and we argue, ought to) play out in the context of AI and school counseling.

    Related: Become a lifelong learner. Subscribe to our free weekly newsletter to receive our comprehensive reporting directly in your inbox.

    Guidance for our children must keep caring adults at its core. Counselors play an indispensable role in helping students find their paths through the school maze. Their effectiveness is driven by their expertise, empathy and ability to be confidants to students in moments of doubt and stress.

    At least, that is how counseling is supposed to work. In reality, the counseling system is under severe stress.

    The American School Counselor Association recommends a student-to-counselor ratio of 250-to-1, yet the actual average was 385-to-1 for the 2022–23 school year, the most recent year for which data is available. In many schools the ratio is far higher.

    Even for the most dedicated counselor, such a ratio makes it impossible to spend much time getting to know any one student; the counselor has to focus on administrative work like schedule changes and urgent issues like mental health. This constraint on availability has cascading effects, limiting the counselor’s ability to personalize advice and recommendations.

    Students sense that their counselors are rushed or occupied with other crises and feel hesitant to ask for more advice and support from these caring adults. Meanwhile, the counselors are assigned extraneous tasks like lunch duty and attendance support, further scattering their attention.

    Against this dispiriting backdrop, it is tempting to turn to AI as a savior. Can’t generative AI systems be deployed as virtual counselors that students can interact with and get recommendations from? As often as they want? On any topic? Costing a fraction of the $60,000 annual salary of a typical human school counselor?

    Given the fantastic recent leaps in the capabilities of AI systems, answers to all these questions appear to be a resounding yes: There is a compelling case to be made for having AI play a role in school counseling. But it is not one of replacement.

    Related: PROOF POINTS: AI essay grading is already as ‘good as an overburdened’ teacher, but researchers say it needs more work

    AI’s ability to process vast amounts of data and offer personalized recommendations makes it well-suited for enhancing the counseling experience. By analyzing data on a student’s personality and interests, AI can facilitate more meaningful interactions between the student and their counselor and lay the groundwork for effective goal setting.

    AI also excels at breaking down complex tasks into manageable steps, turning goals into action plans. This work is often time-consuming for human counselors, but it’s easy for AI, making it an invaluable ally in counseling sessions.

    By leveraging AI to augment traditional approaches, counselors can allocate more time to providing critical social and emotional support and fostering stronger mentorship relationships with students.

    Incorporating AI into counseling services also brings long-term benefits: AI systems can track recommendations and student outcomes, and thus continuously improve system performance over time. Additionally, AI can stay abreast of emerging trends in the job market so that counselors can offer students cutting-edge guidance on future opportunities.

    And AI add-ons are well-suited to provide context-specific suggestions and information — such as for courses and local internships — on an as-needed basis and to adapt to a student’s changing interests and goals over time.

    As schools grapple with declining budgets and chronic absenteeism, the integration of AI into counseling services offers a remarkable opportunity to optimize counseling sessions and establish support systems beyond traditional methods.

    Still, it is an opportunity we must approach with caution. Human counselors serve an essential and irreplaceable role in helping students learn about themselves and explore college and career options. By harnessing the power of AI alongside human strengths, counseling services can evolve to meet the diverse needs of students in a highly personalized, engaging and goal-oriented manner.

    Izzat Jarudi is co-founder and CEO of Edifii, a startup supported by the U.S. Department of Education’s SBIR program that offers digital guidance assistance for high school students and counselors. Pawan Sinha is a professor of neuroscience and AI at MIT and Edifii’s co-founder and chief scientist. Carolyn Stone, past president of the American School Counselor Association, contributed to this piece.

    This story about AI and school counselors was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s newsletter.

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.

  • PROOF POINTS: Asian American students lose more points in an AI essay grading study — but researchers don’t know why

    When ChatGPT was released to the public in November 2022, advocates and watchdogs warned about the potential for racial bias. The new large language model was created by harvesting 300 billion words from books, articles and online writing, which include racist falsehoods and reflect writers’ implicit biases. Biased training data is likely to generate biased advice, answers and essays. Garbage in, garbage out. 

    Researchers are starting to document how AI bias manifests in unexpected ways. Inside the research and development arm of the giant testing organization ETS, which administers the SAT, a pair of investigators pitted man against machine in evaluating more than 13,000 essays written by students in grades 8 to 12. They discovered that the AI model that powers ChatGPT penalized Asian American students more than other races and ethnicities in grading the essays. This was purely a research exercise and these essays and machine scores weren’t used in any of ETS’s assessments. But the organization shared its analysis with me to warn schools and teachers about the potential for racial bias when using ChatGPT or other AI apps in the classroom.

    AI and humans scored essays differently by race and ethnicity

    “Diff” is the difference between the average score given by humans and GPT-4o in this experiment. “Adj. Diff” adjusts this raw number for the randomness of human ratings. Source: Table from Matt Johnson & Mo Zhang “Using GPT-4o to Score Persuade 2.0 Independent Items” ETS (June 2024 draft)

    “Take a little bit of caution and do some evaluation of the scores before presenting them to students,” said Mo Zhang, one of the ETS researchers who conducted the analysis. “There are methods for doing this and you don’t want to take people who specialize in educational measurement out of the equation.”

    That might sound self-serving for an employee of a company that specializes in educational measurement. But Zhang’s advice is worth heeding in the excitement to try new AI technology. There are potential dangers as teachers save time by offloading grading work to a robot.

    In ETS’s analysis, Zhang and her colleague Matt Johnson fed 13,121 essays into one of the latest versions of the AI model that powers ChatGPT, called GPT-4 Omni, or simply GPT-4o. (This version was added to ChatGPT in May 2024, but when the researchers conducted this experiment they used the latest AI model through a different portal.)

    A little background about this large bundle of essays: students across the nation had originally written these essays between 2015 and 2019 as part of state standardized exams or classroom assessments. Their assignment had been to write an argumentative essay, such as “Should students be allowed to use cell phones in school?” The essays were collected to help scientists develop and test automated writing evaluation.

    Each of the essays had been graded by expert raters of writing on a 1-to-6 point scale with 6 being the highest score. ETS asked GPT-4o to score them on the same six-point scale using the same scoring guide that the humans used. Neither man nor machine was told the race or ethnicity of the student, but researchers could see students’ demographic information in the datasets that accompany these essays.

    GPT-4o marked the essays almost a point lower than the humans did. The average score across the 13,121 essays was 2.8 for GPT-4o and 3.7 for the humans. But Asian Americans were docked by an additional quarter point. Human evaluators gave Asian Americans a 4.3, on average, while GPT-4o gave them only a 3.2 – roughly a 1.1 point deduction. By contrast, the score difference between humans and GPT-4o was only about 0.9 points for white, Black and Hispanic students. Imagine an ice cream truck that kept shaving off an extra quarter scoop only from the cones of Asian American kids. 
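    As a rough illustration (using only the average scores reported in the draft study, not the underlying essay data), the gaps described above work out like this:

    ```python
    # Average essay scores (1-to-6 scale) as reported in the ETS draft study.
    human_avg = {"overall": 3.7, "Asian American": 4.3}
    gpt4o_avg = {"overall": 2.8, "Asian American": 3.2}

    # "Diff" in the study's table: human average minus GPT-4o average.
    diff = {group: round(human_avg[group] - gpt4o_avg[group], 2)
            for group in human_avg}
    print(diff)  # {'overall': 0.9, 'Asian American': 1.1}

    # The extra penalty for Asian American students is the difference of the gaps.
    extra_penalty = round(diff["Asian American"] - diff["overall"], 2)
    print(extra_penalty)  # 0.2 -- the "additional quarter point," roughly
    ```

    These are back-of-the-envelope averages, not a reproduction of the study’s per-essay analysis, which also adjusts for the randomness of human ratings.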

    “Clearly, this doesn’t seem fair,” wrote Johnson and Zhang in an unpublished report they shared with me. Though the extra penalty for Asian Americans wasn’t terribly large, they said, it’s substantial enough that it shouldn’t be ignored. 

    The researchers don’t know why GPT-4o issued lower grades than humans, and why it gave an extra penalty to Asian Americans. Zhang and Johnson described the AI system as a “huge black box” of algorithms that operate in ways “not fully understood by their own developers.” That inability to explain a student’s grade on a writing assignment makes the systems especially frustrating to use in schools.

    This table compares GPT-4o scores with human scores on the same batch of 13,121 student essays, which were scored on a 1-to-6 scale. Numbers highlighted in green show exact score matches between GPT-4o and humans. Unhighlighted numbers show discrepancies. For example, there were 1,221 essays where humans awarded a 5 and GPT awarded 3. Data source: Matt Johnson & Mo Zhang “Using GPT-4o to Score Persuade 2.0 Independent Items” ETS (June 2024 draft)

    This one study isn’t proof that AI is consistently underrating essays or biased against Asian Americans. Other versions of AI sometimes produce different results. A separate analysis of essay scoring by researchers from the University of California, Irvine, and Arizona State University found that AI essay grades were just as frequently too high as they were too low. That study, which used the 3.5 version of ChatGPT, did not scrutinize results by race and ethnicity.

    I wondered if AI bias against Asian Americans was somehow connected to high achievement. Just as Asian Americans tend to score high on math and reading tests, Asian Americans, on average, were the strongest writers in this bundle of 13,000 essays. Even with the penalty, Asian Americans still had the highest essay scores, well above those of white, Black, Hispanic, Native American or multi-racial students. 

    In both the ETS and UC-ASU essay studies, AI awarded far fewer perfect scores than humans did. For example, in this ETS study, humans awarded 732 perfect 6s, while GPT-4o gave out a grand total of only three. GPT’s stinginess with perfect scores might have affected a lot of Asian Americans who had received 6s from human raters.

    ETS’s researchers had asked GPT-4o to score the essays cold, without showing the chatbot any graded examples to calibrate its scores. It’s possible that a few sample essays or small tweaks to the grading instructions, or prompts, given to ChatGPT could reduce or eliminate the bias against Asian Americans. Perhaps the robot would be fairer to Asian Americans if it were explicitly prompted to “give out more perfect 6s.” 

    The ETS researchers told me this wasn’t the first time that they’ve noticed Asian students treated differently by a robo-grader. Older automated essay graders, which used different algorithms, have sometimes done the opposite, giving Asian students higher marks than human raters did. For example, an ETS automated scoring system developed more than a decade ago, called e-rater, tended to inflate scores for students from Korea, China, Taiwan and Hong Kong on their essays for the Test of English as a Foreign Language (TOEFL), according to a study published in 2012. That may have been because some Asian students had memorized well-structured paragraphs that fooled the algorithm, while human raters easily noticed that the essays were off-topic. (The ETS website says it relies on the e-rater score alone only for practice tests, and uses it in conjunction with human scores for actual exams.)

    Asian Americans also garnered higher marks from an automated scoring system created during a coding competition in 2021 and powered by BERT, which had been the most advanced algorithm before the current generation of large language models, such as GPT. Computer scientists put their experimental robo-grader through a series of tests and discovered that it gave higher scores than humans did to Asian Americans’ open-response answers on a reading comprehension test. 

    It was also unclear why BERT sometimes treated Asian Americans differently. But it illustrates how important it is to test these systems before we unleash them in schools. Based on educator enthusiasm, however, I fear this train has already left the station. In recent webinars, I’ve seen many teachers post in the chat window that they’re already using ChatGPT, Claude and other AI-powered apps to grade writing. That might be a time saver for teachers, but it could also be harming students. 

    This story about AI bias was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

  • OPINION: Everything I learned about how to teach reading turned out to be wrong – The Hechinger Report

    When I first started teaching middle school, I did everything my university prep program told me to do in what’s known as the “workshop model.”

    I let kids choose their books. I determined their independent reading levels and organized my classroom library according to reading difficulty.

    I then modeled various reading skills, like noticing the details of the imagery in a text, and asked my students to practice doing likewise during independent reading time.

    It was an utter failure.

    Kids slipped their phones between the pages of the books they selected. Reading scores stagnated. I’m pretty sure my students learned nothing that year.

    Yet one aspect of this model functioned seamlessly: the moments when I sat on a desk at the front of the room and read aloud from a shared classroom novel.

    Kids listened, discussions arose naturally and everything seemed to click.

    Slowly, the reason for these episodic successes became clear to me: Shared experiences and teacher direction are necessary for high-quality instruction and a well-run classroom.

    Related: Become a lifelong learner. Subscribe to our free weekly newsletter to receive our comprehensive reporting directly in your inbox.

    Over time, I pieced together the idea that my students would benefit most from a teaching model that emphasized shared readings of challenging works of literature; memorization of poetry; explicit grammar instruction; contextual knowledge, including history; and teacher direction — not time practicing skills.

    But even as I made changes and saw improvements, doubts nagged at me. By abandoning student choice and asking kids to dust off Chaucer, would I snuff out their joy of reading? Is Shakespearean English simply too difficult for middle schoolers?

    To set my doubts aside, I surveyed the relevant research and found that many of the assumptions upon which the workshop model was founded are simply false — starting with the assumption that reading comprehension depends on “reading comprehension skills.”

    There is evidence that teaching such skills has some benefit, but what students really need in order to read with understanding is knowledge about history, geography, science, music, the arts and the world more broadly.

    Perhaps the most famous piece of evidence for this knowledge-centered theory of reading comprehension is the “baseball study,” in which researchers gave children an excerpt about baseball and then tested their comprehension. At the outset of the study, researchers noted the children’s reading levels and baseball knowledge; they varied considerably.

    Ultimately, the researchers found that it was each child’s prior baseball knowledge and not their predetermined reading ability that predicted their comprehension and recall of the passage.

    That shouldn’t be surprising. Embedded within any newspaper article or novel is a vast amount of assumed knowledge that authors take for granted — from the fall of the Soviet Union to the importance of 1776.

    Just about any student can decode the words “Berlin Wall,” but they need knowledge of basic geography (where is Berlin?), history (why was the Berlin Wall built?) and political philosophy (what qualities of the Communist regime caused people to flee from East to West?) to grasp the full meaning of an essay or story involving the Berlin Wall.

    Of course, students aren’t born with this knowledge, which is why effective teachers build students’ capacity for reading comprehension by relentlessly exposing them to content-rich texts.

    My research confirmed what I had concluded from my classroom experiences: The workshop model’s text-leveling and independent reading have a weak evidence base.

    Rather than obsessing over the difficulty of texts, educators would better serve students by asking themselves other questions, such as: Does our curriculum expose children to topics they might not encounter outside of school? Does it offer opportunities to discuss related historical events? Does it include significant works of literature or nonfiction that are important for understanding modern society?

    Related: PROOF POINTS: Slightly higher reading scores when students delve into social studies, study finds

    In my classroom, I began to choose many books simply because of their historical significance or instructional opportunities. Reading the memoirs of Frederick Douglass with my students allowed me to discuss supplementary nonfiction texts about chattel slavery, fugitive slave laws and the Emancipation Proclamation.

    Reading “The Magician’s Nephew” by C. S. Lewis prompted teaching about allusions to the Christian creation story and the myth of Narcissus, giving my students knowledge they could use to analyze future stories and characters.

    Proponents of the workshop model claim that letting students choose the books they read will make them more motivated readers, increase the amount of time they spend reading and improve their literacy. The claim is widely believed.

    However, it’s unclear to me why choice would necessarily foster a love of reading. To me, it seems more likely that a shared reading of a classic work with an impassioned teacher, engaged classmates and a thoughtfully designed final project are more motivating than reading a self-selected book in a lonely corner. That was certainly my experience.

    After my classes acted out “Romeo and Juliet,” with rulers trimmed and painted to resemble swords, and read “To Kill a Mockingbird” aloud, countless students (and their parents) told me it was the first time they’d ever enjoyed reading.

    They said these classics were the first books that made them think — and the first ones that they’d ever connected with.

    Students don’t need hours wasted on finding a text’s main idea or noticing details. They don’t need time cloistered off with another book about basketball.

    They need to experience art, literature and history that might not immediately interest them but will expand their perspective and knowledge of the world.

    They need a teacher to guide them through and inspire a love and interest in this content. The workshop model doesn’t offer students what they need, but teachers still can.

    Daniel Buck is an editorial and policy associate at the Thomas B. Fordham Institute and the author of “What Is Wrong with Our Schools?”

    This story about teaching reading was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s newsletter.

  • OPINION: Colleges have to do a better job helping students navigate what comes next – The Hechinger Report

    Higher education has finally come around to the idea that college should better help prepare students for careers.

    It’s about time: Students do not always understand the connection between their coursework and potential careers, a long-standing problem that must be addressed.

    Over 20 years ago, I co-authored the best-selling “Quarterlife Crisis,” one of the first books to explore the transition from college to the workforce. We found, anecdotally, that recent college graduates felt inadequately prepared to choose a career or transition to life in the workforce. At that time, liberal arts institutions in particular did not view career preparation as part of their role.

    While some progress has been made since then, institutions can still do a better job connecting their educational and economic mobility missions; recent research indicates that college graduates are having a hard time putting their degrees to work.

    Importantly, improving career preparation can help not only with employment but also with student retention and completion.

    Related: Interested in innovations in the field of higher education? Subscribe to our free biweekly Higher Education newsletter.

    I believe that if students have a career plan in mind, and if they better understand how coursework will help them succeed in the workforce, they will be more likely to complete that coursework, persist, graduate and succeed in their job search.

    First-generation students in particular, whose parents lack college experience, may not understand why they need to take a course such as calculus, which, on the surface, does not appear to help prepare them for most jobs in the workforce.

    They will benefit deeply from a clearer understanding of how such required courses connect to their career choices and skills.

    Acknowledging the need for higher education to better demonstrate course-to-career linkages — and its role in workforce preparation — is an important first step.

    Taking action to improve these connections will better position students and institutions. Better preparing students for the workforce will increase their success rates and, in turn, will improve college rankings on student success measures.

    This might require a cultural shift in some cases, but given the soaring cost of tuition, it is necessary for institutions to think about return on investment for students and their parents, not only in intellectual terms but also monetarily.

    Such a shift could help facilitate much-needed social and economic mobility, particularly for students who borrow money to attend college.

    Related: OPINION: Post-pandemic, let’s develop true education-to-workforce pathways to secure a better future

    Recent articles and research about low job placement rates for college graduates often posit that internships provide the needed connection between college and careers. Real-world experience is important, but there are other ways to make a college degree more career relevant.

    1. Spell out the connections for students. The class syllabus is one opportunity to make this connection for students. Faculty can explain how different coursework topics and texts translate to career skills and provide real-life examples of those skills at work. In some cases, however, this might be a tough sell for faculty who have spent their careers in the academy and do not see career counseling as part of their job.

    But providing this additional information for students does not need to be a big lift and can be done in partnership with campus staff, such as career services counselors. These connections can also be made in course catalogs, on department websites and through student seminars.

    2. Raise awareness of realistic careers. Many students start college with the goal of entering a commonly known profession — doctor, lawyer or teacher, to name a few. However, there are hundreds of jobs, such as public policy research and advocacy, with which students may not be as familiar. Colleges should provide more detailed information on a wide range of careers that students may never have thought of — and how coursework can help them enter those fields. Experiential learning can provide good opportunities to sample careers that match students’ interests, to help further determine the right fit.

    Increased awareness of job options can also serve as motivation for students as they formulate their goals and plans. Jobs can be described through the same information avenues as the career-coursework connections listed above, along with examples of how coursework is used in each job.

    3. Make coursework-career connections a campuswide priority. College leaders must stress to faculty the importance of better preparing students for careers. Economic mobility is of increasing importance to institutions and the general public, and consumers now rely on information about employment outcomes when selecting colleges (e.g., see College Scorecard).

    Faculty can be assured that adding career preparation to a college degree does not diminish its educational value — quite the contrary; critical thinking and analytical skills, for example, are of utmost importance to liberal arts programs and prospective employers. Simply demonstrating those links does not change coursework content or objectives.

    4. Help students translate their coursework for the job market. Beyond understanding the coursework-to-career linkages, students must know how to articulate them. Job interviews are unnatural for anyone, especially for students new to the workforce — and even more so for those who are the first in their families to graduate from college.

    Career centers often provide interview tips to students — again, if the students seek out that help — but special emphasis should be placed on helping students reflect on their coursework and translate the skills and knowledge they have gained for employers.

    A portfolio can help them accomplish this, and it can be developed at regular intervals throughout a student’s time on campus, since reflecting on several years of coursework all at once can be challenging. A senior-year seminar can further promote workforce readiness and tie together the career skills gained throughout one’s time on campus.

    By making these simple changes, institutions can take the lead in making students and the public more aware of the benefits of higher education.

    Abby Miller, founding partner at ASA Research, has been researching higher education and workforce development for over 20 years.

    This story about college and careers was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s newsletter.

  • PROOF POINTS: Some of the $190 billion in pandemic money for schools actually paid off

    Reports about schools squandering their $190 billion in federal pandemic recovery money have been troubling. Many districts spent that money on things that had nothing to do with academics, particularly building renovations. Less common, but more eye-popping, were stories about new football fields, swimming pool passes, hotel rooms at Caesars Palace in Las Vegas and even the purchase of an ice cream truck.

    So I was surprised that two independent academic analyses released in June 2024 found that some of the money actually trickled down to students and helped them catch up academically.  Though the two studies used different methods, they arrived at strikingly similar numbers for the average growth in math and reading scores during the 2022-23 school year that could be attributed to each dollar of federal aid. 

    One of the research teams, which includes Harvard University economist Tom Kane and Stanford University sociologist Sean Reardon, likened the gains to six days of learning in math and three days of learning in reading for every $1,000 in federal pandemic aid per student. Though that gain might seem small, high-poverty districts received an average of $7,700 per student, and those extra “days” of learning for low-income students added up. Still, these neediest children were projected to remain one-third of a grade level behind where low-income students stood in 2019, before the pandemic disrupted education.

    “Federal funding helped and it helped kids most in need,” wrote Robin Lake, director of the Center on Reinventing Public Education, on X in response to the two studies. Lake was not involved in either report, but has been closely tracking pandemic recovery. “And the spending was worth the gains,” Lake added. “But it will not be enough to do all that is needed.” 

    The academic gains per aid dollar were close to what previous researchers had found for increases in school spending. In other words, federal pandemic aid for schools has been just as effective (or ineffective) as other infusions of money for schools. The Harvard-Stanford analysis calculated that the seemingly small academic gains per $1,000 could boost a student’s lifetime earnings by $1,238 – not a dramatic payoff, but not a public policy bust either. And that payoff doesn’t include other societal benefits from higher academic achievement, such as lower rates of arrests and teen motherhood. 

    The most interesting nuggets from the two reports, however, were how the academic gains varied wildly across the nation. That’s not only because some schools used the money more effectively than others but also because some schools got much more aid per student.

    The poorest districts in the nation, where 80 percent or more of the students live in families whose income is low enough to qualify for the federally funded school lunch program, demonstrated meaningful recovery because they received the most aid. About 6 percent of the 26 million public schoolchildren that the researchers studied are educated in districts this poor. These children had recovered almost half of their pandemic learning losses by the spring of 2023. The very poorest districts, representing 1 percent of the children, were potentially on track for an almost complete recovery in 2024 because they tended to receive the most aid per student. However, these students were far below grade level before the pandemic, so their recovery brings them back to a very low rung.

    Some high-poverty school districts received much more aid per student than others. At the top end of the range, students in Detroit received about $26,000 each – $1.3 billion spread among fewer than 49,000 students. One in 10 high-poverty districts received more than $10,700 for each student. An equal number of high-poverty districts received less than $3,700 per student. These surprising differences for places with similar poverty levels occurred because pandemic aid was allocated according to the same byzantine rules that govern federal Title I funding to low-income schools. Those formulas give large minimum grants to small states, and more money to states that spend more per student. 

    On the other end of the income spectrum are wealthier districts, where 30 percent or fewer students qualify for the lunch program, representing about a quarter of U.S. children. The Harvard-Stanford researchers expect these students to make an almost complete recovery. That’s not because of federal recovery funds; these districts received less than $1,000 per student, on average. Researchers explained that these students are on track to approach 2019 achievement levels because they didn’t suffer as much learning loss.  Wealthier families also had the means to hire tutors or time to help their children at home.

    Middle-income districts, where between 30 percent and 80 percent of students are eligible for the lunch program, were caught in between. Roughly seven out of 10 children in this study fall into this category. Their learning losses were sometimes large, but their pandemic aid wasn’t. They tended to receive between $1,000 and $5,000 per student. Many of these students are still struggling to catch up.

    In the second study, researchers Dan Goldhaber of the American Institutes for Research and Grace Falken of the University of Washington estimated that schools around the country, on average, would need an additional $13,000 per student for full recovery in reading and math.  That’s more than Congress appropriated.

    There were signs that schools targeted interventions to their neediest students. In school districts that separately reported performance for low-income students, these students tended to post greater recovery per dollar of aid than wealthier students, the Goldhaber-Falken analysis shows.

    Impact differed more by race, location and school spending. Districts with larger shares of white students tended to make greater achievement gains per dollar of federal aid than districts with larger shares of Black or Hispanic students. Small towns tended to produce more academic gains per dollar of aid than large cities. And school districts that spend less on education per pupil tended to see more academic gains per dollar of aid than high spenders. The latter makes sense: an extra dollar to a small budget makes a bigger difference than an extra dollar to a large budget. 

    The most frustrating part of both reports is that we have no idea what schools did to help students catch up. Researchers weren’t able to connect the academic gains to tutoring, summer school or any of the other interventions that schools have been trying. Schools still have until September to decide how to spend their remaining pandemic recovery funds, and, unfortunately, these analyses provide zero guidance.

    And maybe some of the non-academic things that schools spent money on weren’t so frivolous after all. A draft paper circulated by the National Bureau of Economic Research in January 2024 calculated that school spending on basic infrastructure, such as air conditioning and heating systems, raised test scores. Spending on athletic facilities did not. 

    Meanwhile, the final score on pandemic recovery for students is still to come. I’ll be looking out for it.

    This story about federal funding for education was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.

  • PROOF POINTS: This is your brain. This is your brain on screens

    One brain study, published in May 2024, detected different electrical activity in the brain after students had read a passage on paper, compared with screens. Credit: Getty Images

    Studies show that students of all ages, from elementary school to college, tend to absorb more when they’re reading on paper rather than screens. The advantage for paper is a small one, but it’s been replicated in dozens of laboratory experiments, particularly when students are reading about science or other nonfiction texts.

    Experts debate why comprehension is worse on screens. Some think the glare and flicker of screens tax the brain more than ink on paper. Others conjecture that students have a tendency to skim online but read with more attention and effort on paper. Digital distraction is an obvious downside to screens. But internet browsing, texting or TikTok breaks aren’t allowed in the controlled conditions of these laboratory studies.

    Neuroscientists around the world are trying to peer inside the brain to solve the mystery. Recent studies have begun to document salient differences in brain activity when reading on paper versus screens. None of the studies I discuss below is definitive or perfect, but together they raise interesting questions for future researchers to explore. 

    One Korean research team documented that young adults had lower concentrations of oxygenated hemoglobin in a section of the brain called the prefrontal cortex when reading on paper compared with screens. The prefrontal cortex is associated with working memory, and that could mean the brain is more efficient in absorbing and memorizing new information on paper, according to a study published in January 2024 in the journal Brain Sciences. An experiment in Japan, published in 2020, also noticed less blood flow in the prefrontal cortex when readers were recalling words in a passage that they had read on paper, and more blood flow with screens.

    But it’s not clear what that increased blood flow means. The brain needs to be activated in order to learn, and one could also argue that the extra brain activation during screen reading could be good for learning.

    Instead of looking at blood flow, a team of Israeli scientists analyzed electrical activity in the brains of 6- to 8-year-olds. When the children read on paper, there was more power in high-frequency brainwaves. When the children read from screens, there was more energy in low-frequency bands. 

    The Israeli scientists interpreted these frequency differences as a sign of better concentration and attention when reading on paper. In their 2023 paper, they noted that attention difficulties and mind wandering have been associated with lower frequency bands – exactly the bands that were elevated during screen reading. However, it was a tiny study of 15 children and the researchers could not confirm whether the children’s minds were actually wandering when they were reading on screens. 

    Another group of neuroscientists in New York City has also been looking at electrical activity in the brain. But instead of documenting what happens inside the brain while reading, they looked at what happens in the brain just after reading, when students are responding to questions about a text. 

    The study, published in the peer-reviewed journal PLOS ONE in May 2024, was conducted by neuroscientists at Teachers College, Columbia University, where The Hechinger Report is also based. My news organization is an independent unit of the college, but I am covering this study just like I cover other educational research. 

    In the study, 59 children, aged 10 to 12, read short passages, half on screens and half on paper. After reading the passage, the children were shown new words, one at a time, and asked whether they were related to the passage they had just read. The children wore stretchy hair nets embedded with electrodes. More than a hundred sensors measured electrical currents inside their brains a split second after each new word was revealed.

    For most words, there was no difference in brain activity between screens and paper. There was more positive voltage when the word was obviously related to the text, such as the word “flow” after reading a passage about volcanoes. There was more negative voltage with an unrelated word like “bucket,” which the researchers said was an indication of surprise and additional brain processing. These brainwaves were similar regardless of whether the child had read the passage on paper or on screens. 

    However, there were stark differences between paper and screens when it came to ambiguous words, ones where you could make a creative argument that the word was tangentially related to the reading passage or just as easily explain why it was unrelated. Take, for example, the word “roar” after reading about volcanoes. Children who had read the passage on paper showed more positive voltage, just as they had for clearly related words like “flow.” Yet, those who had read the passage on screens showed more negative activity, just as they had for unrelated words like “bucket.”

    For the researchers, the brainwave difference for ambiguous words was a sign that students were engaging in “deeper” reading on paper. According to this theory, the more deeply information is processed, the more associations the brain makes. The electrical activity the neuroscientists detected reveals the traces of these associations and connections. 

    Despite this indication of deeper reading, the researchers didn’t detect any differences in basic comprehension. The children in this experiment did just as well on a simple comprehension test after reading a passage on paper as they did on screens. The neuroscientists told me that the comprehension test they administered was only to verify that the children had actually read the passage and wasn’t designed to detect deeper reading. I wish, however, that the children had been asked to do something involving more analysis to buttress the researchers’ argument that students had engaged in deeper reading on paper.

    Virginia Clinton-Lisell, a reading researcher at the University of North Dakota who was not involved in this study, said she was “skeptical” of its conclusions, in part because the word-association exercise the neuroscientists created hasn’t been validated by outside researchers. Brain activation during a word association exercise may not be proof that we process language more thoroughly or deeply on paper.

    One noteworthy result from this experiment concerns speed. Many reading experts have believed that comprehension is often worse on screens because students are skimming rather than reading. But in the controlled conditions of this laboratory experiment, there were no differences in reading speed: 57 seconds on the laptop compared with 58 seconds on paper – statistically equivalent in a small experiment like this. And so that raises more questions about why the brain acts differently between the two media.

    “I’m not sure why one would process some visual images more deeply than others if the subjects spent similar amounts of time looking at them,” said Timothy Shanahan, a reading research expert and a professor emeritus at the University of Illinois at Chicago. 

    None of these studies settles the debate over reading on screens versus paper. All of them ignore the promise of interactive features, such as glossaries and games, which can swing the advantage to electronic texts. Early research can be messy, and that’s a normal part of the scientific process. But so far, the evidence seems to corroborate conventional reading research: something different is going on when kids log in rather than turn a page.

    This story about reading on screens vs. paper was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

  • OPINION: There’s a promising path to get students back on track to graduation – The Hechinger Report

    Rates of chronic absenteeism are at record-high levels. More than 1 in 4 students missed 10 percent or more of the 2021-22 school year. That means millions of students missed out on regular instruction, not to mention the social and emotional benefits of interacting with peers and trusted adults.

    Moreover, two-thirds of the nation’s students attended a school where chronic absence rates reached at least 20 percent. Such levels disrupt entire school communities, including the students who are regularly attending.

    The scope and scale of this absenteeism crisis demand a new generation of student support.

    Fortunately, a recent study suggests a promising path for getting students back in school and back on track to graduation. A group of nearly 50 middle and high schools saw reductions in chronic absenteeism and course failure rates after one year of harnessing the twin powers of data and relationships.

    From the 2021-22 to 2022-23 school years, the schools’ chronic absenteeism rates dropped by 5.4 percentage points, and the share of students failing one or more courses went from 25.5 percent to 20.5 percent. In the crucial ninth grade, course failure rates declined by 9.2 percentage points.

    These encouraging results come from the first cohort of rural and urban schools and communities partnering with the GRAD Partnership, a collective of nine organizations, to grow the use of “student success systems” into a common practice.

    Student success systems take an evidence-based approach to organizing school communities to better support the academic progress and well-being of all students.

    They were developed with input from hundreds of educators and build on the successes of earlier student support efforts — like early warning systems and on-track initiatives — to meet students’ post-pandemic needs.

    Related: Widen your perspective. Our free biweekly newsletter consults critical voices on innovation in education.

    Importantly, student success systems offer schools a way to identify school, grade-level and classroom factors that impact attendance; they then deliver timely supports to meet individual students’ needs. They do this, in part, by explicitly valuing supportive relationships and responding to the insights that students and the adults who know them bring to the table.

    Valuable relationships include not only those between students and teachers, and schools and families, but also those among peer groups and within the entire school community. Schools cannot address the attendance crisis without rebuilding and fostering these relationships.

    When students feel a sense of connection to school, they are more likely to show up.

    For some students, this connection comes through extracurricular activities like athletics, robotics or band. For others, it may come through a different tie to school.

    Schools haven’t always focused on connections in a concrete way, partly because relationships can feel fuzzy and hard to track. We’re much better at tracking things like grades and attendance.

    Still, schools in the GRAD Partnership cohort show that it can be done.

    These schools established “student success teams” of teachers, counselors and others. The teams meet regularly to look at up-to-date student data and identify and address the root causes of absenteeism with insight and input from families and communities, as well as the students themselves.

    The teams often use low-tech relationship-mapping tools to help identify students who are disconnected from activities or mentors. One school’s student success team used these tools to ensure that all students were connected to at least one activity — and even created new clubs for students with unique interests. Their method was one that any school could replicate: collaborating on a Google spreadsheet.

    Another school identified students who would benefit from a new student mentoring program focused on building trusting relationships.

    Related: PROOF POINTS: The chronic absenteeism puzzle

    Some schools have used surveys of student well-being to gain insight on how students feel about school, themselves and life in general — and have then used the information to develop supports.

    And in an example of building supportive community relationships, one of the GRAD Partnership schools worked with local community organizations to host a resource night event at which families were connected on the spot to local providers who could help them overcome obstacles to regular attendance — such as medical and food needs, transportation and housing issues and unemployment.

    Turning the tide against our current absenteeism crisis does not have a one-and-done solution — it will involve ongoing collaborative efforts guided by data and grounded in relationships that take time to build.

    Without these efforts, the consequences will be severe both for individual students and our country as a whole.

    Robert Balfanz is a research professor at the Center for Social Organization of Schools at Johns Hopkins University School of Education, where he is the director of the Everyone Graduates Center.

    This story about post-pandemic education was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s newsletter.

  • PROOF POINTS: Teens are looking to AI for information and answers, two surveys show

    Two new surveys, both released this month, show how high school and college-age students are embracing artificial intelligence. There are some inconsistencies and many unanswered questions, but what stands out is how much teens are turning to AI for information and to ask questions, not just to do their homework for them. And they’re using it for personal reasons as well as for school. Another big takeaway is that there are different patterns by race and ethnicity, with Black, Hispanic and Asian American students often adopting AI faster than white students.

    The first report, released on June 3, was conducted by three nonprofit organizations, Hopelab, Common Sense Media, and the Center for Digital Thriving at the Harvard Graduate School of Education. These organizations surveyed 1,274 teens and young adults aged 14-22 across the U.S. from October to November 2023. At that time, only half the teens and young adults said they had ever used AI, with just 4 percent using it daily or almost every day. 

    Emily Weinstein, executive director for the Center for Digital Thriving, a research center that investigates how youth are interacting with technology, said that more teens are “certainly” using AI now that these tools are embedded in more apps and websites, such as Google Search. Last October and November, when this survey was conducted, teens typically had to take the initiative to navigate to an AI site and create an account. An exception was Snapchat, a social media app that had already added an AI chatbot for its users. 

    More than half of the early adopters said they had used AI for getting information and for brainstorming, the first and second most popular uses. This survey didn’t ask teens if they were using AI for cheating, such as prompting ChatGPT to write their papers for them. However, among the half of respondents who were already using AI, fewer than half – 46 percent – said they were using it for help with school work. The fourth most common use was for generating pictures.

    The survey also asked teens a couple of open-response questions. Some teens told researchers that they are asking AI private questions that they were too embarrassed to ask their parents or their friends. “Teens are telling us I have questions that are easier to ask robots than people,” said Weinstein.

    Weinstein wants to know more about the quality and the accuracy of the answers that AI is giving teens, especially those with mental health struggles, and how privacy is being protected when students share personal information with chatbots.

    The second report, released on June 11, was conducted by Impact Research and commissioned by the Walton Family Foundation. In May 2024, Impact Research surveyed 1,003 teachers, 1,001 students aged 12-18, 1,003 college students, and 1,000 parents about their use and views of AI.

    This survey, which took place six months after the Hopelab-Common Sense survey, demonstrated how quickly usage is growing. It found that 49 percent of students, aged 12-18, said they used ChatGPT at least once a week for school, up 26 percentage points since 2023. Forty-nine percent of college undergraduates also said they were using ChatGPT every week for school but there was no comparison data from 2023.

    Among 12- to 18-year-olds and college students who had used AI chatbots for school, 56 percent said they had used it for help in writing essays and other writing assignments. Undergraduate students were more than twice as likely as 12- to 18-year-olds to say using AI felt like cheating, 22 percent versus 8 percent. Earlier 2023 surveys of student cheating by scholars at Stanford University did not detect an increase in cheating with ChatGPT and other generative AI tools. But as students use AI more, students’ understanding of what constitutes cheating may also be evolving. 

    More than 60 percent of college students who used AI said they were using it to study for tests and quizzes. Half of the college students who used AI said they were using it to deepen their subject knowledge, perhaps treating it as an online encyclopedia. There was no indication from this survey whether students were checking the accuracy of the information.

    Both surveys noticed differences by race and ethnicity. The first Hopelab-Common Sense survey found that 7 percent of Black students, aged 14-22, were using AI every day, compared with 5 percent of Hispanic students and 3 percent of white students. In the open-ended questions, one Black teen girl wrote that, with AI, “we can change who we are and become someone else that we want to become.” 

    The Walton Foundation survey found that Hispanic and Asian American students were sometimes more likely to use AI than white and Black students, especially for personal purposes. 

    These are all early snapshots that are likely to keep shifting. OpenAI’s technology is expected to be integrated into Apple’s iPhones, iPads and computers this fall. “These numbers are going to go up and they’re going to go up really fast,” said Weinstein. “Imagine that we could go back 15 years in time when social media use was just starting with teens. This feels like an opportunity for adults to pay attention.”

    This story about ChatGPT in education was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

  • PROOF POINTS: Writing researcher finds AI feedback ‘better than I thought’

    Researchers from the University of California, Irvine, and Arizona State University found that human feedback was generally a bit better than AI feedback, but AI was surprisingly good. Credit: Getty Images

    This week I challenged my editor to face off against a machine. Barbara Kantrowitz gamely accepted, under one condition: “You have to file early.” Ever since ChatGPT arrived in 2022, many journalists have made a public stunt out of asking the new generation of artificial intelligence to write their stories. Those AI stories were often bland and sprinkled with errors. I wanted to understand how well ChatGPT handled a different aspect of writing: giving feedback.

    My curiosity was piqued by a new study, published in the June 2024 issue of the peer-reviewed journal Learning and Instruction, that evaluated the quality of ChatGPT’s feedback on students’ writing. A team of researchers compared AI with human feedback on 200 history essays written by students in grades 6 through 12, and they determined that human feedback was generally a bit better. Humans had a particular advantage in advising students on something to work on that would be appropriate for where they are in their development as a writer.

    But ChatGPT came close. On a five-point scale that the researchers used to rate feedback quality, with a 5 being the highest quality feedback, ChatGPT averaged a 3.6 compared with a 4.0 average from a team of 16 expert human evaluators. It was a tough challenge. Most of these humans had taught writing for more than 15 years or they had considerable experience in writing instruction. All received three hours of training for this exercise plus extra pay for providing the feedback.

    ChatGPT even beat these experts in one aspect; it was slightly better at giving feedback on students’ reasoning, argumentation and use of evidence from source materials – the features that the researchers had wanted the writing evaluators to focus on.

    “It was better than I thought it was going to be because I didn’t have a lot of hope that it was going to be that good,” said Steve Graham, a well-regarded expert on writing instruction at Arizona State University, and a member of the study’s research team. “It wasn’t always accurate. But sometimes it was right on the money. And I think we’ll learn how to make it better.”

    Average ratings for the quality of ChatGPT and human feedback on 200 student essays

    Researchers rated the quality of the feedback on a five-point scale across five different categories. Criteria-based refers to whether the feedback addressed the main goals of the writing assignment, in this case, to produce a well-reasoned argument about history using evidence from the reading source materials that the students were given. Clear directions refers to whether the feedback included specific examples of something the student did well and clear directions for improvement. Accuracy refers to whether the feedback advice was correct without errors. Essential Features refers to whether the suggestion on what the student should work on next is appropriate for where the student is in their writing development and is an important element of this genre of writing. Supportive Tone refers to whether the feedback is delivered with language that is affirming, respectful and supportive, as opposed to condescending, impolite or authoritarian. (Source: Fig. 1 of Steiss et al., “Comparing the quality of human and ChatGPT feedback of students’ writing,” Learning and Instruction, June 2024.)

    Exactly how ChatGPT is able to give good feedback is something of a black box even to the writing researchers who conducted this study. Artificial intelligence doesn’t comprehend things in the same way that humans do. But somehow, through the neural networks that ChatGPT’s programmers built, it is picking up on patterns from all the writing it has previously digested, and it is able to apply those patterns to a new text. 

    The surprising “relatively high quality” of ChatGPT’s feedback is important because it means that the new artificial intelligence of large language models, also known as generative AI, could potentially help students improve their writing. One of the biggest problems in writing instruction in U.S. schools is that teachers assign too little writing, Graham said, often because teachers feel that they don’t have the time to give personalized feedback to each student. That leaves students without sufficient practice to become good writers. In theory, teachers might be willing to assign more writing or insist on revisions for each paper if students (or teachers) could use ChatGPT to provide feedback between drafts. 

    Despite the potential, Graham isn’t an enthusiastic cheerleader for AI. “My biggest fear is that it becomes the writer,” he said. He worries that students will not limit their use of ChatGPT to helpful feedback, but ask it to do their thinking, analyzing and writing for them. That’s not good for learning. The research team also worries that writing instruction will suffer if teachers delegate too much feedback to ChatGPT. Seeing students’ incremental progress and common mistakes remains important for deciding what to teach next, the researchers said. For example, seeing loads of run-on sentences in your students’ papers might prompt a lesson on how to break them up. But if you don’t see them, you might not think to teach it. Another common concern among writing instructors is that AI feedback will steer everyone to write in the same homogenized way. A young writer’s unique voice could be flattened out before it even has the chance to develop.

    There’s also the risk that students may not be interested in heeding AI feedback. Students often ignore the painstaking feedback that their teachers already give on their essays. Why should we think students will pay attention to feedback if they start getting more of it from a machine? 

    Still, Graham and his research colleagues at the University of California, Irvine, are continuing to study how AI could be used effectively and whether it ultimately improves students’ writing. “You can’t ignore it,” said Graham. “We either learn to live with it in useful ways, or we’re going to be very unhappy with it.”

    Right now, the researchers are studying how students might converse back-and-forth with ChatGPT like a writing coach in order to understand the feedback and decide which suggestions to use.

    Example of feedback from a human and ChatGPT on the same essay

    In the current study, the researchers didn’t track whether students understood or employed the feedback, but only sought to measure its quality. Judging the quality of feedback is a rather subjective exercise, just as feedback itself is a bundle of subjective judgment calls. Smart people can disagree on what good writing looks like and how to revise bad writing. 

    In this case, the research team came up with its own criteria for what constitutes good feedback on a history essay. They instructed the humans to focus on the student’s reasoning and argumentation, rather than, say, grammar and punctuation. They also told the human raters to adopt a “glow and grow strategy” for delivering the feedback by first finding something to praise, then identifying a particular area for improvement.

    The human raters provided this kind of feedback on hundreds of history essays from 2021 to 2023, as part of an unrelated study of an initiative to boost writing at school. The researchers randomly grabbed 200 of these essays and fed the raw student writing – without the human feedback – to version 3.5 of ChatGPT and asked it to give feedback, too.

    At first, the AI feedback was terrible, but as the researchers tinkered with the instructions, or the “prompt,” they typed into ChatGPT, the feedback improved. The researchers eventually settled upon this wording: “Pretend you are a secondary school teacher. Provide 2-3 pieces of specific, actionable feedback on each of the following essays…. Use a friendly and encouraging tone.” The researchers also fed the assignment that the students were given, for example, “Why did the Montgomery Bus Boycott succeed?” along with the reading source material that the students were provided. (More details about how the researchers prompted ChatGPT are explained in Appendix C of the study.)
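
    The prompting approach described above can be sketched as a simple prompt-assembly step. This is a minimal illustration, not the study's actual code: the quoted instructions come from the article, while the function name, section labels and placeholder inputs are assumptions.

    ```python
    # Sketch of assembling a feedback prompt like the one the researchers
    # settled on. Only the quoted instruction sentences are from the study;
    # the structure around them is illustrative.

    def build_feedback_prompt(assignment: str, sources: str, essay: str) -> str:
        """Combine the teacher persona, the assignment, the reading sources,
        and the raw student essay into a single prompt string."""
        instructions = (
            "Pretend you are a secondary school teacher. "
            "Provide 2-3 pieces of specific, actionable feedback on each of "
            "the following essays. Use a friendly and encouraging tone."
        )
        return (
            f"{instructions}\n\n"
            f"Assignment: {assignment}\n\n"
            f"Reading sources:\n{sources}\n\n"
            f"Student essay:\n{essay}"
        )

    prompt = build_feedback_prompt(
        "Why did the Montgomery Bus Boycott succeed?",
        "(the reading passages given to students)",
        "(the raw student essay, without the human feedback)",
    )
    print(prompt.splitlines()[0])
    ```

    A string like this would then be sent to the model (version 3.5 of ChatGPT in the study) once per essay; Appendix C of the paper has the researchers' exact wording.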

    The humans took about 20 to 25 minutes per essay. ChatGPT’s feedback came back instantly. The humans sometimes marked up sentences by, for example, showing a place where the student could have cited a source to buttress an argument. ChatGPT didn’t write any in-line comments and only wrote a note to the student. 

    Researchers then read through both sets of feedback – human and machine – for each essay, comparing and rating them. (It was supposed to be a blind comparison test and the feedback raters were not told who authored each one. However, the language and tone of ChatGPT were distinct giveaways, and the in-line comments were a tell of human feedback.)

    Humans appeared to have a clear edge with the very strongest and the very weakest writers, the researchers found. They were better at pushing a strong writer a little bit further, for example, by suggesting that the student consider and address a counterargument. ChatGPT struggled to come up with ideas for a student who was already meeting the objectives of a well-argued essay with evidence from the reading source materials. ChatGPT also struggled with the weakest writers. The researchers had to drop two of the essays from the study because they were so short that ChatGPT didn’t have any feedback for the student. The human rater was able to parse out some meaning from a brief, incomplete sentence and offer a suggestion. 

    In one student essay about the Montgomery Bus Boycott, reprinted above, the human feedback seemed too generic to me: “Next time, I would love to see some evidence from the sources to help back up your claim.” ChatGPT, by contrast, specifically suggested that the student could have mentioned how much revenue the bus company lost during the boycott – an idea that was mentioned in the reading source materials. ChatGPT also suggested that the student could have mentioned specific actions that the NAACP and other organizations took. But the student had actually mentioned a few of these specific actions in his essay. That part of ChatGPT’s feedback was plainly inaccurate. 

    In another student writing example, also reprinted below, the human straightforwardly pointed out that the student had gotten an historical fact wrong. ChatGPT appeared to affirm that the student’s mistaken version of events was correct.

    Another example of feedback from a human and ChatGPT on the same essay

    So how did ChatGPT’s review of my first draft stack up against my editor’s? One of the researchers on the study team suggested a prompt that I could paste into ChatGPT. After a few back and forth questions with the chatbot about my grade level and intended audience, it initially spit out some generic advice that had little connection to the ideas and words of my story. It seemed more interested in format and presentation, suggesting a summary at the top and subheads to organize the body. One suggestion would have made my piece too long-winded. Its advice to add examples of how AI feedback might be beneficial was something that I had already done. I then asked for specific things to change in my draft, and ChatGPT came back with some great subhead ideas. I plan to use them in my newsletter, which you can see if you sign up for it here. (And if you want to see my prompt and dialogue with ChatGPT, here is the link.) 

    My human editor, Barbara, was the clear winner in this round. She tightened up my writing, fixed style errors and helped me brainstorm this ending. Barbara’s job is safe – for now. 

    This story about AI feedback was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.


    Jill Barshay


  • PROOF POINTS: We have tried paying teachers based on how much students learn. Now schools are expanding that idea to contractors and vendors.


    Schools spend billions of dollars a year on products and services, including everything from staplers and textbooks to teacher coaching and training. Does any of it help students learn more? Some educational materials end up mothballed in closets. Much software goes unused. Yet central-office bureaucrats frequently renew their contracts with outside vendors regardless of usage or efficacy.

    One idea for smarter education spending is for schools to sign smarter contracts, where part of the payment is contingent upon whether students use the services and learn more. It’s called outcomes-based contracting and is a way of sharing risk between buyer (the school) and seller (the vendor). Outcomes-based contracting is most common in healthcare. For example, a health insurer might pay a pharmaceutical company more for a drug if it actually improves people’s health, and less if it doesn’t. 

    Although the idea is relatively new in education, many schools tried a different version of it – evaluating and paying teachers based on how much their students’ test scores improved – in the 2010s. Teachers didn’t like it, and enthusiasm for these teacher accountability schemes waned. Then, in 2020, Harvard University’s Center for Education Policy Research announced that it was going to test the feasibility of paying tutoring companies by how much students’ test scores improved. 

    The initiative was particularly timely in the wake of the pandemic.  The federal government would eventually give schools almost $190 billion to reopen and to help students who fell behind when schools were closed. Tutoring became a leading solution for academic recovery and schools contracted with outside companies to provide tutors. Many educators worried that billions could be wasted on low-quality tutors who didn’t help anyone. Could schools insist that tutoring companies make part of their payment contingent upon whether student achievement increased? 

    The Harvard center recruited a handful of school districts that wanted to try an outcomes-based contract. The researchers and districts shared ideas on how to set performance targets. How much should they expect student achievement to grow from a few months of tutoring? How much of the contract should be guaranteed to the vendor for delivering tutors, and how much should be contingent on student performance? 

    The first hurdle was whether tutoring companies would be willing to offer services without knowing exactly how much they would be paid. School districts sent out requests for proposals from online tutoring companies. Tutoring companies bid and the terms varied. One online tutoring company agreed that 40 percent of a $1.2 million contract with the Duval County Public Schools in Jacksonville, Florida, would be contingent upon student performance. Another online tutoring company signed a contract with Ector County schools in the Odessa, Texas, region that specified that the company had to accept a penalty if kids’ scores declined.

    In the middle of the pilot, the outcomes-based contracting initiative moved from the Harvard center to the Southern Education Foundation, another nonprofit, and I recently learned how the first group of contracts panned out from Jasmine Walker, a senior manager there. Walker had a first-hand view because until the fall of 2023, she was the director of mathematics in Florida’s Duval County schools, where she oversaw the outcomes-based contract on tutoring. 

    Here are some lessons she learned: 

    Planning is time-consuming

    Drawing up an outcomes-based contract requires analyzing years of historical testing data and documenting how much achievement has typically grown for the students who need tutoring. Then, educators have to decide – based on the research evidence for tutoring – how much they could reasonably expect student achievement to grow after 12 weeks or more. 

    Incomplete data was a common problem

    The first school district in the pilot group launched its outcomes-based contract in the fall of 2021. In the middle of the pilot, school leadership changed, layoffs hit, and the leaders of the tutoring initiative left the district. With no one in the district’s central office left to track it, there was no clear evidence of whether tutoring helped the 1,000 students who received it. Half the students attended 70 percent of the tutoring sessions; half didn’t. Test scores for almost two-thirds of the tutored students increased between the start and the end of the tutoring program. But these students also had regular math classes each day and likely would have posted some achievement gains anyway. 

    Delays in settling contracts led to fewer tutored students

    Walker said two school districts weren’t able to start tutoring children until January 2023, instead of the fall of 2022 as originally planned, because it took so long to iron out contract details and obtain approvals inside the districts. Many schools didn’t want to wait and launched other interventions to help needy students sooner. Understandably, schools didn’t want to yank these students away from those other interventions midyear. 

    That delay had big consequences in Duval County. Only 451 students received tutoring instead of a projected 1,200.  Fewer students forced Walker to recalculate Duval’s outcomes-based contract. Instead of a $1.2 million contract with $480,000 of it contingent on student outcomes, she downsized it to $464,533 with $162,363 contingent. The tutored students hit 53 percent of the district’s growth and proficiency goals, leading to a total payout of $393,220 to the tutoring company – far less than the company had originally anticipated. But the average per-student payout of $872 was in line with the original terms of between $600 and $1,000 per student. 
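For readers who want to check the arithmetic, the Duval County figures reported above hang together. A quick sketch using only the numbers in the story:

```python
# Figures reported for Duval County's outcomes-based tutoring contract.
original_total, original_contingent = 1_200_000, 480_000
revised_total, revised_contingent = 464_533, 162_363
total_payout, students = 393_220, 451

# The original deal put 40 percent of the contract at risk.
original_share = original_contingent / original_total  # 0.4

# The actual payout works out to roughly $872 per tutored student,
# inside the original range of $600 to $1,000 per student.
per_student = total_payout / students  # ~871.9
```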

    The bottom line is still uncertain

    What we don’t know from any of these case studies is whether similar students who didn’t receive tutoring also made similar growth and proficiency gains. Maybe it’s all the other things that teachers were doing that made the difference. In Duval County, for example, proficiency rates in math rose from 28 percent of students to 46 percent of students. Walker believes that outcomes-based contracting for tutoring was “one lever” of many. 

    It’s unclear if outcomes-based contracting is a way for schools to save money. This kind of intensive tutoring – three times a week or more during the school day – is new and the school districts didn’t have previous pre-pandemic tutoring contracts for comparison. But generally, if all the student goals are met, companies stand to earn more in an outcomes-based contract than they would have otherwise, Walker said.

    “It’s not really about saving money,” said Walker.  “What we want is for students to achieve. I don’t care if I spent the whole contract amount if the students actually met the outcomes, because in the past, let’s face it, I was still paying and they were not achieving outcomes.”

    The biggest change with outcomes-based contracting, Walker said, was the partnership with the provider. One contractor monitored student attendance during tutoring sessions, called her when attendance slipped and asked her to investigate. Students were given rewards for attending their tutoring sessions and the tutoring company even chipped in to pay for them. “Kids love Takis,” said Walker. 

    Advice for schools

    Walker has two pieces of advice for schools considering outcomes-based contracts. One, she says, is to make the contingency amount at least 40 percent of the contract. Smaller incentives may not motivate the vendor. For her second outcomes-based contract in Duval County, Walker boosted the contingency amount to half the contract. To earn it, the tutoring company needs the students it is tutoring to hit growth and proficiency goals. That tutoring took place during the current 2023-24 school year. Based on mid-year results, students exceeded expectations, but full-year results are not yet in. 

    More importantly, Walker says the biggest lesson she learned was to include teachers, parents and students earlier in the contract negotiation process.  She says “buy in” from teachers is critical because classroom teachers are actually making sure the tutoring happens. Otherwise, an outcomes-based contract can feel like yet “another thing” that the central office is adding to a teacher’s workload. 

    Walker also said she wished she had spent more time educating parents and students on the importance of attending school and their tutoring sessions. “It’s important that everyone understands the mission,” said Walker. 

    Innovation can be rocky, especially at the beginning. Now the Southern Education Foundation is working to expand its outcomes-based contracting initiative nationwide. A second group of four school districts launched outcomes-based contracts for tutoring this 2023-24 school year. Walker says that the rate cards and recordkeeping are improving from the first pilot round, which took place during the stress and chaos of the pandemic. 

    The foundation is also seeking to expand the use of outcomes-based contracts beyond tutoring to education technology and software. Nine districts are slated to launch outcomes-based contracts for ed tech this fall. Walker’s next dream is to design outcomes-based contracts around curriculum and teacher training. I’ll be watching. 

    This story about outcomes-based contracting was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.


  • PROOF POINTS: AI essay grading is already as ‘good as an overburdened’ teacher, but researchers say it needs more work


    Grading papers is hard work. “I hate it,” a teacher friend confessed to me. And that’s a major reason why middle and high school teachers don’t assign more writing to their students. Even an efficient high school English teacher who can read and evaluate an essay in 20 minutes would spend 3,000 minutes, or 50 hours, grading if she’s teaching six classes of 25 students each. There aren’t enough hours in the day. 

    Could ChatGPT relieve teachers of some of the burden of grading papers? Early research is finding that the new artificial intelligence of large language models, also known as generative AI, is approaching the accuracy of a human in scoring essays and is likely to become even better soon. But we still don’t know whether offloading essay grading to ChatGPT will ultimately improve or harm student writing.

    Tamara Tate, a researcher at the University of California, Irvine, and an associate director of her university’s Digital Learning Lab, is studying how teachers might use ChatGPT to improve writing instruction. Most recently, Tate and her seven-member research team, which includes writing expert Steve Graham at Arizona State University, compared how ChatGPT stacked up against humans in scoring 1,800 history and English essays written by middle and high school students. 

    Tate said ChatGPT was “roughly speaking, probably as good as an average busy teacher” and “certainly as good as an overburdened below-average teacher.” But, she said, ChatGPT isn’t yet accurate enough to be used on a high-stakes test or on an essay that would affect a final grade in a class.

    Tate presented her study on ChatGPT essay scoring at the 2024 annual meeting of the American Educational Research Association in Philadelphia in April. (The paper is under peer review for publication and is still undergoing revision.) 

    Most remarkably, the researchers obtained these fairly decent essay scores from ChatGPT without training it first with sample essays. That means it is possible for any teacher to use it to grade any essay instantly with minimal expense and effort. “Teachers might have more bandwidth to assign more writing,” said Tate. “You have to be careful how you say that because you never want to take teachers out of the loop.” 

    Writing instruction could ultimately suffer, Tate warned, if teachers delegate too much grading to ChatGPT. Seeing students’ incremental progress and common mistakes remain important for deciding what to teach next, she said. For example, seeing loads of run-on sentences in your students’ papers might prompt a lesson on how to break them up. But if you don’t see them, you might not think to teach it. 

    In the study, Tate and her research team calculated that ChatGPT’s essay scores were in “fair” to “moderate” agreement with those of well-trained human evaluators. In one batch of 943 essays, ChatGPT was within a point of the human grader 89 percent of the time. On a six-point grading scale that researchers used in the study, ChatGPT often gave an essay a 2 when an expert human evaluator thought it was really a 1. But this level of agreement – within one point – dropped to 83 percent of the time in another batch of 344 English papers and slid even farther to 76 percent of the time in a third batch of 493 history essays.  That means there were more instances where ChatGPT gave an essay a 4, for example, when a teacher marked it a 6. And that’s why Tate says these ChatGPT grades should only be used for low-stakes purposes in a classroom, such as a preliminary grade on a first draft.

    ChatGPT scored an essay within one point of a human grader 89 percent of the time in one batch of essays

    Corpus 3 refers to one batch of 943 essays, more than half of the 1,800 essays scored in this study. In the original table, highlighting marks exact score matches between ChatGPT and a human, as well as scores in which ChatGPT was within one point of the human score. Source: Tamara Tate, University of California, Irvine (2024).

    Still, this level of accuracy was impressive because even teachers disagree on how to score an essay and one-point discrepancies are common. Exact agreement, which only happens half the time between human raters, was worse for AI, which matched the human score exactly only about 40 percent of the time. Humans were far more likely to give a top grade of a 6 or a bottom grade of a 1. ChatGPT tended to cluster grades more in the middle, between 2 and 5. 
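The two agreement measures used throughout these comparisons, exact match and agreement within one point, are simple to compute. A minimal sketch (the sample scores in the usage note are invented for illustration):

```python
def agreement_rates(ai_scores, human_scores):
    """Return (exact_match_rate, within_one_point_rate) for paired essay scores."""
    pairs = list(zip(ai_scores, human_scores))
    exact = sum(a == h for a, h in pairs) / len(pairs)
    within_one = sum(abs(a - h) <= 1 for a, h in pairs) / len(pairs)
    return exact, within_one
```

On the study's six-point scale, a pair like (2, 1) counts toward the within-one-point rate but not the exact-match rate; for example, scores of [2, 3, 4, 5] against human marks of [1, 3, 6, 5] give an exact rate of 0.5 and a within-one rate of 0.75.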

    Tate set up ChatGPT for a tough challenge, competing against teachers and experts with PhDs who had received three hours of training in how to properly evaluate essays. “Teachers generally receive very little training in secondary school writing and they’re not going to be this accurate,” said Tate. “This is a gold-standard human evaluator we have here.”

    The raters had been paid to score these 1,800 essays as part of three earlier studies on student writing. Researchers fed these same student essays – ungraded – into ChatGPT and asked it to score them cold; ChatGPT hadn’t been given any graded examples to calibrate its scores. All the researchers did was copy and paste an excerpt of the same scoring guidelines that the humans used, called a grading rubric, into ChatGPT and tell it to “pretend” it was a teacher and score the essays on a scale of 1 to 6. 
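Reproducing that zero-shot setup takes little more than assembling a prompt and pulling a number out of the reply. A hypothetical sketch (the researchers' exact wording is in their paper, so the phrasing below is only an approximation, and the helper names are made up):

```python
import re

def build_scoring_prompt(rubric_excerpt, essay):
    # Mirrors the setup as described: paste a rubric excerpt and ask the
    # model to "pretend" it is a teacher scoring on a 1-to-6 scale.
    return (
        "Pretend you are a teacher. Using the rubric below, score the "
        "following essay on a scale of 1 to 6. Reply with the score only.\n\n"
        f"Rubric:\n{rubric_excerpt}\n\nEssay:\n{essay}"
    )

def parse_score(reply):
    # Pull the first digit from 1-6 out of the model's reply; None if absent.
    m = re.search(r"\b([1-6])\b", reply)
    return int(m.group(1)) if m else None
```

The parsing step matters in practice: chat models often wrap a score in a sentence, and a study pipeline has to recover the bare number before it can be compared with the human grade.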

    Older robo graders

    Earlier versions of automated essay graders have had higher rates of accuracy. But they were expensive and time-consuming to create because scientists had to train the computer with hundreds of human-graded essays for each essay question. That’s economically feasible only in limited situations, such as for a standardized test, where thousands of students answer the same essay question. 

    Earlier robo graders could also be gamed, once a student understood the features that the computer system was grading for. In some cases, nonsense essays received high marks if fancy vocabulary words were sprinkled in them. ChatGPT isn’t grading for particular hallmarks, but is analyzing patterns in massive datasets of language. Tate says she hasn’t yet seen ChatGPT give a high score to a nonsense essay. 

    Tate expects ChatGPT’s grading accuracy to improve rapidly as new versions are released. Already, the research team has detected that the newer 4.0 version, which requires a paid subscription, is scoring more accurately than the free 3.5 version. Tate suspects that small tweaks to the grading instructions, or prompts, given to ChatGPT could improve existing versions. She is interested in testing whether ChatGPT’s scoring could become more reliable if a teacher trained it with just a few, perhaps five, sample essays that she has already graded. “Your average teacher might be willing to do that,” said Tate.

    Many ed tech startups, and even well-known vendors of educational materials, are now marketing new AI essay robo graders to schools. Many of them are powered under the hood by ChatGPT or another large language model. I learned from this study that accuracy rates can be reported in ways that make the new AI graders seem more accurate than they are. Tate’s team calculated that, on a population level, there was no difference between human and AI scores. ChatGPT can already reliably tell you the average essay score in a school or, say, in the state of California. 

    Questions for AI vendors

    At this point, it is not as accurate in scoring an individual student. And a teacher wants to know exactly how each student is doing. Tate advises teachers and school leaders who are considering using an AI essay grader to ask specific questions about accuracy rates at the student level: What is the rate of exact agreement between the AI grader and a human rater on each essay? How often are they within one point of each other?

    The next step in Tate’s research is to study whether student writing improves after having an essay graded by ChatGPT. She’d like teachers to try using ChatGPT to score a first draft and then see if it encourages revisions, which are critical for improving writing. Tate thinks teachers could make it “almost like a game: how do I get my score up?” 

    Of course, it’s unclear if grades alone, without concrete feedback or suggestions for improvement, will motivate students to make revisions. Students may be discouraged by a low score from ChatGPT and give up. Many students might ignore a machine grade and only want to deal with a human they know. Still, Tate says some students are too scared to show their writing to a teacher until it’s in decent shape, and seeing their score improve on ChatGPT might be just the kind of positive feedback they need. 

    “We know that a lot of students aren’t doing any revision,” said Tate. “If we can get them to look at their paper again, that is already a win.”

    That does give me hope, but I’m also worried that kids will just ask ChatGPT to write the whole essay for them in the first place.

    This story about AI essay scoring was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.


  • 7 realities for Black students in America, 70 years after Brown – The Hechinger Report


    Linda Brown was a third grader in Topeka, Kansas, when her father, Oliver Brown, tried to enroll her in the white public school four blocks from her home. Otherwise, she would have had to walk across railroad tracks to take a bus to attend the nearest all-Black one.

    When she was denied admission, Oliver Brown sued.

    The case, and four others from Delaware, the District of Columbia, South Carolina and Virginia, were combined and made their way to the Supreme Court. All of them involved schoolchildren required to attend all-Black schools that were of lower quality than schools for white children.

    While the Supreme Court found in 1954 in Oliver Brown’s favor, years would pass before desegregation  of American schools began in earnest. And for many Black students now, 70 years since the nation’s highest court held unanimously that separate is inherently unequal, educational resources and access remain woefully uneven.

    Here are some of the racial realities of American public education today:

    25: That’s the percentage increase in Black-white school segregation between 1991 and 2019, according to an analysis of 533 districts by sociologists Sean Reardon at Stanford University and Ann Owens at the University of Southern California. While school segregation fell dramatically beginning in 1968 with a series of court orders, it began to tick up in the early 1990s because of the expiration of court orders mandating integration, school choice policies, and other factors. Still, schools remain significantly less segregated than they were before and immediately after the Brown decision.

    10: That’s the percentage of Black students learning in a school where more than 90 percent of their classmates were also Black, according to 2022 Department of Education data. That figure is down from 23 percent in 2000. Even as Black-white school segregation has increased slightly since the early 1990s, the number of extremely segregated schools has shrunk, in part because of an increase in the Hispanic student population. Meanwhile, from 2000 to 2022, the percentage of white students attending a school that is 90 percent or more white fell from 44 percent to 14 percent.

    6: This is the percentage of teachers in American public schools who are Black. By comparison, Black students make up about 15 percent of public school enrollment. One legacy of Brown v. Board is the dearth of Black teachers: More than 38,000 Black educators lost their jobs after the decision came down, as white administrators of integrating schools refused to hire Black professionals for teaching roles or pushed them out. Yet research suggests that more Black teachers in the classroom can help boost Black student outcomes such as college enrollment.


    2014: That’s the year that Wilcox County High School, in rural Georgia, held its first school-sponsored, racially integrated prom. After desegregation, parents in the community, like many across the South, began organizing private, off-site proms to keep the events exclusively white. That practice persisted in Wilcox County until 2013, when high schoolers organized a prom for both white and Black students. The next year, the school made it official, finally holding an integrated event.

    $14,385: This is the average amount spent per Black pupil in public school, compared with $14,263 per white student, according to a 2022 analysis of 2017-18 data by the Federal Reserve Bank of St. Louis. The researchers found that while school district spending was very similar for Black and white students, the sources of funding differed somewhat, with Black students receiving more federal funding and white students receiving more local funding. The amount of money spent on instruction per pupil, meanwhile, was slightly lower for Black students – $7,169 – than for white students ($7,329). The researchers attributed that to a number of small, predominantly white districts that spent far above average on their students.

    7: That’s the share of incoming students at the University of Mississippi who were Black in 2022 — even though nearly half the state’s public high school graduates, 48 percent, were Black that year. That gap between Black students graduating from high school in Mississippi and those enrolling at the state flagship university has grown over the past decade, according to a Hechinger analysis. Similar trends are playing out elsewhere in the country: In 2022, 16 state flagship universities had a gap of 10 percentage points or more between Black high school graduates and incoming freshmen. And at two dozen flagships, the gap for Black students stayed the same or grew between 2019 and 2022. Yet public flagships were created to educate the residents of their states, and most make that explicit.

    Revisiting Brown, 70 years later

    The Hechinger Report takes a look at the decision that was intended to end segregation in public schools in an exploration of what has, and hasn’t, changed since school segregation was declared illegal.

    700: That’s roughly how many high schools are offering the College Board’s Advanced Placement African American Studies course this school year, more than 10 times as many as offered it a year earlier, when it debuted. The course was created in part in response to longstanding concerns that African American history has been downplayed or left out of K-12 curriculum. But the A.P. course, an elective, became ensnared in politics. The content has evolved after criticism that it introduced students to “divisive concepts,” among other reasons; it has been banned or restricted in some states. Nevertheless, about 13,000 students are enrolled in this second year of the pilot course, which took more than 10 years to develop. Forty-five percent of students taking the class had never previously taken another AP course, which can earn them college credit.

    This story about Brown v. Board of Education was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.
