NYC Schools Launch Groundbreaking Emergency Alert System for Active Shooter Situations
Every morning, parents across New York (and the country) watch their children head off to school with the same hope: that they’ll be safe, learn well, and come home smiling. But at a time when school shootings have become heartbreakingly familiar, that hope gets harder to hold onto. To strengthen safety at school, New York City has taken a groundbreaking step by launching the nation’s first Emergency Alert System that links schools directly to 911, ensuring help can be initiated within seconds.
Yesterday, Mayor Eric Adams and Chief Technology Officer Matthew Fraser announced the rollout of a new Emergency Alert System (EAS), the first in the nation to directly connect public schools to 911 dispatchers. The pilot launched this fall at Spring Creek Campus in Brooklyn, and plans are set to expand to 25 school buildings across all five boroughs (representing 51 schools) during the 2025–2026 school year.
Normally, in an emergency, a school staff member must call 911, describe the situation, confirm the address, and wait while the call is routed to a dispatcher, a process that can take minutes.
The new Emergency Alert System bypasses that delay. Each participating school will be equipped with multiple fixed buttons and wireless lanyards that can trigger a hard lockdown alert. Once activated, a signal goes straight to the NYPD’s real-time 911 dispatch operations in under 10 seconds. Dispatchers can immediately send units to the location while a school dashboard provides police with the building’s key details.
Inside the school, visual and audible alerts let teachers and students know that 911 has been notified and that a hard lockdown is in effect. Notifications also go out simultaneously to NYPD School Safety and NYC Public Schools officials.
Safety Is the Top Concern
City officials are emphasizing that this new system is about preparedness, not panic. “We never want to imagine the unthinkable,” said NYC Public Schools Chancellor Melissa Aviles-Ramos, “but safety has to remain our top priority. This pilot adds another layer of protection and peace of mind for our students, families, and educators.”
CTO Fraser, a father himself, echoed that sentiment: “There’s nothing more important than knowing your kids will come home safe at the end of the day.”
The new system doesn’t replace existing safety measures such as locked doors, on-site safety agents, and regular lockdown drills; it adds another layer of protection. It works alongside what’s already in place, like the Safer Access Program that keeps entrances secure, the NYPD School Safety Agents in every building, and the emergency protocols schools practice.
Between 2015 and 2025, the K-12 School Shooting Database recorded more than 1,900 school shooting incidents nationwide, with 351 in 2023 and 336 in 2024.
For now, only select schools are part of the pilot program for 2025–2026, and if your child’s school is included, the administration will reach out with details. Lockdown drills will continue as usual. Teachers and staff will receive training on how and when to use the system, and students won’t need to do anything since it’s meant to work quietly behind the scenes. While parents don’t need to do anything different, it’s always a good idea to stay informed through school updates and review emergency procedures with your children.
Of course, no system can eliminate every risk with our current gun laws, but this initiative is another step to protect our children. As Brooklyn District Attorney Eric Gonzalez said, “I pray that these rapid response alerts will never have to be used.”
Grading papers is hard work. “I hate it,” a teacher friend confessed to me. And that’s a major reason why middle and high school teachers don’t assign more writing to their students. Even an efficient high school English teacher who can read and evaluate an essay in 20 minutes would spend 3,000 minutes, or 50 hours, grading if she’s teaching six classes of 25 students each. There aren’t enough hours in the day.
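To make that arithmetic concrete, here is the back-of-the-envelope workload calculation as a few lines of Python. The class sizes and minutes per essay are the article’s hypothetical figures, not data from a study:

```python
# Back-of-the-envelope grading workload from the example above.
classes = 6                # hypothetical teaching load
students_per_class = 25    # hypothetical class size
minutes_per_essay = 20     # an efficient grader's pace

total_minutes = classes * students_per_class * minutes_per_essay
print(total_minutes)       # 3000 minutes
print(total_minutes / 60)  # 50.0 hours
```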
Could ChatGPT relieve teachers of some of the burden of grading papers? Early research is finding that the new artificial intelligence of large language models, also known as generative AI, is approaching the accuracy of a human in scoring essays and is likely to become even better soon. But we still don’t know whether offloading essay grading to ChatGPT will ultimately improve or harm student writing.
Tamara Tate, a researcher at the University of California, Irvine, and an associate director of her university’s Digital Learning Lab, is studying how teachers might use ChatGPT to improve writing instruction. Most recently, Tate and her seven-member research team, which includes writing expert Steve Graham at Arizona State University, compared how ChatGPT stacked up against humans in scoring 1,800 history and English essays written by middle and high school students.
Tate said ChatGPT was “roughly speaking, probably as good as an average busy teacher” and “certainly as good as an overburdened below-average teacher.” But, she said, ChatGPT isn’t yet accurate enough to be used on a high-stakes test or on an essay that would affect a final grade in a class.
Tate presented her study on ChatGPT essay scoring at the 2024 annual meeting of the American Educational Research Association in Philadelphia in April. (The paper is under peer review for publication and is still undergoing revision.)
Most remarkably, the researchers obtained these fairly decent essay scores from ChatGPT without training it first with sample essays. That means it is possible for any teacher to use it to grade any essay instantly with minimal expense and effort. “Teachers might have more bandwidth to assign more writing,” said Tate. “You have to be careful how you say that because you never want to take teachers out of the loop.”
Writing instruction could ultimately suffer, Tate warned, if teachers delegate too much grading to ChatGPT. Seeing students’ incremental progress and common mistakes remains important for deciding what to teach next, she said. For example, seeing loads of run-on sentences in your students’ papers might prompt a lesson on how to break them up. But if you don’t see them, you might not think to teach it.
In the study, Tate and her research team calculated that ChatGPT’s essay scores were in “fair” to “moderate” agreement with those of well-trained human evaluators. In one batch of 943 essays, ChatGPT was within a point of the human grader 89 percent of the time. On a six-point grading scale that researchers used in the study, ChatGPT often gave an essay a 2 when an expert human evaluator thought it was really a 1. But this level of agreement – within one point – dropped to 83 percent of the time in another batch of 344 English papers and slid even farther to 76 percent of the time in a third batch of 493 history essays. That means there were more instances where ChatGPT gave an essay a 4, for example, when a teacher marked it a 6. And that’s why Tate says these ChatGPT grades should only be used for low-stakes purposes in a classroom, such as a preliminary grade on a first draft.
[Chart: ChatGPT scored an essay within one point of a human grader 89 percent of the time in Corpus 3, a batch of 943 essays representing more than half of the 1,800 scored in this study. Source: Tamara Tate, University of California, Irvine (2024).]
Still, this level of accuracy was impressive because even teachers disagree on how to score an essay and one-point discrepancies are common. Exact agreement, which only happens half the time between human raters, was worse for AI, which matched the human score exactly only about 40 percent of the time. Humans were far more likely to give a top grade of a 6 or a bottom grade of a 1. ChatGPT tended to cluster grades more in the middle, between 2 and 5.
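For readers who want to see precisely what these two agreement measures mean, here is a minimal Python sketch that computes exact agreement and within-one-point (“adjacent”) agreement on a 1-to-6 scale. The score pairs are invented for illustration, not the study’s data:

```python
# Minimal sketch of the two agreement measures discussed above.
# The score pairs below are made up for illustration only.
human = [1, 2, 3, 4, 5, 6, 3, 4]   # hypothetical human scores (1-6)
ai    = [3, 2, 3, 5, 4, 5, 3, 4]   # hypothetical ChatGPT scores

n = len(human)
exact = sum(h == a for h, a in zip(human, ai)) / n
within_one = sum(abs(h - a) <= 1 for h, a in zip(human, ai)) / n

print(f"Exact agreement: {exact:.0%}")        # 50%
print(f"Within one point: {within_one:.0%}")  # 88%
```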
Tate set up ChatGPT for a tough challenge, competing against teachers and experts with PhDs who had received three hours of training in how to properly evaluate essays. “Teachers generally receive very little training in secondary school writing and they’re not going to be this accurate,” said Tate. “This is a gold-standard human evaluator we have here.”
The raters had been paid to score these 1,800 essays as part of three earlier studies on student writing. Researchers fed these same student essays – ungraded – into ChatGPT and asked ChatGPT to score them cold. ChatGPT hadn’t been given any graded examples to calibrate its scores. All the researchers did was copy and paste an excerpt of the same scoring guidelines that the humans used, called a grading rubric, into ChatGPT and told it to “pretend” it was a teacher and score the essays on a scale of 1 to 6.
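The zero-shot setup the researchers describe, pasting a rubric excerpt into ChatGPT and asking it to score essays cold, can be approximated in a few lines against OpenAI’s API. This is an illustrative sketch under stated assumptions, not the research team’s code; the model choice, rubric text, and prompt wording here are stand-ins:

```python
# Illustrative sketch of zero-shot essay scoring with a rubric.
# Not the researchers' code; the rubric and prompt wording are stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """6 = insightful, well-organized, fluent ...
1 = minimal response, little organization ..."""  # excerpt of a human rubric

def score_essay(essay_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # the study compared the free 3.5 and paid 4.0 versions
        messages=[
            {"role": "system",
             "content": "Pretend you are a teacher scoring student essays."},
            {"role": "user",
             "content": f"Using this rubric:\n{RUBRIC}\n\n"
                        f"Score the following essay from 1 to 6. "
                        f"Reply with the number only.\n\n{essay_text}"},
        ],
    )
    return response.choices[0].message.content

print(score_essay("The cause of the war was ..."))
```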
Older robo graders
Earlier versions of automated essay graders have had higher rates of accuracy. But they were expensive and time-consuming to create because scientists had to train the computer with hundreds of human-graded essays for each essay question. That’s economically feasible only in limited situations, such as for a standardized test, where thousands of students answer the same essay question.
Earlier robo graders could also be gamed, once a student understood the features that the computer system was grading for. In some cases, nonsense essays received high marks if fancy vocabulary words were sprinkled in them. ChatGPT isn’t grading for particular hallmarks, but is analyzing patterns in massive datasets of language. Tate says she hasn’t yet seen ChatGPT give a high score to a nonsense essay.
Tate expects ChatGPT’s grading accuracy to improve rapidly as new versions are released. Already, the research team has detected that the newer 4.0 version, which requires a paid subscription, is scoring more accurately than the free 3.5 version. Tate suspects that small tweaks to the grading instructions, or prompts, given to ChatGPT could improve existing versions. She is interested in testing whether ChatGPT’s scoring could become more reliable if a teacher trained it with just a few, perhaps five, sample essays that she has already graded. “Your average teacher might be willing to do that,” said Tate.
Many ed tech startups, and even well-known vendors of educational materials, are now marketing new AI essay robo graders to schools. Many of them are powered under the hood by ChatGPT or another large language model. One thing I learned from this study is that accuracy rates can be reported in ways that make the new AI graders seem more accurate than they are. Tate’s team calculated that, at the population level, there was no difference between human and AI scores: ChatGPT can already reliably tell you the average essay score in a school or, say, in the state of California.
Questions for AI vendors
At this point, it is not as accurate in scoring an individual student, and a teacher wants to know exactly how each student is doing. Tate advises teachers and school leaders who are considering using an AI essay grader to ask specific questions about accuracy rates at the student level: What is the rate of exact agreement between the AI grader and a human rater on each essay? How often are they within one point of each other?
The next step in Tate’s research is to study whether student writing improves after having an essay graded by ChatGPT. She’d like teachers to try using ChatGPT to score a first draft and then see if it encourages revisions, which are critical for improving writing. Tate thinks teachers could make it “almost like a game: how do I get my score up?”
Of course, it’s unclear if grades alone, without concrete feedback or suggestions for improvement, will motivate students to make revisions. Students may be discouraged by a low score from ChatGPT and give up. Many students might ignore a machine grade and only want to deal with a human they know. Still, Tate says some students are too scared to show their writing to a teacher until it’s in decent shape, and seeing their score improve on ChatGPT might be just the kind of positive feedback they need.
“We know that a lot of students aren’t doing any revision,” said Tate. “If we can get them to look at their paper again, that is already a win.”
That does give me hope, but I’m also worried that kids will just ask ChatGPT to write the whole essay for them in the first place.
[Photo: Four meta-analyses conclude that it’s more effective to teach phonemic awareness with letters, not as an oral-only exercise. Credit: Allison Shelley for EDU]
Educators around the country have embraced the “science of reading” in their classrooms, but that doesn’t mean there’s a truce in the reading wars. In fact, controversies are emerging about an important but less understood aspect of learning to read: phonemic awareness.
That’s the technical name for showing children how to break down words into their component letter sounds and then fuse the sounds together. In a phonemic awareness lesson, a teacher might ask how many sounds are in the word cat. The answer is three: “k,” “a,” and “t.” Then the class blends the sounds back into the familiar-sounding word: from “kuh-aah-tuh” to “kat.” The 26 letters of the English alphabet produce 44 phonemes, which include unique sounds made from combinations of letters, such as “ch” and “oo.”
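As a toy illustration of segmenting and blending, here is a short Python sketch. The mini-lexicon and phoneme spellings are simplified stand-ins, since real English maps 26 letters onto 44 phonemes in irregular ways:

```python
# Toy model of a phonemic awareness exercise: segment a word into
# sounds, then blend the sounds back together. The lexicon below is
# a simplified stand-in, not a real phonemic transcription system.
LEXICON = {
    "cat":  ["k", "a", "t"],
    "ship": ["sh", "i", "p"],  # "sh" is one phoneme spelled with two letters
}

def segment(word):
    """Break a word into its component phonemes."""
    return LEXICON[word]

def blend(phonemes):
    """Fuse phonemes back into a spoken word."""
    return "".join(phonemes)

print(len(segment("cat")))     # 3 -- three sounds in "cat"
print(blend(["k", "a", "t"]))  # kat
```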
Many schools have purchased scripted oral phonemic awareness lessons that do not include the visual display of letters. The oral lessons are popular because they are easy to teach and fun for students. And that’s the source of the current debate. Should kids in kindergarten or first grade be spending so much time on sounds without understanding how those sounds correspond to letters?
A new meta-analysis confirms that the answer is no. In January 2024, five researchers from Texas A&M University published their findings online in the journal Scientific Studies of Reading. They found that struggling readers, ages 4 to 6, no longer benefited after 10.2 hours of auditory instruction in small group or tutoring sessions, but continued to make progress if visual displays of the letters were combined with the sounds. That means that instead of just asking students to repeat sounds, a teacher might hold up cards with the letters C, A and T printed on them as students isolate and blend the sounds.
Meta-analyses sweep up all the best research on a topic and use statistics to tell us where the preponderance of the evidence lies. This newest 2024 synthesis follows three previous meta-analyses on phonemic awareness in the past 25 years. While there are sometimes shortcomings in the underlying studies, the conclusions from all the phonemic meta-analyses appear to be pointing in the same direction.
“If you teach phonemic awareness, students will learn phonemic awareness,” which isn’t the goal, said Tiffany Peltier, a learning scientist who consults on literacy training for teachers at NWEA, an assessment company. “If you teach blending and segmenting using letters, students are learning to read and spell.”
Phonemic awareness has a complicated history. In the 1970s, researchers discovered that good readers also had a good sense of the sounds that constitute words. This sound awareness helps students map the written alphabet to the sounds, an important step in learning to read and write. Researchers proved that these auditory skills could be taught and early studies showed that they could be taught as a purely oral exercise without letters.
But science evolved. In 2000, the National Reading Panel outlined the five pillars of evidence-based reading instruction: phonemic awareness, phonics, fluency, vocabulary and comprehension. This has come to be known as the science of reading. By then, more studies on phonemic awareness had been conducted and oral lessons alone were not as successful. The reading panel’s meta-analysis of 52 studies showed that phonemic awareness instruction was almost twice as effective when letters were presented along with the sounds.
Many schools ignored the reading panel’s recommendations and chose different approaches that didn’t systematically teach phonics or phonemic awareness. But as the science of reading grew in popularity in the past decade, phonemic awareness lessons also exploded. Teacher training programs in the science of reading emphasized the importance of phonemic awareness. Companies sold phonemic programs to schools and told teachers to teach it every day. Many of these lessons were auditory, including chants and songs without letters.
Twenty years after the reading panel’s report, a second meta-analysis, published in 2022 and drawing on newer studies, arrived at the same conclusion. Researchers from Baylor University analyzed over 130 studies and found twice the benefits for phonemic awareness when it was taught with letters. A third meta-analysis was presented at a poster session of the 2022 annual meeting of the Society for the Scientific Study of Reading. It also found that instruction was more effective when sounds and letters were combined.
On the surface, adding letters to sounds might seem identical to teaching phonics. But some reading experts say phonemic awareness with letters still emphasizes the auditory skills of segmenting words into sounds and blending the sounds together. The visual display of the letter is almost like a subliminal teaching of phonics without explicitly saying, “This alphabetic symbol ‘a’ makes the sound ‘ah’.” Others explain that there isn’t a bright line between phonemic awareness and phonics and they can be taught in tandem.
The authors of the latest 2024 meta-analysis had hoped to give teachers more guidance on how much classroom time to invest in phonemic awareness. But unfortunately, the classroom studies they found didn’t keep track of the minutes. The researchers were left with only 16 high-quality studies, all of which were interventions with struggling students. These were small group or individual tutoring sessions on top of whatever phonemic awareness lessons children may also have been receiving in their regular classrooms, which was not documented. So it’s impossible to say from this meta-analysis exactly how much sound training students need.
The lead author of the 2024 meta-analysis, Florina Erbeli, an education psychologist at Texas A&M, said that the 10.2 hours number in her paper isn’t a “magic number.” It’s just an average of the results of the 16 studies that met her criteria for being included in the meta-analysis. The right amount of phonemic awareness might be more or less, depending on the child.
Erbeli said the bigger point for teachers to understand is that there are diminishing returns to auditory-only instruction and that students learn much more when auditory skills are combined with visible letters.
I corresponded with Heggerty, the market leader in phoneme awareness lessons, which says its programs are in 70 percent of U.S. school districts. The company acknowledged that the science of reading has evolved and that’s why it revised its phonemic awareness program in 2022 to incorporate letters and introduced a new program in 2023 to pair it with phonics. The company says it is working with outside researchers to keep improving the instructional materials it sells to schools. Because many schools cannot afford to buy a new instructional program, Heggerty says it also explains how teachers can modify older auditory lessons.
The company still recommends that teachers spend eight to 12 minutes a day on phonemic awareness through the end of first grade. This recommendation contrasts with the advice of many reading researchers who say the average kid doesn’t need this much. Many researchers say that phonemic awareness continues to develop automatically as the child’s reading skills improve without advanced auditory training.
NWEA literacy consultant Peltier, whom I quoted earlier, suggests that phonemic awareness can be tapered off by the fall of first grade. More phonemic awareness isn’t necessarily harmful, but there’s only so much instructional time in the day. She thinks that precious minutes currently devoted to oral phonemic awareness could be better spent on phonics, building vocabulary and content knowledge through reading books aloud, classroom discussions and writing.
Another developer of a phonemic awareness program aimed at older, struggling readers is David Kilpatrick, professor emeritus at the State University of New York at Cortland. He told me that five minutes a day might be enough for the average student in a classroom, but some struggling students need a lot more. Kilpatrick disagrees with the conclusions of the meta-analyses because they lump different types of students together. He says severely dyslexic students need more auditory training: extra time is needed for advanced auditory work that helps these students build long-term memories, he said, and the meta-analyses didn’t measure that outcome.
Another reading expert, Susan Brady, professor emerita at the University of Rhode Island, concurs that some of the more advanced manipulations can help some students. Moving a sound in and out of a word can heighten awareness of a consonant cluster, such as taking the “r” out of the word “first” to get “fist,” and then inserting it back in again. But she says this kind of sound subtraction should only be done with visible letters. Doing all the sound manipulations in your head is too taxing for young children, she said.
Brady’s concern is the misunderstanding that teachers need to teach all the phonemes before moving on to phonics. It’s not a precursor or a prerequisite to reading and writing, she says. Instead, sound training should be taught at the same time as new groups of letters are introduced. “The letters reinforce the phoneme awareness and the phoneme awareness reinforces the letters,” said Brady, speaking at a 2022 teacher training session. She said that researchers and teacher trainers need to help educators shift to integrating letters into their early reading instruction. “It’s going to take a while to penetrate the belief system that’s out there,” she said.
I once thought that the reading wars were about whether to teach phonics. But there are fierce debates even among those who support a phonics-heavy science of reading. I’ve come to understand that the research hasn’t yet answered all our questions about the best way to teach all the steps. Schools might be over-teaching phonemic awareness. And children with dyslexia might need more than other children. More importantly, the science of reading is the same as any other scientific inquiry. Every new answer may also raise new questions as we get closer to the truth.
This story about phonemic awareness was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Proof Points newsletter.
The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.