
As technology has grown more sophisticated, algorithms have slowly crept into more and more operations on college campuses. 

Take admissions, where some colleges are using artificial intelligence to help them decide whether to admit a student. While that practice is still somewhat rare, four-year institutions are more commonly using algorithms to help with another admissions decision — how much aid to offer already admitted students. 

If an institution has limited resources, education experts say, an algorithm can help optimize how aid is distributed. Others say the practice could cause issues for students and even open institutions up to potential legal risk. 

But both skeptics and proponents agree that using an algorithm successfully — and fairly — depends on institutions and vendors being thoughtful.

What is an enrollment algorithm?

Enrollment management and aid algorithms are essentially tools that predict the likelihood that a student will enroll in an institution after being offered admission. But admissions teams can also move the needle on that likelihood — by doing things like offering scholarships and other aid packages. 

“The concept is to award financial aid in a way that results in the maximum total amount of net tuition revenue for the institution,” said Nathan Mueller, principal at EAB, an education consulting firm, and architect of the company’s financial aid optimization work. 

Enrollment goes up as institutions offer more scholarship aid, but net tuition revenue per student goes down.

“What we’re helping them find is the place in between, where they’re giving the best mix of institutional financial aid to raise enrollment to the point where if they gave one more dollar, even though they would increase enrollment, they would start losing that institutional revenue,” Mueller said.  

At the individual college level, that process means determining an admitted student’s likelihood of attending and how sensitive they will be to changes in price. 
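To make the mechanics concrete, here is a minimal sketch of that calculation for a single admitted student. It is not EAB's model; the sticker price, the logistic demand curve and its coefficients are illustrative assumptions standing in for quantities a real system would fit from historical admit-and-enroll data.

```python
import numpy as np

STICKER_TUITION = 40_000  # hypothetical sticker price, not from the article

def enroll_probability(aid, base_logit=-1.5, aid_sensitivity=1.2e-4):
    """Toy demand curve: the likelihood of enrolling rises with the aid offered.

    base_logit and aid_sensitivity stand in for a student's modeled baseline
    interest and price sensitivity, which a real system would estimate from
    historical data rather than assume.
    """
    return 1.0 / (1.0 + np.exp(-(base_logit + aid_sensitivity * aid)))

def expected_net_revenue(aid):
    """Expected net tuition revenue = P(enroll) * (tuition - aid)."""
    return enroll_probability(aid) * (STICKER_TUITION - aid)

# Sweep candidate awards and pick the one that maximizes expected revenue.
aid_grid = np.arange(0, STICKER_TUITION + 1, 500)
best = aid_grid[np.argmax(expected_net_revenue(aid_grid))]
print(f"Revenue-maximizing award: ${best:,} "
      f"(enrollment probability {enroll_probability(best):.2f})")
```

The expected-revenue curve peaks at exactly the point Mueller describes: beyond that award, each additional dollar of aid still raises the chance of enrollment but costs the institution more than it returns.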

The inputs for each algorithm can differ, depending on an institution’s goals. 

Algorithms can, for example, take into account applicant information, such as grades, test scores, location and financial data. They may also look at an applicant’s demonstrated interest in a college — whether they have visited campus, interacted with an admissions officer or answered optional essay prompts.

EAB counsels its own clients not to use those interest markers in aid determinations.

“We do look at some of those things, as ways of understanding how engaged a student is and understanding their price sensitivity,” Mueller said. “It absolutely has predictive value, but from our vantage point it crosses into the area of something that’s really not an appropriate mechanism to determine how much aid a student receives.”
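As a rough illustration of how such inputs might feed an enrollment-likelihood model, here is a hypothetical sketch using logistic regression; the field names, the toy data and the choice of scikit-learn are assumptions for the sketch, not any vendor's actual schema. Consistent with EAB's stated practice, the interest markers here would inform the likelihood estimate rather than the aid amount itself.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical admit records; all columns and values are illustrative.
history = pd.DataFrame({
    "gpa":            [3.9, 3.2, 3.6, 2.8, 3.7, 3.1],
    "test_score":     [1450, 1180, 1320, 1050, 1400, 1210],
    "efc":            [8000, 45000, 20000, 60000, 5000, 30000],  # financial data
    "in_state":       [1, 0, 1, 0, 0, 1],                        # location
    "campus_visit":   [1, 0, 1, 0, 1, 0],   # demonstrated-interest markers:
    "optional_essay": [1, 0, 0, 0, 1, 1],   # predictive, but kept out of aid formulas
    "enrolled":       [1, 0, 1, 0, 1, 0],   # outcome the model learns to predict
})

X = history.drop(columns="enrolled")
y = history["enrolled"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Predicted enrollment likelihood for a new admit with the same fields.
new_admit = pd.DataFrame([[3.5, 1300, 15000, 1, 1, 0]], columns=X.columns)
print(model.predict_proba(new_admit)[0, 1])
```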

In the past, Mueller said, many colleges committed to cover 100% of a student’s demonstrated need. But in the early ’90s, Congress changed how need analyses were conducted — making many families appear needier — and reduced funding for Pell Grants. As a result, fewer colleges believed they could afford to make that pledge, he said.

While some institutions do not use algorithms to help determine aid, their goals are often similar to those of colleges that do, Mueller said. Today EAB works with about 200 clients — most of them private colleges — on financial aid optimization.

Careful consideration

Vendors emphasize that the algorithms they offer aren’t just mathematical models that run and spit out a result to be followed exactly. They allow an admissions team to try out different aid strategies and see how those might change things like the diversity, gender balance and academic profile of their incoming class. 
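A stylized version of that what-if workflow might look like the sketch below; the admitted-pool data, the response assumption and the class-mix metrics are all hypothetical stand-ins for what an admissions team would see in a real tool.

```python
import pandas as pd

# Hypothetical admitted pool; enroll_prob would come from a model like the one above.
pool = pd.DataFrame({
    "enroll_prob":      [0.20, 0.45, 0.60, 0.35, 0.50],
    "aid_offer":        [25_000, 15_000, 10_000, 20_000, 12_000],
    "underrepresented": [1, 0, 0, 1, 1],
    "female":           [1, 1, 0, 0, 1],
    "gpa":              [3.8, 3.4, 3.1, 3.6, 3.5],
})

def project_class(pool):
    """Summarize the expected incoming class under a given aid strategy."""
    w = pool["enroll_prob"]  # expected-value weights for each admit
    return {
        "expected_enrollment":  w.sum(),
        "expected_aid_spend":   (w * pool["aid_offer"]).sum(),
        "pct_underrepresented": (w * pool["underrepresented"]).sum() / w.sum(),
        "pct_female":           (w * pool["female"]).sum() / w.sum(),
        "avg_gpa":              (w * pool["gpa"]).sum() / w.sum(),
    }

baseline = project_class(pool)

# Scenario: add $2,000 to every offer, assuming (purely for the sketch) that
# each extra $1,000 of aid lifts enrollment probability by two points.
scenario = pool.copy()
scenario["aid_offer"] += 2_000
scenario["enroll_prob"] = (scenario["enroll_prob"] + 0.04).clip(upper=1.0)

print(baseline)
print(project_class(scenario))
```

The point is that the model's output feeds human deliberation: the team compares projected classes under different strategies and chooses one, rather than executing whatever the optimizer suggests.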

“The criticisms about algorithms or about artificial intelligence specifically have been around this idea that they are sort of running loose on their own and don’t have overriding guardrails that reference institutional philosophies or strategic goals,” Mueller said. “We would never want anyone to just follow a mathematical exercise without any consideration of the other key strategic aspects.”

But Alex Engler, a senior fellow at The Brookings Institution, said he’s skeptical about whether institutions are thinking carefully enough about how they’re using these tools.

Because algorithms are frequently trained on data that reflects past human decision-making, they often inherit human bias and can produce different outcomes for different subgroups.

Lilah Burke

