Dozens of local middle and high school students are being honored in the state 2026 Scholastic Art & Writing Awards for their artistic and literary work.
The annual awards celebrate artists, photographers and writers in grades 7-12 across the nation. This year alone, more than 12,000 entries were submitted to the Massachusetts contest.
ABHA, Saudi Arabia — From the air, Abha’s mountains emerge as a shock of emerald green rising from a sea of sand. Terra firma brings other surprises: a bracing wind that has me grabbing for a jacket — a piece of clothing all but ignored in other parts of Saudi Arabia.
Indeed, so much of Abha, the capital of the southwestern province of Asir, seems a world away — and two dozen degrees cooler — from the scorching desert that dominates Western notions of the kingdom.
I’m here as a tourist — and Saudi Arabia hopes for many more. The government is spending nearly $1 trillion to make attractive what, just over a decade ago, was one of the most tourist-averse countries on earth.
If you’ve read anything about tourism in Saudi Arabia, you’ve probably seen mention of Vision 2030, the all-out diversification plan to reduce the kingdom’s reliance on oil; Neom, the sci-fi-esque desert metropolis with plans for an artificial moon and flying cars; or the Red Sea Project, which intends to turn a 92-island archipelago off the country’s pristine Red Sea coast into a network of 50 luxury hotels and about 1,000 residential units.
Those two flagship projects were heavily featured during President Trump’s visit to Riyadh in May, which saw Saudi Crown Prince Mohammed bin Salman — Vision 2030’s architect — guide him to a hall with elaborate mock-ups of the finished product.
A man sits in an old fort on Mt. Qais, one of the verdant areas in southwestern Saudi Arabia.
(Tasneem Alsultan)
Abha and Asir weren’t in the prince’s presentation, but they are nevertheless part of the tourism transformation, though for now they offer more grounded and arguably more authentic pleasures — the primary reasons why I chose to come here. (The other, less whimsical reason is that I wasn’t sure I could convince my editors to OK a $2,500-a-night private “dune villa” at the St. Regis Red Sea for “journalistic purposes.”)
Perched at almost 7,500 feet above sea level, Abha is occasionally nicknamed “Lady of the Fog” or “the Bride of the Mountain” by Saudis.
Both titles seemed apt on the day I arrived, and, as fog wafted over a nearby summit, I visited Art Street, a park with theaters, music festivals, restaurants and cafes. Lilac jacaranda trees were in full bloom. Later, I took a 20-minute drive to Al Sahab Park, a short distance outside Abha, crowded with people admiring the evening mist shrouding Jabal Soudah, the country’s highest peak at 9,892 feet.
“People come here to touch the clouds,” said Hussein al-Lamy, a 42-year-old pharmaceutical company employee who lives two hours away. He smiled, taking in the Harley bikers parked near the cliffs and the men and women strolling nearby sporting Asir’s traditional garlands made of orange marigold, dill and artemisia, a gray-green plant similar to sage.
“I left my kids and wife at home for a few days’ visit here,” he said. “It’s a good place to clear the mind.”
Men gather for a wedding in Abha, the capital of Saudi Arabia’s Asir province.
(Tasneem Alsultan)
The next morning, I took a walk through Souq Al Thulatha, a central shopping thoroughfare that despite its name (which in Arabic means Tuesday Market) is open every day of the week.
One stall sold slices of mangoes brought in from Jazan, the fertile southern province famous for its tropical fruits, wheat and coffee; others sold raisins, spices, nuts and gourmet honey from Yemen. Traffic was still light, but vendors told me that at the height of the summer season — when many Saudis flee the fry-an-egg-on-your-hood heat of Riyadh and Jeddah to Abha — you would barely have room to stand.
In its drive to become a must-see destination, the kingdom is ecumenical about its audience, hoping to attract not only Saudis who in the past would travel elsewhere — and who spent $27 billion on international travel in 2024, according to government figures — but also international visitors.
There are signs it’s working: An International Monetary Fund report noted that annual tourists exceeded the Vision 2030 target of 100 million seven years ahead of schedule.
Work is already underway on Abha’s touristic makeover. All over the city, you see signs advertising projects sponsored by the Public Investment Fund, the oil-backed sovereign wealth fund overseeing the gargantuan investments in the kingdom’s no-holds-barred metamorphosis. Construction will soon begin on upgrading the airport.
Locals pose at a mural in one of the many parks in Abha, which has been working to attract more international tourists.
(Tasneem Alsultan)
Beyond the city limits, the fund is planning six tourist districts in the region’s choicest spots; they’ll leverage the area’s majestic vistas to focus on wellness spas, yoga pavilions, meditation retreats, golf courses and glamping pods, according to promotional materials.
“We’re in a transitional phase for the moment, so there’s construction and it can be a bit inconvenient, but things are already getting better,” said Mohammad Hassan, 36, owner of a cafe in Abha called Bard wa Sahab (Cold and Clouds), near an Instagram-ready mountaintop vantage point.
Hassan acknowledged that the spate of development was likely to increase competition and had already spurred a rise in rents. But he appeared happy about what the changes will mean for his business.
“Before, Abha mostly got Saudi visitors or people from the [Persian] Gulf,” he said. “We’re already seeing more foreigners, but the government’s plans will make Abha known internationally.”
Other locals grumble that the construction has made Asir’s most beautiful areas off-limits, and that the focus on luxury will change the freewheeling character of the region.
“We would go to the mountains and camp for days. Authorities have stopped all that, and of course we won’t be able to do it when the resorts open,” said Nasser, a municipal worker who gave only his first name for privacy reasons.
“Maybe all that the government is doing will make it better, but it’s impossible for the old way of life we had here to return,” he said.
Another potential break with the past is the possibility of allowing alcohol in the country. But crossing that Rubicon is no easy decision for authorities all too aware of the kingdom’s status as the birthplace of Islam, which bans alcohol and takes a dim view of those who drink and sell it.
Rijal Almaa, an ancient village about 15 miles from Abha, is a popular destination for tourists in Saudi Arabia’s Asir province.
(Tasneem Alsultan)
Nevertheless, many believe it’s coming. Staff working on the construction designs for the Red Sea Project say hotel rooms in various resorts will be equipped with elaborate minibars. And the Four Seasons in Riyadh has opened a tonic bar — but with no booze — that asks you to “delight in a symphony of handcrafted cocktails meticulously prepared to elevate your senses.”
Despite the hundreds of billions Saudi Arabia has spent, there are skeptics. They point to depressed oil prices that mean the government can’t balance its budget or keep up with Vision 2030’s ballooning costs. A few projects have already stalled; architects working on the resorts say that layoffs have spiked and that the scope of their work has been reduced. Other flagship projects, including the Line, have seen their once-fantastical goals grounded by the realities of physics and finance.
Whatever the fate of Vision 2030’s grander plans, Abha’s charms await.
The Rijal Almaa heritage village, located in Asir province, is more than 900 years old.
(Tasneem Alsultan)
One afternoon, I decided to brave Jabal Soudah, figuring a short hike was in order. I started down a barely there path with a vague plan to soon turn back. Indeed, I was so ill-equipped (with inappropriate walking shoes, a tiny bottle of water and a massive cold) that I should have done so. But I kept going, curious to see what the next bend would bring.
Four hours later, sunburned and more winded than I like to admit, I reached a hamlet where I later hitched a ride back to the city.
But before I found the ride, I ignored the exhaustion and lingered for a moment in this corner of a country more known for desert than the dense forest I had crossed. Before me, the mountain range extended somewhere beyond the haze. The fog coalesced around the summits, with sunset’s final rays transforming them into a gracefully undulating landscape of golden gauze.
So Samsung made a “Vision Pro Lite.” That was my immediate takeaway after this week’s debut of the Galaxy XR, the first Android XR device to hit the market. While Samsung deserves credit for offering something close to the Vision Pro for nearly half the price, an $1,800 headset still won’t get mainstream consumers rushing out the door to experience the wonders of mixed reality. And with the limited amount of content in Android XR at the moment, the Galaxy XR is in the same position as the Vision Pro: It’s just a well-polished developer kit.
The only logical reason to buy a Galaxy XR would be to test out apps for Android XR. If you just want to experience VR and dabble in a bit of augmented reality, you’re better off spending that money on a gaming laptop and the excellent $500 Meta Quest 3. (The Meta Quest Pro, the company’s first high-end mixed reality device, was unceremoniously killed after launching at an eye-watering $1,500.)
But even for developers, the Galaxy XR feels like it’s lacking, well, vision. Samsung has done an admirable job of copying almost every aspect of the Vision Pro: the sleek ski goggle design, dual micro-OLED displays and hand gesture interaction powered by a slew of cameras and sensors. But while Apple positioned the Vision Pro as its first stab at spatial computing, an exciting new platform where we can use interactive apps in virtual space, Samsung and Google are basically just gunning to put Android on your face.
There aren’t many custom-built XR apps, aside from Google’s offerings like Maps and Photos. (Something that also reminds me of the dearth of real tablet apps on Android.) And the ability to view 360-degree videos on YouTube has been a staple of every VR headset for the last decade — it’s not exactly notable on something that costs $1,800. Samsung and Google also haven’t said much about how they plan to elevate XR content. At least Apple is attempting to push the industry forward with its 8K Immersive Videos, which look sharper and more realistic than low-res 360-degree content.
For the most part, it seems as if Google is treating Android XR as another way to force its Gemini AI on users. In its press release for the Galaxy XR, Samsung notes that it’s “introducing a new category of AI-native devices designed to deliver immersive experiences in a form factor optimized for multimodal AI.”
…What?
In addition to being a crime against the English language, what the company is actually pitching is fairly simple: It’s just launching a headset that can access AI features via camera and voice inputs.
Who knows, maybe Gemini will make Android XR devices more capable down the line. But at the moment, all I’m seeing in the Galaxy XR is another Samsung device that’s shamelessly aping Apple, from the virtual avatars to specific pinch gestures. And Google’s history in VR and interactive content doesn’t inspire much hope about Android XR. Don’t forget how it completely abandoned Google Cardboard, the short-lived Daydream project and its hyped-up Stadia cloud service. Stadia’s death was particularly galling, since Google initially pitched it as a way to revolutionize the very world of gaming, only to let it fall on its face.
There’s no doubt that Samsung, Apple and Meta have a ton of work left ahead in the world of XR. Samsung is at least closer to delivering something under $1,000, and Meta also recently launched the $800 Ray-Ban Display. But price is only one part of the problem. Purpose is another issue entirely. After living with the Vision Pro since its debut, I can tell that Apple is at least thinking a bit more deeply about what it’s like to wear a computer on your face. Just look at the upgrades it’s made around ultra-wide Mac mirroring, or the way Spatial Personas make it feel as if you’re working alongside other people. With Android XR, Google seems to just be making a more open Vision Pro.
Honestly, it’s unclear if normal users will ever want to use any sort of XR headset regularly, no matter how cheap they get. The experience of making these headsets could help Google, Apple and Meta develop future AR glasses, or eyewear that offers some sort of XR experience (Samsung already has something in the works with Warby Parker and Gentle Monster). But while Apple and Meta have broken new ground in XR, Google and Samsung just seem to be following in their footsteps.
Apple’s Vision Pro was meant to usher in a new era for headsets. However, its high price and somewhat limited utility resulted in what may be the company’s biggest flop in years. Now it’s time for Samsung to give things a go with the Galaxy XR. It’s a fresh take on modern mixed reality goggles, developed through deep partnerships with Qualcomm and Google, and it attempts to address some of the Vision Pro’s biggest shortcomings.
The hardware
While Apple’s and Samsung’s headsets have a lot of similarities (like their basic design and support for features such as hand and eye tracking), there are also some very important differences. First, at $1,800, the Galaxy XR is essentially half the price of the Vision Pro (including the new M5-powered model). Second, instead of Apple’s homegrown OS, Samsung’s headset is the first to run Google’s new Android XR platform, which carries over a lot of familiar elements from its mobile counterpart but places a bigger emphasis on AI and Gemini-based voice controls. And third, because Samsung relied more on partners like Google and Qualcomm, the Galaxy XR feels like it’s built around a larger, more open ecosystem that plays nicely with a wider range of third-party devices and software.
The Galaxy XR fundamentally doesn’t look that much different from the Vision Pro. It features a large visor in front with an assortment of 13 different exterior sensors to support inside-out tracking, passthrough vision and hand recognition. There are some additional sensors inside for eye and face tracking. There’s also a connector for the wire that leads to its external clip-on battery pack alongside built-in speakers with spatial audio. The one big departure is that unlike the Vision Pro, the Galaxy XR doesn’t have an outward-facing display, so it won’t be able to project your face onto the outside of the headset, which is just fine by me.
Sam Rutherford for Engadget
However, the devil is in the details because while the original Vision Pro weighed between 600 and 650 grams (around 1.3 to 1.4 pounds) depending on the configuration (not including its battery pack), the Galaxy XR is significantly lighter at 545 grams (1.2 pounds). And that’s before you consider the new M5 Vision Pro, which has somehow gone backwards by being even heavier at 750-800 grams (around 1.6 pounds). Furthermore, it seems Samsung learned a lot from its rivals by including a much larger and thicker head cushion that helps distribute the weight of the headset more evenly. Granted, during a longer session, I still noticed a bit of pressure and felt relief after taking off the Galaxy XR, but it’s nothing like the Vision Pro, which in my experience gets uncomfortable almost immediately. Finally, around back, there’s a simple strap with a knob that you can twist to tighten or loosen the headband as necessary. So even without extra support running across the top of your head, getting in and out of the Galaxy XR is much easier and comfier than the Vision Pro.
Sam Rutherford for Engadget
On the inside, the Galaxy XR is powered by Qualcomm’s Snapdragon XR2+ Gen 2 chip with dual micro-OLED displays that deliver 4K resolution (3,552 x 3,840) to each eye at up to 90Hz. I wish Samsung were able to go up to a 120Hz refresh rate like on the Vision Pro, but considering the Galaxy XR’s slightly higher overall resolution, I’m not that bothered. And I must say, the image quality from this headset is seriously sharp. It’s even better than Apple’s goggles and it might be the best I’ve ever used, at least outside of $10,000+ enterprise-only setups. Once again, when you consider that this thing costs half the price of a Vision Pro, this headset feels like a real accomplishment by Samsung, to the point where I wouldn’t be surprised if the company is losing money on every unit it sells.
In terms of longevity, Samsung says that for general use the Galaxy XR should last around two hours. If you’re only watching videos though, that figure is more like two and a half. Thankfully, if you do need to be in mixed reality for longer, you can charge the headset while it’s being used. As for security, the Galaxy XR uses iris recognition to skip traditional passwords, which is nice.
The platform: Android XR
Sometimes, trying out a new software platform can be a little jarring. But that’s not really the case for Android XR, which shouldn’t present much of a learning curve for anyone who has used other headsets or Google’s ubiquitous mobile OS. After putting the goggles on, you can summon a home menu with an app launcher by facing your palm up and touching your index finger and thumb together. From there, you can open apps and menus by moving your hands and pinching icons or rearranging virtual windows by grabbing the anchor point along the bottom and putting them where you want.
Sam Rutherford for Engadget
Notably, while there is a growing number of new apps made specifically for XR, you still get access to all of your standard Android titles. Those include Google Photos, Google Maps and YouTube, all of which I got a chance to play around with during a 25-minute demo. In Photos, you can browse your pictures normally. However, to take advantage of the Galaxy XR’s hardware, Google created a feature that allows the app to convert standard flat images (with help from the cloud) into immersive ones. While the effect isn’t true 3D, it adds distinct foreground, midground and background layers to images in a way that makes viewing your photo roll just a bit more interesting.
In Maps, you start out with a view of the world before using hand gestures to move and zoom in wherever you want or voice commands to laser in on a specific location. The neat new trick for this app is that if you find bubbles over things like restaurants and stores, you can click those to be transported inside those businesses, where Android XR will stitch together 2D photos to create a simulated 3D environment that you can move and walk around in. Granted, this doesn’t have a ton of practical use for most folks unless you want to take a virtual tour of something like a wedding venue. But, the tech is impressive nonetheless.
Sam Rutherford for Engadget
Finally, in the YouTube app, the Galaxy XR did a great job of making standard 360 videos look even better. While quality will always depend on the gear that captured the content, viewing spatial clips was a great way to show off its resolution and image quality. Google says it will also put a new tab on the app to make finding 360 videos easier, though you can always watch the billions of standard flat videos as well.
Interestingly, you can use and navigate the Galaxy XR entirely with hand gestures, but voice commands (via Gemini) are also a major part of the Android XR platform. Because the goggles sit on your head, unlike with mobile devices, there’s no need to use a wake word every time you want to do something. You just talk and Gemini listens (though you can choose to disable this behavior if you prefer), so this makes voice interactions feel a lot more natural. Because Gemini can also do things like adjust settings or organize all the apps you have open, in addition to answering questions, it feels like Google is starting to deliver on some of those Star Trek moments where you can simply ask the computer to do something and it just happens. Yes, it’s still very early, but as a platform, Android XR feels much more like a virtual playground than VisionOS does at the moment.
Other features
Sam Rutherford for Engadget
While I didn’t get to test these out myself, there are some other important features worth mentioning. In addition to apps, you can also play your standard selection of Android games like Stardew Valley or connect the headset to your PC (like with Steam Link) to play full desktop titles. Furthermore, I was told that the Galaxy XR can be tethered to a computer and used like a traditional VR headset. And while Samsung is making optional wireless controllers for the Galaxy XR (and a big carrying case), you may not need them at all as you’ll also have the ability to pair the goggles with typical Bluetooth-based gamepads along with wireless mice and keyboards.
Google also says it’s working on a new system called Likenesses that can create personalized avatars for use in video calls and meetings, using data from interior sensors to deliver more realistic expressions. Additionally, you’ll be able to use tools like Veo 3 to make AI-generated videos while providing prompts with your voice. But this is just scratching the surface of the Galaxy XR’s capabilities, and I want to use this thing more before offering a final verdict.
Early thoughts
Sam Rutherford for Engadget
In many ways, the Galaxy XR looks and feels like a flagship mixed reality headset in the same vein as the Vision Pro, but for the Android crowd (and Windows users to some extent as well). On top of that, Google has done some interesting things with Android XR to make it feel like there’s a much wider range of content and software to view and use. The addition of a dedicated AI assistant in Gemini and voice controls feels much more impactful on goggles than on a phone, because you can’t always count on having physical inputs like a mouse or keyboard. And with the Galaxy XR being half the price of the Vision Pro, Samsung and Google have done a lot to address some of the most glaring issues with its Apple rival.
In case the price drop wasn’t enough, it feels like all the companies involved are doing as much as possible to sweeten the deal. I actually started laughing when I first heard all the discounts and free subscriptions that come with the headset. That’s because in addition to the goggles themselves, every Galaxy XR will come with what’s being called the Explorer Pack: 12 months of access to Google AI Pro, 12 months of YouTube Premium (which itself includes YouTube Music), 12 months of Google Play Pass, 12 months of NBA League Pass and a bundle of other custom XR content and apps. So on top of a slick design, top-tier optics and a new platform, Google and Samsung are basically tossing a kitchen sink of apps and memberships in with the headset.
Sam Rutherford for Engadget
My only reservation is that when it comes to mass adoption, I think smartglasses have supplanted headsets as the next big mainstream play. Granted, there is a lot of technology and software shared between both categories of devices (Google has already teased upcoming Android XR smartglasses) that should allow Samsung or Google to pivot more easily down the line. But the idea that in the future there will be a headset in every home seems less likely every day. Still, as a showcase for the potential of mixed reality and high-end optics, the Galaxy XR is an exciting piece of tech.
The Samsung Galaxy XR is available now for $1,800 on Samsung.com.
Apple just announced its fall slate of devices powered by its new M5 chip: a 14-inch MacBook Pro, iPad Pro and revamped Vision Pro. In this episode, Devindra and Sam Rutherford dive into what’s actually new this time around. (Spoiler: It’s really all about the new GPU.) Also, Sam goes deep on his review of the ROG Xbox Ally X, Microsoft’s first stab at a portable “Xbox.”
Subscribe!
Topics
Apple refreshes the MacBook Pro, Vision Pro and iPad Pro with M5 chips – 1:24
Sam Rutherford’s review of the ASUS ROG Xbox Ally X – 18:45
Microsoft makes big promises with Copilot Voice, can it follow through? – 39:00
OpenAI’s Sora app reaches 1M downloads in less than 5 days, faster than ChatGPT – 50:42
Sam Altman announces you’ll be able to sext with ChatGPT starting in December – 54:00
Pop culture picks – 1:09:41
Credits
Hosts: Devindra Hardawar and Sam Rutherford
Producer: Ben Ellman
Music: Dale North and Terrence O’Brien
The first trailer for VisionQuest was shown at New York Comic Con.
VisionQuest, sometimes referred to as Vision Quest, is the forthcoming Marvel Television series arriving on Disney+ next year. Serving as a spin-off to WandaVision, the show is created by Terry Matalas and will once again see Paul Bettany reprise his role as Vision.
What happens in the VisionQuest trailer?
While the VisionQuest trailer that was shown at New York Comic Con hasn’t been released online, Variety has a description of the footage, which offers some insight as to what MCU fans can expect from the new show.
“The trailer showed Bettany back as White Vision from the ending of WandaVision, plus a regular-looking human version,” the article reads. “There were also human versions of Ultron, who was voiced by James Spader in the second Avengers movie, and the AI programs Jarvis, Friday, and Edith. At the end of the trailer, there was a brief shot of an adult Tommy, Vision and Wanda’s son, who appeared as a child in WandaVision.
“Vision Quest was described as the third part of a trilogy that included WandaVision and Agatha All Along. In the trailer, Bettany walks up to a white mansion and is greeted by human servants who are really just recreated AI programs. They include Jarvis, who was Tony Stark’s AI program that later became Vision; Friday, who replaced Jarvis and was the AI assistant to Stark and Spider-Man; and Edith, the AI program in Stark’s sunglasses from Spider-Man: Far From Home.”
In addition to Bettany and Spader, the cast of Marvel Studios’ VisionQuest includes Todd Stashwick as Paladin, Ruaridh Mollica as Thomas Shepherd, T’Nia Miller as Jocasta, Emily Hampshire as Edith, Orla Brady as Friday, Faran Tahir as Raza, and more.
Marvel’s VisionQuest will premiere on the Disney+ streaming service in 2026, though an exact premiere date has not yet been set.
Originally reported by Brandon Schreur at SuperHeroHype.
The stats don’t lie: after age 65, most people will struggle to focus visually on close-up objects. You might have seen this among your friends and relatives or even experienced it yourself, holding books, magazines, or your phone farther away from your face to try to bring words and pictures into focus. Many of those affected start using reading glasses. But a new treatment could become available: eye drops.
This deterioration of vision is called presbyopia. It is not a disease but a natural, physiological change caused by aging—specifically by the loss of elasticity and flexibility of the crystalline lens at the front of the eye, which impairs the ability of the eye to change the curvature of the lens to bring objects into focus. This stiffening begins in middle age and tends to stabilize around age 65. For people with shortsightedness, or myopia, who struggle to see faraway objects clearly, the onset of presbyopia may at first lead to improved vision by compensating for their existing condition. For those with farsightedness, or hyperopia, the effects of presbyopia often present earlier than in the rest of the population.
Living with presbyopia can cause fatigue and headaches, and in rare cases double vision, but generally it isn’t something to be worried about. But correcting it can make daily activities easier and help maintain good quality of life. The classic means of correction are reading glasses, though in some cases people opt for eye surgery—either laser refractive surgery to reshape the cornea to compensate for the loss of flexibility of the lens or intraocular surgery to replace the lens with an artificial one. The latter is often proposed when there is also some clouding in the lens (a cataract).
But recently, researchers have been working on eye drops that, in different ways depending on the active ingredient used, improve near focus. Two types have been approved by the US Food and Drug Administration: one based on a substance called aceclidine, the other on pilocarpine.
Pilocarpine is the star molecule, with multiple trials of new formulations underway. It is a natural alkaloid that interacts with parts of the nervous system, which has the effect, in the eye, of inducing miosis—the narrowing of the pupil diameter—and contraction of the ciliary muscle, the ring of muscle that controls the shape of the lens. The two effects combined improve the elasticity of the lens and the ability to focus on nearby objects.
A recent trial conducted in Argentina has tested a pilocarpine eye drop at different concentrations (1 percent, 2 percent, 3 percent) in combination with diclofenac, a nonsteroidal anti-inflammatory that soothes the adverse effects of pilocarpine such as irritation and discomfort. (The FDA-approved pilocarpine eye drops are concentrated at 1.25 percent.)
In a two-year retrospective study of 766 people, average age 55 years, the researchers found that the eye drops enabled the majority of patients to improve their vision. “Our most significant result showed rapid and sustained improvements in near vision for all three concentrations,” said lead researcher Giovanna Benozzi when presenting the research at the 43rd Congress of the European Society of Cataract and Refractive Surgeons.
In a world that rewards short-term thinking and instant gratification, staying true to a long-term mission is becoming increasingly rare. In this personal reflection, I share the challenges and rewards of dedicating 15 years to The Emotion Machine, and why fighting the temptation of rapid success is key to building something truly meaningful and lasting.
When I first started this website in 2009, I told myself it was a lifelong project that I could continue to build on until the day I died. Fifteen years later, I still stubbornly hold onto this belief, but I underestimated the difficulty of this commitment.
Our current society does not reward long-term thinking. We are taught to live in the moment, take what is right in front of us, and indulge in what is comfortable and convenient, not in what is meaningful but hard.
This short-term attitude has taken over all of our society from business to politics to relationships.
It’s rare to see someone think on a long timeline, especially 10, 20, 50, or 100 years into the future. In many ways, our brains aren’t wired to think on this scale; but we’re capable of doing it, and developing real foresight and concern about the future is a necessary ingredient to almost all human greatness.
But who is really thinking about the future today?
Companies focus on their daily stock prices and quarterly earnings, politicians focus on their election seasons, new relationships are just one swipe away on a dating app, and modern work has become increasingly focused on gigs and temporary contracts.
Today, it’s rare to see anyone committed to anything for over 10 years, whether it’s a career, a relationship, a creative hobby, or a personal goal.
It’s not completely our fault. Our current world incentivizes this short-term thinking by promoting hedonism (“give me pleasure now”), materialism (“money is the most important thing”), and nihilism (“nothing really matters because eventually I’ll die”).
All of these beliefs and attitudes come together to create an epidemic of shortsightedness and selfishness, which ultimately leads to a lack of real meaning and purpose. This is not just an individual problem, but a systemic one that permeates our society and institutions on almost every level.
Where are the long-term visions?
Our society lacks long-term vision, and that lack manifests itself in countless ways. One example I know from firsthand experience is the short-term thinking within online creator “self help” spaces.
As someone who has been writing and sharing content for over a decade, I’ve seen thousands of other websites, blogs, and social media accounts come and go. Many of them get really hyped up on some version of “become your own boss” or “I’m going to be an influencer”-type mindset, and then give up after a couple months of disappointment.
One fundamental problem is that they were never emotionally invested in what they were building. Their work wasn’t driven by a long-term vision or deep-seated convictions; they were solely interested in what they perceived as an easy and convenient way to get popular or make money.
Once again, materialism shows its weakness. Money can be a bad motivator – even a destructive one – when it clashes with goals that require you to think beyond a mere trader mindset. If you are only motivated by money, then you are at the whims of money. If you are motivated by something deeper, then it takes more than money (or the lack of it) to stop you.
This same attitude reveals itself within a lot of startup and tech companies. Many of today’s entrepreneurs start new companies or projects just so they can sell them to a bigger corporation in a couple of years. They don’t build things from cradle to grave anymore. They don’t care about creative ownership of their projects, or about what happens to what they’ve built once it reaches the marketplace; they just see these projects as vehicles for quick bucks and rapid exits.
Fighting the allure of rapid and cheap success
Over the years I’ve had many opportunities to abandon the mission of this website for quick personal gain, but I chose not to.
I’ve rejected numerous money-making opportunities because I felt they jeopardized the integrity of the website, from paid sponsorships, to SEO backlinks, to advertisements, to tempting offers to buy the website outright.
In theory, I could sell this website overnight and it would be a massive financial relief to me, especially as costs of living increase and more people experience economic hardship and debt-based living.
These are difficult temptations I wrestle with. This world incentivizes short-term thinking and immediate rewards. I have to remind myself on a daily basis what my core values are.
I imagine my life if I sold this website. Sure, it would take care of my financial problems and give me more free time. I definitely have other goals and passions I could put more energy into, like music or screenwriting, but selling would also mean walking away from fifteen years of blood, sweat, and tears. That’s an emotional investment that is hard to rebuild with anything.
Most importantly, there’s more work to do. I still have hundreds of ideas and drafts for future articles that I need to write and publish. There’s still more to say – and I feel like I’d be doing a disservice to the world if I didn’t say it.
I look around the self help space today and believe my work still adds something special and valuable.
Building an evergreen website
Fifteen years isn’t that long compared to the timescale I’m thinking on.
All of the content on this site is designed to be evergreen, so someone can read an article a hundred years into the future and still take something valuable from it. In contrast, the majority of content on the internet that is focused on news, pop culture, or current events is barely relevant after a week.
From an intergenerational perspective, The Emotion Machine could be a website that exists long after my death if I can find someone to pass it down to as a successor at some point. I would love for it to be an ongoing project. Our tagline is “Self Improvement in the 21st Century” so I’m at least thinking on a one hundred year scale. I’ll have to remember to update that in 2100.
To be completely honest, I’m proud of the work accomplished here so far, even when I feel it isn’t fully appreciated. This site has a vast library of articles, quizzes, and worksheets, and while I find that most people (including monthly members) don’t fully take advantage of these resources, I know they stand on their own as evergreen education for whomever is willing to learn.
A lifetime commitment
This article is a declaration to myself more than anything. It’s been a tough year so far and I needed to remind myself what really matters to me and why I invest my energy in the things I do. People like you also help keep me going, especially those that join and support this work. Thank you.
Enter your email to stay updated on new articles in self improvement:
Rise Vision, the #1 digital signage software solution, is excited to announce the launch of its new screen sharing feature, designed to enhance collaboration, engagement, and teaching. This latest innovation allows users to seamlessly share content wirelessly from any device to any display running Rise Vision’s digital signage.
The new screen sharing feature transforms any Rise Vision display into a wireless presentation hub, eliminating the need for adapters, dongles, or proprietary hardware. With Rise Vision, users can now easily share their screens with no account required, or opt for a more secure, moderated session, ensuring full control over presentations.
“We developed this feature to meet the growing need for simple, accessible, and secure screen sharing in classrooms, offices, and other collaborative environments,” said Shea Darlison, Chief Revenue Officer at Rise Vision. “Our screen sharing solution offers a powerful, cost-effective way to make presentations more engaging and interactive, while minimizing the need for specialized hardware.”
Key features of Rise Vision’s screen sharing include:
Easy Sharing: Share content wirelessly from any device—laptops, tablets, and smartphones—to any Rise Vision display with no need for special training or professional development.
Cross-Platform Compatibility: Whether you’re using a PC, Mac, Android, or iOS device, Rise Vision’s screen sharing works across all devices and operating systems.
Secure Sharing: With moderator control and a secure pin-code system, users maintain full control over who can share their screen and to which display, ensuring a smooth and controlled experience.
Browser and Native Sharing: Share a window or your whole screen from your browser without installing an application, or use our Android and iOS apps to share from supported devices.
Centralized Cloud-Based Control: IT administrators can remotely manage all screen sharing devices from the cloud, saving time and effort in supporting users.
Rise Vision’s screen sharing feature is also highly cost-effective, offering organizations a streamlined solution for enhancing old displays and rejuvenating legacy hardware, without the need for costly replacements.
This new feature joins Rise Vision’s comprehensive suite of digital signage solutions, including digital signage management, emergency alerts, and hardware as a service, giving businesses and educational institutions the convenience of working with a single vendor.
For more information on Rise Vision’s new screen sharing feature, visit the company’s website.
eSchool Media staff cover education technology in all its aspects–from legislation and litigation, to best practices, to lessons learned and new products. First published in March of 1998 as a monthly print and digital newspaper, eSchool Media provides the news and information necessary to help K-20 decision-makers successfully use technology and innovation to transform schools and colleges and achieve their educational goals.
Rohaan’s always had an air of mystique around him, even back in his early days in the experimental trap and deep bass worlds, releasing on MAD ZOO and Deadbeats. His style has that unmistakable quality that the likes of IMANU, Current Value and Amon Tobin have, which certainly transcends genre, but also seems to transcend space and time. A powerful manifestor as well as a creator of some of the most interesting beats of the last seven years, it seems inevitable that he would eventually release on VISION.
With his genreless take on deep bass, Rohaan’s first release with the Noisia boys was actually on their erstwhile “miscellaneous bass” label, Division. He made a funky, loud dubstep remix of Tek Genesis’s “Cloud Kingdom Theme” that seemed like a departure even from his own diverse style. But if we’ve come to expect anything from Rohaan, it’s the unexpected. His debut EP, Boy In A Dream, which came out earlier this month on VISION, is certainly that. Containing everything from techy, clubby D&B that defies subgenre to Amen-chopping almost-jungle to video game halftime to techno-infused bass house, fans wouldn’t be surprised if samples from an actual kitchen sink were thrown in just to make a point.
Because of the diversity (even for Rohaan) of this EP, YEDM wanted to catch up with the Manchester-based artist to find out how the hell this extremely interesting piece of work came together. The takeaway? It’s a love letter to the club. Rohaan’s advice for making D&B? Don’t listen to D&B. Read on.
Let’s start with the tagline VISION used in your promo: “2 years ago I wrote ‘Vision Recordings’ on a note and stuck it to my bedroom wall…and now here we are.” What does reaching this goal mean to you?
So, I write four key goals on a note each year. These are usually written at a time when that goal is in my line of sight but very far away. So, to be here, EP made and released, it’s a wonderful career-affirming place to be. I have looked up to VISION since I was at school studying music, my best friends and peers all love the label, so it’s definitely a wonderful place to be knowing my sound fits the bill!
Some fans might actually be surprised to learn that Boy In a Dream is your Vision debut EP, as your sound’s always seemed well-suited to the label, especially in recent years. Why do you think now is the right time or what do you think made this EP stand out to them?
I’ve had multiple releases with them in the past, doing three remixes for the likes of Noisia, The Upbeats and Icicle, then a collab single with Tom Finster. This is my debut solo release with them. We actually started working on the idea of an EP back in September 2022, so it’s been a long process of many demos and many weeks of refining my sound to get here. Very excited to bring it to life.
It seems clear on the EP that you didn’t necessarily have a specific label in mind; how did you go about putting it together, especially in terms of all the styles?
In terms of this release, we had many conversations with VISION to refine the huge demo list and get them to the final 6 that you hear today. Some of these were just fun things I started, others were specifically made for VISION, so it varies. My style and sound are quite eclectic, so I wanted to showcase that in this EP.
While a lot of fans think you hit the bigs somewhat suddenly with Shogun, prior to that, you released on some excellent cutting-edge imprints like Deadbeats, Mad Zoo and Unchained. How do you think your experience working with the more twisted beats labels shaped your style when it began to get more popular?
With each release, I’m learning and evolving, both through external life experience and seeing the response to my music from fans’ point of view. My style has definitely evolved into two parts: pop/more stream-friendly, and club music. My recent single “Run Away” with Kelbin is a great example of the pop side. My Boy in a Dream EP is a great example of my club influences. It’s been amazing to see my name and my homies’ names gain so much traction over the last few years. That we can actually host headline shows and make music for a living is a wonderful thing.
In terms of style, where do you feel you take the most influence from? Did you really focus on curating your style in the beginning or was it more hit and miss?
I have a Patreon page where I posted a video recently about “how to find your sound and create something original.” I talk about the importance of expanding your creative inputs and horizons and the career-shifting results that will have in the long run. I’m passionate about that for sure.
All your previous EPs have been, despite the complexity and diversity of the sound, honed around a specific concept. Were you thinking concept EP for Boy In a Dream? If so, what was it?
To be honest, this is more of a collection of club-leaning tunes. No deep story with this one. Each track is its own world, its own universe for people to explore. My Bleach EP was a true story-driven EP, but this one felt great to just give it all to the club scene. I have been on tour for the best part of a year and a half now, all over the world, so my input is mostly club music and energy leaning that way, hence the output of this EP. I’m a boy living his dream.
Each individual track seems to be its own mini theme or concept within the EP. How do you go about putting a vibe together for a track? What was your goal for some of your favorites on the EP?
I really try to say one thing through a track and say it the best I can. So each track is a refined version of its demo self. Each track has a clear theme from start to finish and says it the best I could get it to say it with my current creative self. Each track serves a different purpose.
Conceptualizing aside, do you think fans will be able to recognize the vein of your style that runs through all the tracks?
It’s not something that I think about, really. It’s all got the Rohaan name on it; it’s a more refined version of my sound and gives them a taste of it all. If they come to a show of mine, they will see the full extent of my style come through.
What do you want listeners to take away from the EP as a whole?
I want them to play it as loud as they possibly can and to as many people as possible. This EP is for the club and the house party, so enjoy!
Anything else exciting on the horizon? What can fans expect next from you (aside from the unexpected)?
Many a thing! I’m just about to finish my four-week North American Tour, and I have loads of singles already lined up for this year. I’m playing Tomorrowland, Lightning in a Bottle and some more huge festivals that I can’t say just yet. But what a journey so far! I’m so grateful for every person that reaches out about my music. I just got gifted a watch in NYC! So I’m just taking it all in, really.
Thank you for having me and be sure to come to one of my upcoming shows. They are special!
Boy In a Dream is out now on VISION and can be streamed on Spotify or purchased on Beatport.
Newswise — Northampton, MA – February 4, 2024 – Today, the American Macular Degeneration Foundation (AMDF) and the Thought Leadership & Innovation Foundation (TLI) announce a new strategic partnership aimed at amplifying awareness and understanding of macular degeneration, a leading cause of vision loss in older adults.
Kicking off this collaboration is the release of Living with Macular Degeneration: Patient Stories | Laura Carabello: The Benefits of Early Intervention, a short film produced by AMDF featuring TLI Fellow and AMDF patient advocate Laura Carabello.
“This strategic partnership with TLI unlocks a vast potential to reach millions impacted by macular degeneration,” says Matthew Levine, Director of Grants, Advocacy & Partnerships at AMDF. “TLI’s expertise in thought leadership amplification will strengthen our trusted resources, elevate patient voices, and drive impactful conversations with eye care specialists, researchers, and policymakers.”
Carabello’s story offers a window into daily life with macular degeneration and the experience of anti-VEGF treatments. Diagnosed with wet macular degeneration, Laura credits her awareness of her genetic risk with seeking immediate medical attention upon experiencing symptoms, and thereby retaining much of her vision. Her narrative powerfully underscores the importance of early detection of macular degeneration and adherence to treatment plans.
Macular degeneration, also known as age-related macular degeneration (AMD), affects central vision, color perception, and fine detail clarity, greatly impacting daily living and independence. Aging, family history, smoking, poor diet, obesity, and high blood pressure are key risk factors.
“This partnership embodies a shared commitment to making a tangible difference in the lives of those living with macular degeneration,” says Shawn Murphy, vice president, TLI. “Laura’s story is a testament to the power of early intervention, ongoing care, and the transformative potential of effective treatments.”
Together, AMDF and TLI are poised to illuminate a brighter path for individuals facing macular degeneration. Beginning in February, which is AMD Awareness Month, through ongoing collaborative efforts in awareness campaigns, groundbreaking research initiatives, and patient/provider support, they aim to minimize the impact of this condition and safeguard the precious gift of sight.
About The American Macular Degeneration Foundation
The American Macular Degeneration Foundation (macular.org) is a patient-centric foundation that supports potentially game-changing AMD research, education and advocacy in order to improve quality of life and treatment outcomes for all of those affected by AMD.
About TLI
The Thought Leadership & Innovation Foundation (TLI) is a not-for-profit organization that works at the nexus of science, technology and public health, innovating for superior prevention, treatment and outcomes for those facing life-altering medical diagnoses. TLI helps patients across the country and around the world find better healthcare outcomes. Visit www.thoughtfoundation.org and follow us on LinkedIn.
Opinions expressed by Entrepreneur contributors are their own.
According to Goldman Sachs, the economic stage for 2024 appears to be a bullish one, as it predicts an annual global GDP growth of 2.6%, which should buoy spirits if you’re a leader hoping for happy returns. Be careful, though: Growth and scaling aren’t always synonymous. If you have unrealistic expectations when it comes to the latter, you could well hamper the results of the former.
The simple fact is that the vast majority of companies don’t have an unlimited capacity to scale. At some point, rapid and unchecked growth can cause them to buckle and break in operation and logistics, which upends vision, brand and broader intentions.
At EOS Worldwide, we have a cultural ethos that everyone should fight for the greater good, which is seen in our core values, as well as in our focus and marketing strategy. Everyone moves forward because of that shared vision and care. And the payoffs go far: Team members feel confident in their purpose, as well as empowered because they know they’ve been chosen specifically for a unique set of talents. Scaling happens naturally as a result.
Among the critical considerations in avoiding overextension is determining which pace is uniquely right for you, certainly, but also ensuring that your vision is more than words.
Begin with a documented “North Star” concept to be embraced today, tomorrow and far into the future. Make it at once compelling and clear, and be certain that it resonates with all team members. If behaviors among some staff members aren’t aligning, for example, it might well be that vision training hasn’t been sufficient. This can be frustrating as you start to scale, which makes it an absolutely critical step.
Keep in mind, too, that instilling a vision effectively isn’t cheap in any sense: it means investing money, time and energy, and you might have to give up some efficiency in the process. There is, after all, an inherent inefficiency in driving toward a shared goal, because you need to make room for creativity and exploration.
Your vision also needs to be protected. It sets core values, and so it’s vital to avoid bending or breaking it in order to attain scaling ambitions. For example, one of our company’s core values is to “do the right thing.” Sounds disarmingly simple, but we make a point of following through on it via another core principle: “helping first.” This means that we train our teams to give without expecting anything in return. Again, this isn’t always efficient, but it keeps us grounded and consistent.
We’re still scaling, to be sure, but simply aren’t willing to sacrifice purpose, or to stray outside niche or core competencies. Consequently, our 10-year growth target is doable, because it has just enough dynamic tension to keep everyone stretching toward an ambitious objective while also having the right amount of “give” so the challenge doesn’t break everyone.
Has your company lost its way in an effort to scale without restraint? Then consider putting the following measures in place:
1. Break big “Rocks” into smaller ones
You likely already have one-, three- and 10-year targets. Perfect, but to make sure you’re moving in a steady and manageable direction, my suggestion is that you create something analogous to what we term at EOS Worldwide a 90-Day World™ and individual “Rocks” (objectives) therein. It’s a structure specifically designed to mark each quarter-year contribution towards annual goals and has resulted in measurably greater success.
Your version might include giving every team member a weekly scorecard that includes key tasks towards meeting 90-day expectations. It’s then the responsibility of managers to work to ensure employees are hitting scorecard numbers — making progress toward personal and company objectives. This process also keeps an organization from scaling too fast, as it’s a form of reverse engineering that starts with a broader vision: Nothing can suddenly get added (like a new product line) that doesn’t mesh with that mission focus.
2. Make sure you’ve got the right mix
Every person has two roles at work: the one they play today and the one they’ll play in the future. However, you can’t just scale big and hand out dozens of promotions in a year, or teams wind up feeling overwhelmed and unprepared.
So, employees need to be given the capacity, time and energy necessary to grow. For example, say you’ve mapped out an accountability chart that anticipates the staff knowledge and expertise you’ll need in one year or three years. Is the current team going to be the one to execute effectively? Do they have the capacity and resources?
Knowing the answers to these questions early means you can prepare accordingly, which might or might not include rearranging a team. In a 2021 survey, the Pew Research Center revealed that a stunning 63% of workers were ready to leave their employers because of a lack of promotional opportunities. This means that if you’ve hired the wrong people and can’t provide advancement, you owe it to them to either find a way to upskill or say goodbye in a respectful and responsible way that aligns with your vision.
Another pitfall of scaling too quickly is an inability to maintain a preferred culture. To avoid a forced or brittle atmospheric shock during robust growth, it’s pivotal to treat company culture with intention, and patience.
Consider Starbucks and its scaling challenges, detailed in part in a Branding Strategy Insider article. It’s a powerhouse now, but it hit growth boundaries the hard way. For the first couple of decades, growth was modest; then came an inflection point where the company added 200-plus locations annually. As its former CEO, Howard Schultz, explained in his 2012 book, Onward: How Starbucks Fought for Its Life without Losing Its Soul (Rodale Books), the business scaled so quickly that it broke its ability to properly serve customers. Its people could no longer create or control the desired experience, and the culture suffered. Fortunately, the now-35,000-plus-location colossus made this realization early and righted the ship.
Infinite scaling may sound like the fast track to profitability, but it’s a unicorn dream: Don’t fall for that temptation. Instead, plan growth based on vision, people and culture. You’ll then operate with thoughtful restraint and be faced with fewer preventable problems.
As humans age, our eyes adjust based on how we use them, growing or shortening to focus where needed, and we now know that blurred input to the eye while the eye is growing causes myopia.
Newswise — In a recent article in The Lancet, David S. Friedman, MD, PhD, MPH, director of the Glaucoma Service at Mass Eye and Ear, and colleagues describe the current state of glaucoma care and what advances the future might bring to patients. In this Q+A, he discusses how far treatment has come and what can be expected in the near future.
Glaucoma is currently the second-leading cause of blindness worldwide, and in the United States alone, about 3 million Americans live with glaucoma. Glaucoma is often referred to as the “sneak thief of sight” because many people do not realize they have it until the disease is severe. The damage from glaucoma is irreversible, so earlier detection and treatment is essential to avoid unnecessary blindness.
In the paper, published last month in The Lancet, David S. Friedman, MD, PhD, MPH, director of the Glaucoma Service at Mass Eye and Ear, and colleagues discuss the tremendous need for better detection and treatment of glaucoma. Dr. Friedman spoke with Focus to describe some of the current limitations in glaucoma care and what research is underway to improve care for patients.
What exactly is glaucoma?
Glaucoma is a specific form of damage to the optic nerve. The optic nerve is the cable that connects your eye to your brain and if it is damaged, vision is not normal. Early in glaucoma there are usually few or no symptoms. As it gets worse, there can be some difficulty when going from bright to dark or from dark to bright. Later in the disease the side (peripheral) vision is lost, and it can be difficult to do many daily activities like moving around easily and reading.
How does glaucoma impact the daily lives of your patients?
Patients with glaucoma have several limitations: They are more afraid of falling and fall more frequently, they walk more slowly and reading is more difficult than for those without glaucoma. With more advanced disease they are more likely to stop driving. Severe glaucoma can lead to substantial limitations, but fortunately, most patients under care retain good useful vision.
A photo representation of vision loss a person with glaucoma may experience. Credit: National Eye Institute
Who is more at risk for glaucoma?
Glaucoma is much more common with aging, so older people are more likely to have glaucoma than younger people. Glaucoma is also more common in certain groups with African Americans having nearly four times the rate of Whites. Hispanic populations, especially older ones, also have high rates that are similar to African Americans.
Can glaucoma be cured?
Glaucoma cannot be cured. The damage to the nerve that occurs, at present, cannot be reversed. That said, our treatments, which can include eye drop medications, laser treatments and surgery, can help retain the remaining vision and the great majority of patients are mostly stable when under care. Many promising new therapies are being investigated that may lead to vision restoration in glaucoma patients, but none are presently available for care.
What are some of the current knowledge gaps in glaucoma care?
Research is uncovering more information about glaucoma than ever before, such as the role of certain genes in glaucoma. Despite this incredible progress, we still do not have a perfect screening test that could be administered easily and accurately to identify glaucoma patients. Many do not realize they have glaucoma until the disease is advanced. We also struggle determining which patients are getting worse despite being treated for glaucoma. Improvements here would allow for more rapid and targeted interventions for patients.
There also are treatment gaps at present. The only effective treatment for glaucoma remains lowering eye pressure. Yet, about half of all patients with glaucoma have intraocular pressure in the normal range so factors other than eye pressure play an important role in why some people get glaucoma. We need treatments that protect the nerve through other means.
Dr. David S. Friedman performs an eye exam on a patient at Mass Eye and Ear.
What are some innovative areas of research that excite you for the future?
There are several areas of study that are quite exciting and suggest a brighter future ahead.
Better testing of patients should be widely available in the coming years. There are several companies that have developed virtual reality headsets that can test vision and side vision and these likely will become a standard approach to monitoring glaucoma over time.
For treatment, improved delivery of eye medications should occur soon. Some approaches allow for long-term delivery of eye pressure-lowering medications through implanted devices in the eye, or through a contact lens that can deliver the drug. Perhaps even more exciting are novel approaches to protect or regenerate the nerve, including gene therapy and stem cell therapy. These approaches are still in early development but hopefully can lead to clinical trials over the next few years. Gene therapies are currently used for other retinal conditions, such as forms of inherited retinal blindness. Research into stem cells is also promising, and some day we may be able to transplant these cells to replace the tissues damaged in glaucoma.
Another exciting approach being actively researched looks at how to apply our knowledge of glaucoma genetics to provide personalized care to patients. Some treatments may work better with patients with specific genes, for example. There are numerous studies underway, including several at Mass Eye and Ear, that are using genetics, artificial intelligence and advanced imaging to develop personalized risk scores for patients that could better predict how their glaucoma will progress, which might lead to better and more personalized care.
How has glaucoma care evolved since you started practicing, and where do you see disease management evolving over the next 5 years?
There have been major advances in how we image the nerve and the nerve fiber layer that have greatly improved our ability to monitor patients and diagnose glaucoma. This has been a dramatic change. We also have safer procedures for lowering eye pressure. While these are important advances that have benefitted our patients, much still needs to be done.
We now have tremendous knowledge about the genetics of glaucoma, and this will transform how we care for patients in the coming years.
There is also much more clinical trial evidence for how we should be treating patients with glaucoma.
By now, everyone in both the D&B and bass worlds knows that when Billain is about to drop a new release, it’s going to be a game-changer. With his last release back in February, the scene is more than ready for a new joint from the Bosnia-based mega-producer. Or so they think. Different Eyes, the upcoming EP due out this Friday, November 17 on Vision, is once again going to lock Billain into the pinnacle of creativity in bass music.
Having already teased the title track two weeks ago, fans might assume Different Eyes will be another atmospheric concept EP, similar to 2022’s Lands Unbreached or 2019’s Nomad’s Revenge. Being that Billain has been so focused on film production with his multi-award-winning short Fugitive, scoring said film, and new A/V projects, it wouldn’t be too farfetched of an assumption. It would, however, be wrong. The fast, aggressive, yet painfully emotive D&B styles that caused both industry and fans to become infatuated with the dizzying levels of production this artist can attain are on Different Eyes in full force.
While almost every track on this EP can easily unalive any dancefloor, it’s important to note that Different Eyes is still a concept album and a journey, and it should be listened to as such at least once. It starts with the atmospheric, largely beatless wonder of an intro track, “It’s First Dream.” This lullaby brings the listener back into a world that only Billain fully knows: one of heavy atmos, cyberpunk dreamscapes and endless lands made of sound and code. It’s actually kind of him to lull the listener into this state, because the next tracks hit so damned hard, we need a buffer.
What follows in the next five tracks is a sequence of ever faster and crunchier bass hurricanes, reflecting chaos, anger and tightly reined skill all at once. Our YEDM premiere is the second track, “Baka,” whose title is presumably taken from the anime slang for “crazy” or “foolish,” and it certainly has the wild chaos of an anime fight scene. Easily the heaviest and most chaotic track on the EP, “Baka” drops the listener into the narrative of Different Eyes like a 3-meter vert ramp and doesn’t let go until it’s damn well ready. As intense and chaotic as it sounds, “Baka” was likely the most tightly produced track on the album, simply by virtue of how chaotic it is. It’s always the maddest syncopation that takes the most programming, and it might also be a little nod to jazz fusion. Only the best DJs will be able to mix this track, and it’s likely that’s the way Billain wanted it.
Going through the rest of the EP with “Kinetic,” “Uncanny Valley,” “FUCK Y00” and “Void Me,” the intensity and speed of the work only increase, but unlike “Baka,” they all have a trackable drum & bass beat. The EP ends up feeling like exploring a wild new planet in some futuristic inner-space hellscape: from the prep of “Different Eyes” to the bumpy, aggressive culture shock of “Baka” to finding one’s stride in the “Uncanny Valley” to being over it already with “FUCK Y00” to the ego-destroying last ride of “Void Me.” “Different Eyes” is the victory lap, a reward for beating the game and making it through this fever dream of an EP.
As a psycho-thriller in sonic form, Different Eyes is a reflection of doing hard inner work. It’s chaos and anger and confusion and a hurricane of emotions, but the title track is the goal: meant to be the new perspective once one has let all that beautiful pain go. Whether you are working on something personally or just unthinkingly follow the arc of this EP, you will come to the end of this masterpiece seeing the world with “Different Eyes.”
Different Eyes drops tomorrow, November 17 on Vision. Click here to pre-order or pre-save.
Use this worksheet to meditate on each of your five senses. Take a step back and make note of any stimuli you observe through your vision, hearing, touch, smell, and taste. This is a great exercise to improve mindfulness and non-judgmental awareness.
Vision is the cornerstone of achievement, and visionary leaders possess the unique ability to see opportunities where others see obstacles. Learn how to unlock the secrets of becoming a visionary leader and start your journey toward unprecedented success!
Join us on October 26th at 2:00 PM ET for an inspiring webinar led by Logan Stout, author, keynote speaker, and entrepreneur, whose companies have generated billions in revenue. Discover how you can become a visionary leader not only for yourself but for everyone counting on you.
During this insightful webinar, you will learn:
How to establish a clear Vision that guides your path to success.
Strategies to take action on your Vision and turn dreams into reality.
Techniques to embody your Vision, making it an integral part of your leadership style.
Methods to effectively transfer your Vision to inspire and empower your team.
The self-discipline needed to stay committed to your Vision, no matter the obstacles.
Don’t miss this opportunity to learn from Logan Stout’s wealth of experience and wisdom. Register now to secure your spot for this transformative webinar on visionary leadership! Whether you’re an aspiring leader or an established one, this event will equip you with the skills and mindset needed to make your Vision a reality.
Register now and set yourself on the path to becoming a visionary leader.
About the Speaker:
Logan Stout is an accomplished business owner who has generated billions of dollars in revenue throughout his career. He is a philanthropist, entrepreneur, best-selling author, keynote speaker and leadership trainer who has made regular appearances across major media outlets: TV, magazines, radio, podcasts and more.
He has been endorsed by Hall of Fame athletes including Troy Aikman and Pudge Rodriguez; renowned entrepreneurs Barbara Corcoran and Daymond John of ABC’s Shark Tank; Success Magazine’s Darren Hardy; Zig Ziglar’s son and CEO of Ziglar, Inc., Tom Ziglar; and many more spanning a wide range of professions.
Newswise — A collaboration between researchers at the University of Illinois Urbana-Champaign and Duke University has developed a robotic eye examination system, and the National Institutes of Health has awarded the researchers $1.2 million to expand and refine the system.
The researchers have developed a robotic system that automatically positions examination sensors to scan human eyes. It currently uses an optical scan technique which can operate from a reasonably safe distance from the eye, and now the researchers are working to add more features that will help it perform most steps of a standard eye exam. These features will require the system to operate in closer proximity to the eye.
“Instead of having to spend time in a doctor’s office going through the manual steps of routine examinations, a robotic system can do this automatically,” said Kris Hauser, a U. of I. computer science professor and the study’s principal investigator. “This would mean faster and more widespread screening leading to better health outcomes for more people. But to achieve this, we need to develop safer and more reliable controls, and this award allows us to do just that.”
Automated medical examinations could both make routine medical services accessible to more people and allow health care workers to treat more patients. However, medical examinations present unique safety concerns compared to other automated processes. The robots must be trusted to operate reliably and safely in proximity to sensitive body parts.
A prior system developed by Hauser and his collaborators deploys a technique called optical coherence tomography, which scans the eye to create a three-dimensional map of its interior. This capability allows many conditions to be diagnosed, but the researchers want to expand the system by adding a slit eye examiner and an aberrometer. These additional features require the robot arm to be held within two centimeters of the eye, highlighting the need for enhanced robotic safety.
“Getting the robot within two centimeters of the patient’s eye while ensuring safety is a bit of a new concern,” Hauser said. “If a patient’s moving towards the robot, it has to move away. If the patient is swaying, the arm has to match their movement.”
Hauser likened the control system to those used in autonomous vehicles. While the system can’t react to all possible human behaviors, he said, it must prevent “at-fault collisions” like self-driving cars must do.
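The behaviors Hauser describes, retreating when the patient moves toward the probe and matching the patient's sway, amount to holding a fixed standoff distance from a moving target under a velocity limit. The article does not describe the actual controller; the following is a minimal one-dimensional sketch of that idea, with all names, gains, and distances chosen for illustration only.

```python
# Illustrative sketch of a standoff-keeping controller (not the actual
# system): the probe tracks the patient's eye while holding a fixed
# safety distance. Gains and limits below are made-up example values.

TARGET_STANDOFF = 0.02   # desired probe-to-eye distance, metres (2 cm)
GAIN = 0.5               # proportional gain on the standoff error
MAX_STEP = 0.005         # cap on probe motion per control tick, metres

def control_step(probe_pos: float, eye_pos: float) -> float:
    """Return the new 1-D probe position after one control tick.

    The probe moves toward the point TARGET_STANDOFF in front of the
    eye; if the eye advances toward the probe, the error flips sign
    and the probe retreats instead.
    """
    desired = eye_pos + TARGET_STANDOFF          # setpoint ahead of the eye
    step = GAIN * (desired - probe_pos)          # proportional correction
    step = max(-MAX_STEP, min(MAX_STEP, step))   # velocity limit for safety
    return probe_pos + step

# A stationary eye with the probe starting too far away: the probe
# closes in and settles at roughly 2 cm of standoff.
eye, probe = 0.0, 0.05
for _ in range(50):
    probe = control_step(probe, eye)
```

The velocity clamp is the safety-relevant piece: however large the tracking error, the probe never moves faster than `MAX_STEP` per tick, and a sudden patient movement toward the probe produces an immediate retreat rather than a collision.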
The award will enable the researchers to conduct large-scale reliability testing. An important component of these tests is ensuring that the system works for as many people as possible. To achieve this, the researchers have developed a second robot that will use mannequin heads to emulate unexpected human behaviors. Moreover, the second robot will automatically randomize the heads’ appearance with different skin tones, facial features, hair and coverings to help the researchers understand and mitigate the effects of algorithmic bias in their system.
The system will be designed for use in clinical settings, but Hauser imagines that one day such systems could be used in retail settings much like blood pressure stations.
“Something like this could be used in an eyeglass store to scan your eyes for the prescription, or it could give a diagnostic scan in a pharmacy and forward the information to your doctor,” he said. “This is really where an automated examination system like this would be most effective: giving as many people access to basic health care services as possible.”
***
Duke University professors Joseph Izatt of biomedical engineering and Anthony Kuo of ophthalmology are co-principal investigators.
The award, cosponsored by the National Robotics Initiative, will be distributed over three years.
University Of Illinois Grainger College Of Engineering
Newswise — Primate species with better colour vision are not more likely to have red skin or fur colouration, as previously thought.
The findings, published this week in the Biological Journal of the Linnean Society, suggest that red skin and/or red-orange fur may be beneficial for use in social communication even in primate species that don’t have particularly good colour vision.
It’s long been assumed that primates’ colourful skin and fur is linked to their enhanced colour vision, and the results may have implications for understanding why these traits exist in different species.
Lead author Robert MacDonald from the University of Bristol explained: “There is a profusion of colour in the animal kingdom – think of the striking feathers of a bird of paradise, or the array of vivid hues on display in a coral reef.
“Mammals, though, don’t tend to be so colourful, and are usually quite muted shades of black, brown, or grey.
“Primates such as monkeys, apes and lemurs are the exception to this. Several primate species have really vibrant coloration, in particular bright red skin on the face or anogenital region which can change intensity to signal things like fertility or rank in the dominance hierarchy, or red-orange fur.
“Primates also happen to have unusually good colour vision in comparison to other mammals; while all other mammals are red-green colourblind, meaning red and green appear as the same colour to them, some primates (including humans) can differentiate between shades of red and green. This enhanced colour visual system is generally thought to have evolved in order to more easily spot ripe red fruit or nutritious young red leaves among foliage, but it also makes it easier to spot the vibrant red colours that some primates exhibit.”
Primates are known to use their red colour traits for communication with other members of their species, for example in signalling information about fertility or rank in the social hierarchy. It seems intuitive that having a better colour visual system that allows these traits to stand out more might have facilitated the evolution of these traits in the first place – it would make sense for a species with better colour vision to evolve to be more colourful to take advantage of this ability.
The team set out to definitively investigate whether the evolution of an enhanced colour visual system in some primates, one that allows the differentiation of red from green, has facilitated the evolution of red colour traits.
Using photographs, the researchers categorised each species of primate in terms of having or not having particular colourful traits (e.g. red skin on the genital region or face, red-orange fur on different parts of the body). They then compared this colour information with each species’ colour visual ability, taking into account the primate family tree, as well as a few other factors which might also influence coloration or colour visual ability such as whether they’re nocturnal or diurnal and the size of the social group they live in. The aim was to find out whether species that have better colour vision are more likely to have red colouration, after controlling for other potential influencing factors.
Robert explained: “The fact that we didn’t find that species with better colour vision are more likely to be colourful contradicts some long-held assumptions about the origins of the striking variation in colour we see within primates, and means we might have to take a closer look at what colourful red skin or fur is being used for in individual species. It shows that despite the large amount of work that has gone into investigating primate colouration in recent years, we still don’t fully understand the pressures that have shaped the evolution of colour in our own closest relatives.”
Newswise — The ability to visualize faces, objects, landscapes, or even scenes from the past exists on a spectrum. While some can picture the layout of a city in minute detail and mentally walk through it, street by street, others have a perfectly blank internal cinema. In this case, we speak of aphantasia—the inability to voluntarily produce the visual mental image corresponding to an idea.
People whose aphantasia is congenital—i.e., not due to a stroke, brain injury, or psychiatric illness—become aware of their peculiarity relatively late in life. Indeed, this small deficit in visualization does not cause any handicap, and they have no reason to suspect they are atypical. Nor do they realize that at the other end of the spectrum are hyperphantasic individuals who can produce mental images as precise as illustrations in a book.
“Talking to these people is fascinating. We tend to think that access to visual perception, conceptualization, and memory is the same for everyone. Nothing could be further from the truth,” Paolo Bartolomeo, neurologist and researcher at Paris Brain Institute, says. “Aphantasics cannot mentally picture what their parents, friends, or partner look like when they are away. But they can still describe the physical characteristics of their loved ones: this visual information has been stored, in one way or another”.
Visual mental imagery in question
There is currently a lively debate about the origin of aphantasia. Is it linked to a perceptual deficit? Emotional and psychological factors? A slight difficulty in accessing one’s sensations? To answer this question, Paolo Bartolomeo and Jianghao Liu, a doctoral student in the “Neurophysiology and Functional Neuroimaging” team at Paris Brain Institute, recruited 117 volunteers—including 44 aphantasics, 31 hyperphantasics and 42 people with typical mental imagery—and gave them a mental imagery and visual perception test.
“Our test, called the Imagination Perception Battery (BIP), is designed to assess the link between perception and mental imagery through the different visual qualities that enable a scene to be described—such as shape, color, position in space, presence of words or faces,” Jianghao Liu explains.
Participants were asked to look at a blank screen. At the same time, an off-screen voice announced a visual quality (such as ‘shape’), followed by two words corresponding to concepts they had to materialize in their minds as accurately as possible (‘beaver’ and ‘fox’ for example). The voice also gave them a qualifier (such as ‘long’); then, the participants were asked to decide which of the beaver or fox best matched the epithet ‘long’.
The speed and relevance of responses were recorded, and the respondents were asked to assess the quality of the mental image they had—or had not—managed to produce from the description. Finally, they had to take a perception test in which the stimuli were presented in a visual format: the long fox appeared in the form of an image accompanied by its audio description without the participants having to picture it.
When imagination takes its time
“Our results indicate that the performance of people with aphantasia is equivalent to other groups in terms of perception and the ability to associate a concept with its representation,” Liu comments. “With one exception! Aphantasics are, on average, slower than hyperphantasics and typical imagers when it comes to processing visual information, particularly shapes and colors. They also have little confidence in the accuracy of their answers”.
Previous studies have shown that aphantasics are just as quick as other people to answer questions that require manipulating abstract concepts. Therefore, only the processing of visual information is delayed for them. How can this phenomenon be explained?
“Participants in the aphantasic group perceive elements of reality accurately and show no deficits in memory and language processing. We believe that they present a slight defect of what we call phenomenal consciousness. This means that they have access to information about shapes, colors, and spatial relationships—but that this visual information does not translate into a visual mental image in conscious experience”, Bartolomeo says. “This peculiarity is probably compensated by other cognitive strategies, such as mental lists of visual characteristics, which allow aphantasics to remember everything they have seen.”
The future of perception
These preliminary results are limited by the data collection method, which consisted of an online questionnaire. However, they put us on a promising track to understand how visual mental imagery works. Future studies could reveal the neural mechanisms underlying these observations and, ultimately, help us to understand the visualization deficits specific to stroke patients.
“We also hope to develop interventional tools for certain psychiatric illnesses, such as post-traumatic stress disorder (PTSD), which is characterized by the eruption of images from traumatic memories. If we could rid patients of these intrusive mental images, it would greatly promote their recovery”, Liu concludes.