5 Ways UC Davis Says AI Is Surprisingly Good

How Artificial Intelligence Can Help in Diagnostics, Weather Forecasting and More

AI-generated drawing of the ChatGPT logo with a smiley face on it and arms extending to hand a book to someone, plant a tree, give ideas, and shield someone from the rain with an umbrella.
When asked how it would represent itself doing good things, ChatGPT said “I think it would be a blend of symbols and gentle humor — a sort of ‘ChatGPT mascot’ that hints at my purpose. It would be less about me as a machine and more about the feeling I want to give: help, clarity, and a bit of delight.” (Image generated by ChatGPT)

The capabilities of artificial intelligence are expanding every day. UC Davis is harnessing AI in many beneficial ways. Here, researchers in veterinary medicine, health, climate science, engineering and education ask how AI can help in their fields — and discover the good side of the growing technology.

How Can AI Change the Way Vets Diagnose Disease?

Artificial intelligence has the potential to reshape how veterinarians diagnose diseases in animals. Researchers at the UC Davis School of Veterinary Medicine are leading some of that transformation. They’ve developed three AI models that can help detect specific conditions using routine bloodwork: leptospirosis (a serious bacterial disease), Addison’s disease (a rare, life-threatening disorder of the adrenal glands) and a liver abnormality called a portosystemic shunt. These tools can spot patterns in lab results that might otherwise be missed and speed up diagnosis.
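The internals of these diagnostic models aren’t described here, but the general idea, scoring a pattern of bloodwork abnormalities to flag cases for follow-up, can be sketched with a toy logistic model. The feature names and weights below are purely hypothetical, not the clinic’s actual model:

```python
import math

# Hypothetical red-flag features for an "Addison's-like" bloodwork pattern.
# A real model would learn its weights from thousands of labeled cases;
# these numbers are illustrative only.
WEIGHTS = {
    "sodium_low": 1.4,      # hyponatremia
    "potassium_high": 1.8,  # hyperkalemia
    "na_k_ratio_low": 2.1,  # drop in the classic Na:K ratio
}
BIAS = -3.0

def risk_score(features: dict) -> float:
    """Logistic score in (0, 1): a probability-like flag for follow-up testing."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A panel showing all three red flags scores high; a normal panel scores low.
flagged = risk_score({"sodium_low": 1, "potassium_high": 1, "na_k_ratio_low": 1})
normal = risk_score({})
```

Running such a score in the background on every blood panel, as Keller describes below, is what lets the system surface cases no one specifically asked it to check.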

Stefan Keller, a veterinary pathologist at the UC Davis School of Veterinary Medicine, said Addison’s disease, for example, is incredibly difficult to recognize because it can mimic other conditions, like kidney or intestinal disease.

Computer screen showing black and white diagnostic imaging of a dog.
Technicians at the UC Davis Veterinary Medical Teaching Hospital analyze diagnostic images of a dog to identify issues in the spleen. (Gregory Urquiaga / UC Davis)

“In that specific case, what we’re doing is we are trying to run the AI application in the background on every blood work that goes through the clinic,” he said. “The idea here is to catch cases that would have not otherwise been identified by a clinician.”

AI could also play a role in radiology. Some companies already offer AI-based analysis of X-rays, which can assist general practitioners who may not have specialist training. Keller said this could give pet owners more immediate and, in theory, cheaper access to care that would normally require a specialist. But that convenience comes with risks.

“You potentially cut the expert out of the loop depending on how you set that up.”

Keller believes machines are better at recognizing patterns, which holds great promise for diagnosing disease. “In my field, we look at tissues and try to diagnose what type of cancer it is,” he said. “We are now at the point where machines can not only tell what type of cancer it is, but they can also diagnose the underlying gene mutation.”

But turning AI into a practical tool in the clinic isn’t easy. The technology has to connect with existing electronic medical record systems — something they were never designed to do. UC Davis veterinarians are also thinking about how to train veterinary students to use these tools wisely. Relying on AI too soon could erode the diagnostic skills they need.

“I think if AI is implemented responsibly and slowly, it has tremendous potential to do good, but it also has tremendous potential to cause serious damage if it’s not done right,” said Keller. “The promise of the technology warrants that we go through these hoops and hurdles to make it happen.”

— Amy Quinton

Can AI Help Forecast Extreme Weather?

From flash floods and wildfires to water supply and crop cycles, timely, credible information about a changing planet can be a lifesaver.

At Paul Ullrich’s lab at UC Davis, scientists use regional and global climate models to capture extreme weather events and other meteorological patterns. Increasingly, machine learning-based systems are deepening our understanding and enhancing predictions of the Earth system.

Photo of a flooded highway with flooded almond orchards on both sides of the road.
An atmospheric river extreme weather event caused the Sacramento River to flood nearby farmland on February 20, 2017 in Chico. (Danaan Andrew-Pacleb / Getty Images)

One of the most visible examples is weather forecasting, where machine learning now produces forecasts that often rival modern physics-based models. But artificial intelligence and machine learning, or ML, models can also inform a wide range of natural resource needs.

“A variety of AI methods are available that can greatly outperform physical simulations at estimating streamflow in California, and they can even be used to estimate snowpack,” said Ullrich, an associate professor in the Department of Land, Air and Water Resources. “We can also rapidly produce weather forecasts and simulations of past weather events using a fraction of the computational power and can generate many possible realizations of how those events could have occurred.”

AI could also be used to project and assess climate change and its impacts, including how global to regional temperatures respond to carbon dioxide.

Despite its potential, the technology is still in its infancy. AI-based Earth system models face the additional challenge of representing future states that are expected to be very different from the past, without direct observations of those conditions to learn from.

Ullrich recently led a study in the American Geophysical Union’s Journal of Geophysical Research: Machine Learning and Computation that recommends ways the scientific community can evaluate machine-learning-based Earth system models to build trust in the models’ faithfulness to reality.

The study emphasized that these systems “are potentially a step-change in our ability to do meaningful science and deliver actionable information to decision-makers … However, to be useful, either scientifically or in practice, we must have confidence that ML-based Earth system models are producing simulations and predictions that are consistent with physical laws.”

— Kat Kerlin

How Can Doctors Harness AI to Improve Colorectal Cancer Screening?

Over the last century, medical technology has exploded, giving doctors access to more patient data than ever before. While this abundance of information is valuable, it can also make it harder to efficiently identify the pieces most important to providing care.

Scene in a hospital room with a woman nurse or doctor in a white coat and stethoscope pointing to a computer screen mounted on the wall displaying data. A man, presumably a patient, sits on the end of the hospital bed.
(Karin Higgins / UC Davis)

Today, the integration of artificial intelligence is transforming this landscape. AI helps doctors analyze medical data and identify the most vital information to support clinical decisions. This means doctors can get the key information they need faster and focus on what they do best: using their medical knowledge to treat patients and ultimately offer better care. 

At UC Davis Health, gastroenterologists like Hisham Hussan are using AI to enhance colorectal cancer screening. By analyzing patients’ electronic health records — such as blood tests, cholesterol levels, and even social factors — AI can flag who might be at higher risk for polyps (small clumps of cells that form on the lining of the colon) or colon cancer. Once identified, those patients can be contacted to set up a screening appointment.

Another emerging application involves using machine learning to predict polyp development in individuals who have experienced significant weight loss. New research suggests that depending on dietary factors, substantial weight loss may negatively affect gut health. AI helps identify at-risk individuals so clinicians can intervene earlier.

Advances in real-time image analysis are also transforming colonoscopy procedures. AI-powered systems integrated with colonoscopy cameras can distinguish between benign and precancerous lesions and even identify subtypes of precancerous polyps. Early studies show this technology helps doctors find more polyps, including ones they might have missed on their own, making diagnoses more accurate.

“While AI offers powerful capabilities, it is essential that doctors use it as a support tool, not a replacement for our clinical judgment,” shared Hussan. “Like any technology or human, AI isn’t perfect and can miss things. The best care happens when AI and doctors work together, combining powerful technology with expert medical insight.”

— Liam Connolly

How Can AI Read the Body’s Signals?

Whenever we move a muscle, it gives off a small electrical signal that can be picked up by electrodes on the skin, a technique called surface electromyography. Clenching your fist, for example, sets off electrical signals from muscles in the forearm. For years, researchers have sought ways to use these signals to control prosthetic limbs.

“Using electromyography to let someone seamlessly control a robotic prosthetic hand is actually really challenging,” said Jonathon Schofield, associate professor in the UC Davis Department of Mechanical and Aerospace Engineering.

“The electrical signals we record from muscles are often small, the patterns of muscle activity are complex, and everything needs to work reliably and without noticeable delay for the user to feel like they are naturally controlling the prosthesis,” Schofield said.

Neuroengineers like Schofield and Lee Miller at the Department of Neurobiology, Physiology and Behavior use a form of artificial intelligence — machine learning — to solve this problem. The machine learning interface can be trained on sensory data until it “understands” the pattern of signals that relates to a particular hand gesture. 
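As a rough illustration of this kind of training (assuming nothing about the labs’ actual pipelines, which use far richer features and neural networks), a minimal sketch might compute one root-mean-square amplitude feature per electrode channel, average the features recorded for each gesture, and match new signal windows to the nearest average — a simple nearest-centroid classifier:

```python
import math

def rms(window):
    """Root-mean-square amplitude of one channel's signal window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def features(channels):
    """One RMS feature per electrode channel."""
    return [rms(w) for w in channels]

def train_centroids(labeled_examples):
    """Average the feature vectors recorded for each gesture label."""
    sums, counts = {}, {}
    for label, chans in labeled_examples:
        f = features(chans)
        s = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify(centroids, chans):
    """Pick the gesture whose centroid is nearest to this window's features."""
    f = features(chans)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(f, centroids[lab])))

# Two fake 2-channel recordings: "fist" drives channel 0, "open" drives channel 1.
data = [
    ("fist", [[0.9, -0.8, 1.0], [0.1, 0.0, -0.1]]),
    ("open", [[0.1, -0.1, 0.0], [0.8, 0.9, -0.7]]),
]
model = train_centroids(data)
```

The hard parts Schofield describes — small signals, complex activity patterns, real-time constraints — are exactly what this toy version glosses over.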

A man sits at a desk with a black device strapped to his wrist, and wires leading to a robotic arm sitting on the desk. He's moving his hand, and a readout appears on the screen.
In this 2022 photo, Peyton Young, a PhD candidate in mechanical engineering, works in the Schofield lab with a robotic limb connected to an electromyography forearm controller that moves the arm. (Gregory Urquiaga / UC Davis)

Miller’s lab is also using a similar approach to recover the natural voices of people who have lost them due to injury or surgery on their larynx. By training an algorithm on recordings of someone’s voice, together with electromyography from movements of their face and jaw as they attempt to speak, they hope to one day recreate someone’s authentic natural speech from a voice synthesizer. 

“Restoring speech by decoding electromyography signals on the face or neck has been a research goal for decades. Only now is it within reach, largely thanks to recent advances in machine learning and in particular, neural networks,” Miller said.

Speech involves coordinating more than 100 muscles over time, Miller added.

“To translate muscle signals into speech, the algorithm must understand both the complexity of each moment as well as how moments are stitched together across time. It’s like making sense of a symphony solely from the orchestral score,” he said.

Machine learning can also allow prosthetics to adapt to their users. Not all prosthetic users are the same, and age, weight, skin tone and the amount of muscle in the limb can all affect electromyography signals. A prosthetic limb that can “learn” to work better with its user will be more acceptable. 

In addition to electrical signals, Schofield’s lab has been experimenting with measuring pressure changes in the forearm as muscles tense and relax. They have found that feeding the algorithm this pressure data along with the electrical signals improves results.
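One common way to combine two sensing modalities like this is feature-level fusion: each stream’s features are concatenated into a single input vector for the classifier. This is a generic sketch of the idea, not the lab’s published method, and the feature values are invented:

```python
# Hypothetical per-window features from each sensor stream.
emg = [0.90, 0.08]       # e.g. RMS amplitude per electrode
pressure = [0.35, 0.60]  # e.g. mean pressure per forearm sensor

def fuse(emg_feats, pressure_feats):
    """Feature-level fusion: concatenate both streams into one vector,
    letting the classifier exploit correlations between a muscle's
    electrical activity and the mechanical bulging it produces."""
    return list(emg_feats) + list(pressure_feats)

combined = fuse(emg, pressure)  # 4-D vector fed to the gesture classifier
```

The classifier itself is unchanged; it simply sees a longer, more informative feature vector.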

— Andy Fell

How Are High Schoolers Thinking About the Ethics of AI?

As artificial intelligence becomes increasingly embedded in everyday life, a prevailing narrative about AI in schools has emerged: Students are relying on it to avoid doing their coursework.

Higgs, an associate professor at the UC Davis School of Education who specializes in K-12 students’ comprehension of digital tools, disagrees with this narrative. She and her colleague Amy Stornaiuolo at the University of Pennsylvania view young people as “philosophers of technology” — critical thinkers who engage with the ethical, cultural and social dimensions of emerging technology, such as phone applications and online platforms. Yet even though students regularly engage with AI, their voices are often left out of conversations on artificial intelligence’s impacts, especially in shaping their learning experiences.

Photo of a student holding a phone that has an AI chatbot on it while doing homework. A computer, notebooks, pencils and papers are visible on the desk underneath the phone.
(Getty Images)

Higgs and Stornaiuolo caution teachers, parents and other adults against judging the impact of AI on K-12 classrooms before consulting the students who use it. “To ensure decisions and policies related to AI and education reflect the realities of young people,” they wrote, “the students’ voices must be at the forefront of discussions and decision-making.”

In their study of high school students’ perceptions and applications of artificial intelligence, 91 percent of participants reported experimenting with AI tools. However, many also expressed concern about being seen as “cheaters” for doing so.

“These concerns seemed rooted in recognition of narratives constructed by adults about AI and youth. One student suggested that adults were more interested in perpetuating stereotypes of youth using AI for cheating than in understanding what youth were actually doing with AI,” wrote Higgs and Stornaiuolo.

Other participants raised deeper questions about authenticity and humanity’s relationship to technology. Twenty-seven percent of students said they intentionally avoided using AI regularly or at all, citing a desire to produce original work and retain control over the creative process. Some expressed anxiety about AI displacing authentic artistic expression — and, by extension, shaping how human experience is represented.

Higgs recommended that educators and schools engage young people in critical discussions on artificial intelligence by encouraging them to explore it. Talking through the experiences students have while creating freely and testing the limits of a platform can help them grapple with the ethical issues surrounding AI — without defaulting to narratives about the right and wrong ways to use technology.

— Madeline Gorrell
