RawStory

Science

Marjorie Taylor Greene blames historic flooding on 'climate change' — with a racial spin

Rep. Marjorie Taylor Greene (R-GA) seemed to blame "climate change" for historic flooding in Georgia and North Carolina in the aftermath of Hurricane Helene.

During a Tuesday town hall event in Murray, Georgia, Greene complained that her Democratic colleagues wanted to allow additional immigrants to enter the United States.


U.S. breast cancer rate rising sharply even as deaths fall: study

Breast cancer rates are rising sharply in the United States, driven by increases among younger women and Asian Americans, a study said Tuesday.

The biennial report by the American Cancer Society found the number of cases grew by one percent each year from 2012 to 2021, even as the overall death rate continued its historic trend of decline, falling 44 percent from 1989 to 2022.


Droughts drive Spanish boom in pistachio farming

Two decades ago, Miguel Angel Garcia harvested grapes and grains on his farm in central Spain, like his father and grandfather before him.

Now he produces pistachios, a more lucrative crop that can better withstand the droughts that have become more frequent and intense in Spain.


Online spaces are rife with toxicity. Well-designed AI tools can help clean them up

Imagine scrolling through social media or playing an online game, only to be interrupted by insulting and harassing comments. What if an artificial intelligence (AI) tool stepped in to remove the abuse before you even saw it?

This isn’t science fiction. Commercial AI tools like ToxMod and Bodyguard.ai are already used to monitor interactions in real time across social media and gaming platforms. They can detect and respond to toxic behaviour.

The idea of an all-seeing AI monitoring our every move might sound Orwellian, but these tools could be key to making the internet a safer place.

However, for AI moderation to succeed, it needs to prioritise values like privacy, transparency, explainability and fairness. So can we ensure AI can be trusted to make our online spaces better? Our two recent research projects into AI-driven moderation show this can be done – with more work ahead of us.

Negativity thrives online

Online toxicity is a growing problem. Nearly half of young Australians have experienced some form of negative online interaction, with almost one in five experiencing cyberbullying.

Whether it’s a single offensive comment or a sustained slew of harassment, such harmful interactions are part of daily life for many internet users.

The severity of online toxicity is one reason the Australian government has proposed banning social media for children under 14.

But this approach fails to fully address a core underlying problem: the design of online platforms and moderation tools. We need to rethink how online platforms are designed to minimise harmful interactions for all users, not just children.

Unfortunately, many tech giants with power over our online activities have been slow to take on more responsibility, leaving significant gaps in moderation and safety measures.

This is where proactive AI moderation offers the chance to create safer, more respectful online spaces. But can AI truly deliver on this promise? Here’s what we found.

‘Havoc’ in online multiplayer games

In our Games and Artificial Intelligence Moderation (GAIM) Project, we set out to understand the ethical opportunities and pitfalls of AI-driven moderation in online multiplayer games. We conducted 26 in-depth interviews with players and industry professionals to find out how they use and think about AI in these spaces.

Interviewees saw AI as a necessary tool to make games safer and combat the “havoc” caused by toxicity. With millions of players, human moderators can’t catch everything. But an untiring and proactive AI can pick up what humans miss, helping reduce the stress and burnout associated with moderating toxic messages.

But many players also expressed confusion about the use of AI moderation. They didn’t understand why they received account suspensions, bans and other punishments, and were often left frustrated that their own reports of toxic behaviour seemed to be lost to the void, unanswered.

Participants were especially worried about privacy in situations where AI is used to moderate voice chat in games. One player exclaimed: “my god, is that even legal?” It is – and it’s already happening in popular online games such as Call of Duty.

Our study revealed there’s tremendous positive potential for AI moderation. However, games and social media companies will need to do a lot more work to make these systems transparent, empowering and trustworthy.

Right now, AI moderation is seen to operate much like a police officer in an opaque justice system. What if AI instead took the form of a teacher, guardian, or upstander – educating, empowering or supporting users?

Enter AI Ally

This is where our second project, AI Ally, comes in: an initiative funded by the eSafety Commissioner. In response to high rates of tech-based gendered violence in Australia, we are co-designing an AI tool to support girls, women and gender-diverse individuals in navigating safer online spaces.

We surveyed 230 people from these groups, and found that 44% of our respondents “often” or “always” experienced gendered harassment on at least one social media platform. It happened most frequently in response to everyday online activities like posting photos of themselves, particularly in the form of sexist comments.

Interestingly, our respondents reported that documenting instances of online abuse was especially useful when they wanted to support other targets of harassment, such as by gathering screenshots of abusive comments. But only a few of those surveyed did this in practice. Understandably, many also feared for their own safety should they intervene by defending someone or even speaking up in a public comment thread.

These are worrying findings. In response, we are designing our AI tool as an optional dashboard that detects and documents toxic comments. To help guide us in the design process, we have created a set of “personas” that capture some of our target users, inspired by our survey respondents.

Some of the user ‘personas’ guiding the development of the AI Ally tool. Ren Galwey/Research Rendered

We allow users to make their own decisions about whether to filter, flag, block or report harassment in efficient ways that align with their own preferences and personal safety.
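The description above implies a simple dispatch: the tool detects a toxic comment, documents it, then applies whichever response the user has chosen. Here is a hypothetical sketch of that flow — the `Preferences` class, field names and actions are all illustrative assumptions, not the actual AI Ally design.

```python
# Hypothetical sketch of per-user moderation preferences, loosely modelled
# on the AI Ally description: detect, document, then act per user choice.

from dataclasses import dataclass, field

@dataclass
class Preferences:
    action: str = "flag"              # "filter" | "flag" | "block" | "report"
    blocked_authors: set[str] = field(default_factory=set)

def handle_toxic_comment(comment: str, author: str, prefs: Preferences) -> dict:
    """Apply the user's chosen response; always keep a documented record,
    since the survey found documentation helps users support other targets."""
    record = {"author": author, "comment": comment, "action": prefs.action}
    if prefs.action == "filter":
        record["visible"] = False          # hidden from the user's feed
    elif prefs.action == "block":
        prefs.blocked_authors.add(author)  # future comments auto-hidden
        record["visible"] = False
    elif prefs.action == "report":
        record["reported"] = True          # forwarded to the platform
        record["visible"] = True
    else:  # "flag": label the comment but leave it visible
        record["visible"] = True
    return record

prefs = Preferences(action="block")
log = handle_toxic_comment("abusive text", "troll42", prefs)
print(log["visible"], "troll42" in prefs.blocked_authors)  # False True
```

Keeping the decision in the user's hands, with the tool only detecting and recording by default, is what distinguishes this design from the opaque, punitive moderation the game players criticised.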

In this way, we hope to use AI to offer young people easy-to-access support in managing online safety while offering autonomy and a sense of empowerment.

We can all play a role

AI Ally shows we can use AI to help make online spaces safer without having to sacrifice values like transparency and user control. But there is much more to be done.

Other, similar initiatives include Harassment Manager, which was designed to identify and document abuse on Twitter (now X), and HeartMob, a community where targets of online harassment can seek support.

Until ethical AI practices are more widely adopted, users must stay informed. Before joining a platform, check if they are transparent about their policies and offer user control over moderation settings.

The internet connects us to resources, work, play and community. Everyone has the right to access these benefits without harassment and abuse. It’s up to all of us to be proactive and advocate for smarter, more ethical technology that protects our values and our digital spaces.


A new immersive cinema is helping firefighters to better prepare for megafires

As summer approaches, the threat of bushfires looms. Earlier this month, an out-of-control blaze in Sydney’s northern beaches burnt more than 100 hectares of bushland, threatening nearby homes.

Climate change is making bushfires larger, hotter and faster. Previously unthinkable catastrophes, such as the “Black Summer” megafires in Australia in 2019/2020 and the ones that ravaged Maui, Hawaii, in August 2023, are becoming more common.


CubeSats, the tiniest of satellites, are changing the way we explore the solar system

Most CubeSats weigh less than a bowling ball, and some are small enough to hold in your hand. But the impact these instruments are having on space exploration is gigantic. CubeSats – miniature, agile and cheap satellites – are revolutionizing how scientists study the cosmos.

A standard-size CubeSat is tiny, about 4 pounds (roughly 2 kilograms). Some are larger, maybe four times the standard size, but others are no more than a pound.


On remote Greek island, migratory birds offer climate clues

Gently holding a blackcap warbler in his palm, ornithologist Christos Barboutis blew on its feathers to reveal the size of its belly: a good indicator of how far the bird can migrate.

Acutely vulnerable to climate change, migratory birds offer valuable clues to scientists about how our warming planet is affecting wildlife: from their shifting migration patterns to their body weight.


Hurricanes, storms, typhoons... Is September wetter than usual?

With Typhoon Yagi battering Asia, Storm Boris drenching parts of Europe, extreme flooding in the Sahel and Hurricane Helene racing towards Florida, September so far has been a very wet month.

But while scientists can link some extreme weather events directly to human-caused global warming, it remains too early to draw clear conclusions about this sodden month.


Bees have irrational biases when choosing which flowers to feed on

Just like people confronted with a sea of options at the grocery store, bees foraging in meadows encounter many different flowers at once. They must decide which ones to visit for food, but it isn’t always a straightforward choice.

Flowers offer two types of food: nectar and pollen, which can vary in important ways. Nectar, for instance, can fluctuate in concentration, volume, refill rate and accessibility. It also contains secondary metabolites, such as caffeine and nicotine, which can be either disagreeable or appealing, depending on how much is present. Similarly, pollen contains proteins and lipids, which affect nutritional quality.


Airdropping vaccines to eliminate canine rabies in Texas

Rabies is a deadly disease. Without vaccination, a rabies infection is nearly 100% fatal once someone develops symptoms. Texas has experienced two rabies epidemics in animals since 1988: one involving coyotes and dogs in south Texas, and the other involving gray foxes in west central Texas. Spanning 74 counties, these outbreaks left thousands of people potentially exposed, caused two human deaths and cost countless animal lives.

In 1994, Gov. Ann Richards declared rabies a state health emergency. The Texas Department of State Health Services responded by launching the Oral Rabies Vaccination Program to control the spread of these wildlife rabies outbreaks.


World's first CO2 storage service soon ready in Norway

Norway is set to inaugurate Thursday the gateway to a massive undersea vault for carbon dioxide, a crucial step before opening what its operator calls the first commercial service offering CO2 transport and storage.

The Northern Lights project plans to take CO2 emissions captured at factory smokestacks in Europe and inject them into geological reservoirs under the seabed.


Restoring nature, 'adaptation' helped limit Storm Boris impact

The restoration of a creek in Vienna reduced the impact of flooding caused by Storm Boris, authorities say. It is one of many projects that experts believe helped central Europe endure the deluge better than in previous years.

Flooding unleashed by the storm burst dams and devastated entire villages in central Europe, killing at least two dozen people in Austria, the Czech Republic, Poland and Romania.


‘Birth control is poison’: MAGA group spokeswoman details plan for Trump to win ‘females’

The spokesperson for Charlie Kirk's Turning Point Action group said she expects Robert F. Kennedy Jr. to help former President Donald Trump win over "females," arguing that birth control is "poison" and that men in the U.S. "don't have sperm anymore."

During a Tuesday interview, Turning Point's Caitlin Sinclair hailed Kennedy's Make America Healthy Again (MAHA) strategy for the Trump campaign.
