Artificial Intelligence

Status: Moderate Risk
History: 339 daily observations
Method: Curated sources and AI scoring
Viewing: December 18, 2025

Artificial Intelligence Risk

3.7 / 5
Moderate Risk (−0.1 from previous reading)

Assessment for this date

Today's AI risk is moderate, with advancements in AI capabilities and deployments raising concerns about alignment, military use, and concentration of power.

Record date

December 18, 2025

Trend

[Trend chart: the December 18, 2025 record shown within the full history.]

Risk Drivers

What is pushing the current reading.

Current news highlights significant advances in AI capability, notably the introduction of GPT-5.2 and its deployment across fields including the military and enterprise sectors. Collaborations between major corporations and AI developers, such as OpenAI and Foxconn, underscore the deepening integration of AI into critical infrastructure, raising the prospect of concentrated power and potential misuse. Military AI development, noted in the collaboration with NORAD, adds concerns about AI deployment in warfare, which could worsen alignment failures and increase the risk of uncontrolled self-improvement. Together, these trends point to a growing risk of AI being used in ways that do not align with human values or interests, underscoring the need for robust governance and ethical oversight.

Risk Reduction Actions

Priority actions generated from the current analysis.

Government

Implement stricter regulations and oversight on AI development and deployment, particularly in military and critical infrastructure sectors.

NGO

Advocate for transparency and ethical standards in AI collaborations between corporations and government entities.

Industry

Develop and adopt AI safety frameworks to ensure alignment with human values and prevent concentration of power.

Academia

Conduct research on AI alignment and safety to address potential risks associated with advanced AI systems.

Public

Raise awareness about the implications of AI advancements and advocate for responsible AI use and governance.

Sources Monitored

Visible feeds used in this category's nightly run.

Selected Articles

Supporting articles referenced in the latest score.