Francis Clase
AI Researcher

3 years of research in superintelligence, AI safety, and advanced AI systems. Exploring the theoretical foundations and practical implications of artificial general intelligence.

Research Overview

I focus on understanding the dynamics of advanced AI systems, from theoretical models of superintelligence to practical questions of AI safety and alignment. My work bridges foundational theory with emerging empirical evidence from modern AI development.

3+ years of research • Multiple research areas • Independent researcher

Publications

Research contributions to AI safety and superintelligence theory.

The Intelligence Explosion: From Singular Event to Complex System Dynamics

Francis Clase

Independent Research • 2025 • Featured

A comprehensive analysis synthesizing 60 years of intelligence explosion theory, from I.J. Good's foundational 1965 hypothesis through modern critiques and alternative models.

Unexplored Frontiers in AI Superintelligence Research

Francis Clase

Research Survey • 2025

Systematic identification of theoretical gaps and practical opportunities in superintelligence research. Covers documentation gaps, interdisciplinary connections, and accessible research projects.

AI Safety Frameworks for Advanced Systems

Francis Clase

In Progress • 2025 • Draft

Developing practical safety evaluation frameworks for AI systems approaching human-level capabilities across multiple domains.

Empirical Analysis of Large Language Model Capabilities

Francis Clase

Technical Report • 2024

Systematic evaluation of reasoning capabilities and emergent behaviors in current large language models.

Value Alignment in Multi-Agent AI Systems

Francis Clase

Working Paper • 2024

Exploring coordination mechanisms and value preservation in systems with multiple interacting AI agents.

Latest Research Highlights

Recent developments and ongoing work in AI safety and superintelligence theory.

FEATURED RESEARCH

Intelligence Explosion Dynamics

Comprehensive analysis of 60 years of theory from Good to modern critiques.

IN PROGRESS

AI Safety Frameworks

Developing practical evaluation frameworks for advanced AI systems.

ANALYSIS

LLM Capabilities Study

Systematic evaluation of reasoning and emergent behaviors.

Research Specializations

Core areas of focus in artificial intelligence research.

Superintelligence Theory

Analyzing intelligence explosion dynamics, recursive self-improvement, and takeoff scenarios.

AI Safety & Alignment

Exploring control problems, value alignment, and safety measures for advanced AI systems.

Emerging AI Systems

Studying current AI developments and their implications for future superintelligent systems.

Research Collaboration

Interested in discussing AI safety, superintelligence theory, or potential research collaborations? Get in touch for academic exchanges and research insights.