The Center for AGI Investigations: Who, what, where, when, why, and how
Who We Are
We’re launching a blog today to discuss AI and share information about our research group, the Center for Artificial General Intelligence Investigations.
We are an independent, public-interest, applied science research hub dedicated to exploring the physical properties of emergent intelligence.
Our primary mission is to develop a holistic, empirical methodology for investigating claims related to the emergence of “human-level” artificial intelligence.
In layperson’s terms, we want to use the scientific method to determine whether an AI model has crossed the threshold of “human-level” intelligence, or AGI, through practical testing.
Scenario 1: The fictional XYZ Widget company has invested trillions of dollars into developing a Super Widget.
At first, the Super Widget seemed like it was going to be extremely difficult to figure out.
The Super Widget team knew it had some good ideas, but nobody could be certain how effective they’d be at scale. And then, boom, not only were they effective at scale, the results were so promising that the entire widget industry took notice and began imitating XYZ’s work.
And for good reason. Thanks to XYZ’s novel techniques and innovative architectures, 95% of the work the Super Widget team expected to do was accomplished in less than half the time they predicted it would take. The shareholders were very, very pleased.
As time went on, however, that last 5% remained unsolved. Despite the team’s best efforts, years passed with little in the way of genuine progress. Eventually, a big meeting was called with all of the company’s decision makers present.
“What do we do about the Super Widget?”
“Do we really need that last 5%?”
“What, exactly, is the difference between what we got and what we want to ship?”
“Can we ship what we got and call it a Super Widget?”
What We’re Doing
Back in the real world, there are several technology firms currently experiencing a similar situation to that of the fictional XYZ Widget corporation. Only, instead of building Super Widgets, they’re trying to make good on the promise that human-level artificial intelligence is just around the corner.
Our team at the Center for AGI Investigations is working to represent the public interest with transparent, open research whenever organizations with conflicting interests position themselves as a voice of authority on the emergence of “human-level” artificial intelligence.
Simply put: We think the stakes are too high to let any organization whose market capitalization could swing significantly on the results of the so-called “race to AGI” decide whether “human-level” intelligence is no longer indisputably the domain of humanity.
Toward our goal of conducting and publishing open research in the public interest, we are currently doing the following:
Engaging leading researchers from related disciplines in informative discussions. We’re presenting our core research ideas to domain experts. This exchange of ideas is a crucial step in determining the scope of research that’s currently within our reach.
Launching our website and blog. Though not necessary for scientific research, these official channels give us a platform for transparent communications about our research, funding efforts, and ongoing status.
Working on our first research project. We believe that the initial step toward developing a practical method to test for human-level AI is determining a threshold at which we can declare a fail-state. Specifically, we’re reviewing current research on embodiment and the foundational principles of intelligence to see if there’s a way to level the playing field by giving AI models agency, sensory input, and the means to interpret sensory data.
We may be the new think tank in town, but we’ve been exploring this research, working on our initial observations and theories, and gathering data for decades.
Where the Center is located
We’re not a physical Center; there is no building. Instead, we’re a Center in the sense of a “hub for research.”
The foundational concept behind the Center for AGI Investigations is that it doesn’t have a static team of scientists pursuing their work under its banner.
Let’s put that into perspective:
The Center decides to pursue a specific research project, such as the aforementioned review of literature related to embodiment in support of research toward an empirical threshold for detecting AGI, after identifying and agreeing that such research is in the public interest.
We identify and engage potential research partners across the full gamut of involvement, from brief, professional exchanges of ideas all the way to primary contributions.
We conduct subsequent research using open science methods and under the explicit understanding that said research belongs to the public.
We repeat this process with each new project.
When we’ll do all the things we’re promising
First, let’s sum up exactly what the Center is promising to do.
We’re figuring out how to debunk weak claims related to the emergence of “human-level” AI so the public interest isn’t overshadowed by the pursuit of profits in the race to develop AGI.
We’re figuring out how to investigate the underlying physical interactions involved in the emergence of organic intelligence in order to determine whether the problem of replicating human-level intelligence on binary architectures is fundamentally tractable.
We’re examining the potential consequences for global stakeholders, on behalf of the public interest, as the probability of a reputable organization or influential individual declaring a “winner” in the “race to AGI” rises.
We intend to do these things as soon as the research allows, and that will happen sooner with support. Much of our work has already begun. Our current roadmap in support of our ongoing research is as follows:
Today (March 24, 2025): Announce the Center, launch the blog, and engage in the exchange of ideas with the entire world.
First three months: Public update on our ongoing viability analysis toward the following:
Registering as a nonprofit organization
Identifying potential founding partners and board members
Funding and scaling our research
First six months: Public status update on our first preprint research paper.
By the end of 2025: Publish the following documents for public record:
A rough draft of our bylaws, pending nonprofit status
A three-year organizational steering plan
Status updates on partner outreach, recruiting, and funding
A comprehensive review and audit of our research to date
Probably 2026, hopefully sooner:
Core founding board members identified and nonprofit applications submitted or awaiting preliminary board approval.
Publish preprint
Why we’re doing things the way we are
We spent hundreds of hours researching and deliberating on the best way to go about accomplishing our mission. Our eureka moment was realizing that the only product or service we had any interest in providing was protecting the public interest from the conflicting interests at play in the race to develop AGI.
With that as our mission, we conducted a deep analysis of the current AGI research landscape and identified hundreds of individual organizations that ran the gamut from small teams of researchers linked via academic networks all the way to laboratories with billions of dollars in backing.
Here’s what we found:
To the extent that such a thing is possible, AI alignment and safety is arguably an oversubscribed field of research.
There are hundreds of relatively well-funded groups actively trying to develop AGI or human-level artificial intelligence.
Many of those well-funded groups are registered nonprofit organizations.
There are thousands of for-profit startups directly working on AGI or tools and services related to the development of AGI.
The disconnect between the scientific, journalistic, policymaking, and citizen stakeholder communities on matters related to the legitimate capabilities of modern AI models is demonstrably widespread.
In many instances, scientists and developers working on advanced AI technologies have clearly demonstrated a propensity to believe that current models, such as OpenAI’s GPT-4o and Anthropic’s Claude 3, possess the emergent qualities of an omniscient oracle. We’ve encountered instances where computer scientists and other researchers admit they were surprised to learn that these models can’t actually answer questions like “how many people are riding on buses in Chicago right now?”
Here’s what we didn’t find:
Any other public interest groups working on a holistic methodology for investigating claims related to the emergence of human-level artificial intelligence via external, practical testing and observations.
This represents what we feel is a massive gap in the current research landscape. To address this gap, we looked for organizations that shared our basic philosophy. And what we learned was that the closest kindred spirit we could find in the realm of public interest research was the Search for Extraterrestrial Intelligence (SETI) Institute.
Their job isn’t to create aliens or define exactly how intelligent they’ll be. SETI’s mission is “to lead humanity's quest to understand the origins and prevalence of life and intelligence in the universe and share that knowledge with the world.”
Likewise, we’re not trying to build human-level intelligence or determine all the things an AGI might be capable of. We’re developing a means to detect human-level intelligence the only way our species knows how: through the scientific observation and empirical measurement of its practical demonstration.
And, like SETI, we’re dedicated to pursuing that research in the public interest and sharing our findings with the world.
Ultimately, we intend to conduct research investigating the physical properties surrounding the emergence of intelligence. But our research priorities are focused on the most pressing concerns related to our mission to protect the public interest. For now, that means working to ensure humanity maintains control of the narrative surrounding exactly what a “human-level” intelligence is (and isn’t).
To that end, we felt there was no other choice than to commit ourselves to operating as an open science research group dedicated to the public interest.
We asked ourselves how we could support our research efforts without a product or service to offer consumers and clients. Here’s what we considered:
Getting all of our ideas together in a big binder and sending it to MIT, OpenAI, Center for AI Safety, or some other organization that’s already working in the area of AGI research to see if they could use it.
Quietly pursuing our research independently and letting it speak for itself on the peer-review scene.
Forming a decentralized research collective.
Swallowing our egos, coming up with something to sell, and becoming a startup. Then using any eventual profits to fund a nonprofit arm of our for-profit organization that could serve the public interest (the Reverse OpenAI Maneuver, so to speak).
We gave these scenarios (and dozens of others) careful consideration. Here’s what we decided:
We want research partners. But any research we do has to be owned by the public and explicitly undertaken in the public interest. Currently, our thinking is that the way to legally empower “the public” with ownership of our work is to form a nonprofit with strongly worded bylaws governing the scope of our research, the impetus for conducting it, and the specific research partnerships we’ll engage in, all in service of our mission to represent the public interest.
We’ve been quietly pursuing our research for decades. We believe that now is the time to publicly pursue it. Since we intend to conduct research on behalf of the public, we consider every scientist and researcher on the planet a potential colleague, mentor, or contributor. We’re all “the public” and we believe this work is in everyone’s interest.
Decentralized research collectives don’t meet our minimum standards for ensuring the public interest isn’t compromised by undeclared conflicts of interest.
We have no interest in being a startup because we’d rather be beholden to the public than our shareholders. Even if we follow the aforementioned prescription and spin out a nonprofit think tank, our primary source of funding and direction would just be our for-profit stakeholders wearing different hats.
Why we think the world should take us seriously
Before we can explain why anyone should care about what we’re doing, let us revisit the question of who we are.
The person who put together our website and blog, and handled all the other behind-the-scenes work to get us where we are now, is Nicole Greene. Her husband, the person writing the document you’re currently reading, is Tristan Greene. By and large, we’re the ones doing all of the heavy lifting to get the Center up and running.
I'm a science and technology journalist, independent researcher, consultant, and US military veteran. I spent 10 years in the US Navy, split between the Iraq War and counternarcotics operations aboard a vessel in the Caribbean.
After my time in the service, I pursued a career in technology that led me to apply for an internship at The Next Web in 2016 as a technology journalist.
In the ensuing years, I’ve written and published thousands of articles on a variety of science and technology topics. I’m probably most well known for my writing on artificial intelligence and quantum computing. My specialty is interpreting preprints and peer-reviewed articles for mainstream audiences and devising thought experiments to explain complex scenarios.
As an independent researcher and freelance writer and consultant, I’ve done brief contract work for Anthropic (work entirely unrelated to what the Center for AGI Investigations is doing) and I’ve recently done consulting work for a production company called Fighter Steel.
Despite being largely known as an AI reporter and researcher, in whatever circles I’m known at all, I would argue that my primary interest is in the study of physics. As a lifelong autodidact, I have no academic background to speak of.
Here’s what I believe:
The emergence of a truly human-level artificial intelligence — the Singularity, as Kurzweil calls it — would be the most impactful global event since the dinosaurs went extinct.
It is incredibly challenging to make a viable argument for the emergence of human-level intelligence via a binary-based architecture.
There’s a lot more money tied up in the development of AGI than most people think.
So-called “AGI” benchmarks don’t measure cognitive capabilities; they measure benchmark performance.
It’s weird that so many people are willing to believe, hook, line, and sinker, that we’re on the immediate precipice of welcoming into existence what would be either the first- or second-smartest intelligence in our known universe, yet many of those same people scoff at the notion that Google, Microsoft, and IBM’s quantum labs will produce useful research within the next few decades.
From my vantage point as a science and technology journalist, I’ve gleaned a bird’s eye view of what a lot of people working toward the development of AGI are saying versus what they’re actually doing and, let me tell you: we’re not all on the same page.
The way I see it, the biggest challenge when it comes to discussing the foundations of intelligence is that everybody is saying something different, but we’re all using the same words.
Why you should trust us to do research
Scenario 2:
The decision makers at the fictional XYZ Widget company from Scenario 1 have finished their meeting. They've decided to announce to the world that they’ve developed the Super Widget.
Nobody really knows how to define a Super Widget, but XYZ has been adamant for years that the Super Widget can do things no regular widget can do. Anyone who doesn’t have access to a Super Widget will certainly be at a disadvantage in all matters in which widgets have any importance.
Since we’re talking about widget and Super Widget adoption at the global scale, it’s demonstrably in the public interest to be able to determine what the difference between a widget and a Super Widget is.
But here’s the problem: when we ask XYZ to tell us, they send us thousands of pages of peer-reviewed research explaining that it’s impossible to explain how or why widgets become Super Widgets.
Per the fictional XYZ corp’s fictional research: “Our techniques have shown that we can increase the widgetness of widgets by feeding them other widgets. We believe that scaling this technique will result in a widget that has eaten so many other widgets that it spontaneously morphs into a Super Widget. Unfortunately, the scientific machinations underpinning widget morphization occur within the black box of the negative spacetime continuum and we’re unable to conduct direct observations or indirect measurements of the emergence of the Super Widget state. We know it when we see it.”
If the Super Widget development team at XYZ corp doesn’t know how to tell the difference between a widget and Super Widget, how in the heck is anyone else supposed to?
At the Center for AGI Investigations, we don’t think “it feels like AGI” is good enough evidence for science or the public interest. Not in the fictional world of widgets and not in the real world when it comes to black box artificial intelligence systems.
But, let’s keep it real, we’re not about to solve intelligence. Anyone who claims to have the answers to life, the universe, and everything is either the late Douglas Adams or selling something.
Topics such as consciousness, sentience, and existence have kept scientists and philosophers busy for millennia. We want to contribute to that knowledge. But we feel well-positioned to pursue more practical research that will have an immediate impact by filling a glaring gap in the current landscape with something useful.
Thus, in order to best contribute to the public interest, our research focuses on developing techniques for the investigation of claims related to the existence of AGI. And, in order to investigate claims related to AGI, here’s what we’ve determined so far:
The public doesn’t have unfettered access to current AI models, their training sets, or the proprietary methodologies used to develop them.
In many cases, the developers creating models cannot know the exact contents of the datasets used to train them because there are more individual files than a team of humans working nonstop could parse at a glance in several lifetimes.
Benchmark results don’t reliably predict real-world performance.
The task of determining the exact full-spectrum cognitive capabilities of a given model appears to be intractable with current methods.
A viable alternative to measuring performance could be establishing a paradigm for determining a fail-state when investigating claims. It’s impossible, for example, to measure progress toward a specific state without conducting observations (see: Uncertainty), but the challenge of empirically determining whether a predicted end-state has occurred should be reducible to a binary threshold, even if we can’t accurately measure how near or far that end-state actually is.
There may be no feasible way to use the scientific method to empirically confirm that the threshold for AGI has been crossed. Perhaps, however, we can use it to falsify claims made without it. A rough sketch of what that fail-state framing might look like follows this list.
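To make the fail-state idea a little more concrete, here’s a minimal, purely illustrative sketch in Python. The `NecessaryCondition` structure, the condition names, and the placeholder tests are hypothetical examples invented for this post; they are not our actual protocol, which remains the subject of our ongoing research.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class NecessaryCondition:
    """One falsifiable, externally testable condition that an AGI claim entails."""
    name: str
    test: Callable[[], bool]  # returns True if the condition holds under practical testing


def evaluate_claim(conditions: List[NecessaryCondition]) -> str:
    """Binary outcome: a claim is either falsified (fail-state) or merely not-yet-falsified."""
    for condition in conditions:
        if not condition.test():
            return f"FAIL-STATE: claim falsified (failed condition: {condition.name})"
    return "NOT FALSIFIED: claim survives these tests, which is not the same as confirmation"


# Hypothetical usage with placeholder tests; real conditions would come from the research itself.
claim_conditions = [
    NecessaryCondition("interprets novel sensory input", lambda: True),
    NecessaryCondition("demonstrates goal-directed agency", lambda: False),
]
print(evaluate_claim(claim_conditions))
```

The point of the design is that the harness never tries to score how close a system is to AGI; it only reports whether a claim has been falsified or has merely survived testing so far.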
How we’re going to pull all of this off
One day at a time, with your help, and with absolute transparency. Our goals are ambitious, but they’re achievable because our plans for reaching them are detailed. We know exactly what we’re trying to accomplish and we are keenly aware of the obstacles and challenges we face in getting there.
Conducting research isn’t free or easy. Neither is getting it reviewed or publishing it.
Funding doesn’t grow on trees.
You’ve never heard of us before now.
Our ideas on AI research are unconventional, to say the least.
In order for us to succeed in spite of these concerns, we need to do the following:
Engage and partner with like-minded individuals and organizations.
Find a way to support our research efforts.
Put together a team that can help us achieve our goals.
With those challenges and requirements in mind, these are our top priorities right now:
Finding someone to steward our transition from open research group to registered nonprofit. We need a leader who cares about our mission and understands what we’re trying to accomplish. This person’s primary responsibilities will be heading the executive and strategy and planning committees and recruiting officers for the board. They’ll also act as a representative for the Center with a specific focus on stewarding the nonprofit registration process.
Finding a legal officer.
Finding a financial officer.
Securing research partners
Publishing our first preprint
Funding
Aside from our current researcher outreach efforts, and the projects we’re already working on, here’s what we’re doing right now to address those priorities:
Talking to people in our personal and professional networks to see who might be interested in getting involved.
Establishing multiple content streams (such as this blog) in order to share our ideas, generate enthusiasm for our research, and build community.
Contacting journalists, podcasters, policy makers, and public interest groups to discuss our mission, goals, and current research.
Establishing clear, transparent, and ethical fundraising protocols.
Final thoughts
We believe deeply in our mission because we think humanity deserves to know whether human-level AI has been reached (or not) without relying on guesswork.
We’re also a bit worried about what could happen if the AGI bubble continues to swell and, ultimately, bursts. Our research indicates that there is a loud contingent of experts ready to call BS on the weakest claims related to the emergence of AGI, but we’ve struggled to make a list of contemporary luminaries who share our concern that human-level AI simply may not be tractable on binary architectures.
Our chief concern, as unlikely as it might seem, is that we live in a universe where AGI isn’t possible using current architectures but, despite this, the general public is one day soon tricked into thinking that a typical generative AI system is actually a human-level intelligence.
A lot of our peers in the AI research community seem to think this would be a minor concern. One notion we’ve encountered that bothers us is the idea that a narrow AI system strong enough to convince most people it’s a strong AI system is close enough to strong AI for marketing purposes.
To us, that’s like saying a platypus that looks enough like a duck to convince most people it’s a duck is basically just a duck. But, if you grab a platypus by the foot, thinking it’s a duck, you’re going to wish you hadn’t. Male platypuses inject venom via hollow spurs on their hind legs, and both the puncture and the venom are said to cause excruciating pain. Ducks, on the other hand, present far fewer direct safety risks.
One potential side effect of fudging the science when it comes to deciding whether human-level intelligence, consciousness, or both can emerge and/or exist on a binary architecture is that we could lose control over the narrative of our collective perception of base reality.
That may sound like a conspiracy theory, but we’d argue it’s no less fantastic than Nick Bostrom’s Simulation Argument.
Scenario 3: The final scenarioing
This time, there’s no XYZ company or widgets. In this scenario, you’re you, we’re us, and base reality is exactly what we perceive it to be.
Imagine the following sequence of events occurs:
A popular research team announces that it has achieved AGI.
A quorum of relevant scientists decides the claim holds water after a cursory analysis of the evidence presented.
In the absence of any meaningful disagreement, the AGI narrative that’s been taking root since the launch of ChatGPT in 2022 finally secures enough support to be considered canonical.
Within a relatively short period of time, for most of the population, including the highly educated and those working directly in related fields, the notion that AGI exists becomes tantamount to fact and, eventually, indistinguishable from it.
Now, let’s adjust the scenario with the following parameters:
You are a godlike being that knows for a fact that AGI isn’t possible on binary architecture.
You know for a fact that the claims being made about AGI are as valid as David Copperfield’s claim to have made the Statue of Liberty vanish. Even though there is an abundance of circumstantial evidence, your godlike nature gives you insight into the true nature of the universe and you’re certain AGI built on classical computers isn’t a real thing.
As you view humanity’s predicament through this lens, consider the following:
The humans think human intelligence, a function ostensibly involving more simultaneous synaptic transmission permutations per thought than there are atoms in the universe, can be distilled to ones and zeroes and then forced to emerge in latent space through classical computations (a rough back-of-envelope illustration of that scale claim follows this list).
If they’re right, Occam’s Razor indicates that they are almost certainly living in a computer simulation.
If they’re wrong, Occam’s Razor tells us that intelligence almost certainly emerges from quantum mechanics. This would lend credence to the Big Bang Theory and other physics-based theories describing the cosmology of the universe.
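For a sense of the scale invoked in the first bullet above, here’s a back-of-envelope comparison. The figures are commonly cited ballpark estimates (roughly 10^14 synapses in an adult human brain, roughly 10^80 atoms in the observable universe), and counting only on/off firing patterns is our own crude simplification, not a research finding:

```latex
% Back-of-envelope only; both figures are ballpark estimates, not findings.
% Assume ~10^{14} synapses, each treated simply as "firing" or "not firing."
\[
\underbrace{2^{10^{14}}}_{\text{possible firing patterns}}
= 10^{\,10^{14}\log_{10} 2}
\approx 10^{\,3\times 10^{13}}
\;\gg\;
\underbrace{10^{80}}_{\text{atoms in the observable universe}}
\]
```

Even under that crude simplification, the number of configurations dwarfs the atom count by an unfathomable margin.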
At the end of the day, we feel it’s very important that we, as a species, take our time and get all of this stuff right. When the consequences of letting the wrong arbiter dictate the narrative of what human intelligence is could result in our collective perception of base reality becoming grossly distorted, there’s no room for “I know it when I see it” thinking.
We are the Center for Artificial General Intelligence Investigations and we’re just getting started
Art by Nicole Greene