WHAT'S THE FUTURE OF AI...ALIEN INTELLIGENCE?

"A lie can travel halfway around the world while the truth is still putting on its shoes." This old adage, often incorrectly attributed to Mark Twain, highlights just how misinformation can spread faster than the truth. And, in this era of information overload and accelerating technology, it can be hard to separate the wheat from the chaff. The same applies to Artificial Intelligence (AI).

Hardly a day passes without another prognostication about how AI will disrupt the world, threaten people's livelihoods and even take us all over. AI threatens to put all manner of job roles at risk. What happens then to people with diminished or little to no income? With the swirling tsunami of information on AI, governance, regulation and risk are all in the mix.

There is clearly a massive dose of hype out there, and some frothy valuations for companies in the sector. Some of these predictions will come to pass, while others could well be wide of the mark. But as AI scales output, without verification it also scales misinformation and procedural error.

In recent weeks there has been talk of an AI tech bubble that could implode. According to early 2026 data, the top five so-called 'hyperscalers' (Microsoft, Alphabet, Meta, Amazon and Apple, often counted alongside Nvidia and Oracle) comprise over 20% of the S&P 500 index's total market capitalization.

AI Use & Misuse

Like everything else, AI used wisely can offer tremendous benefits. Used unwisely, or misused, one could come a cropper. Even small failures can lead to big consequences.

Just take the recent embarrassing case in the UK, where West Midlands Police used an AI tool (Microsoft Copilot) to generate a report justifying a ban involving Maccabi Tel Aviv's fixture against Aston Villa last November. The report wrongly referred to a game, and therefore trouble, that never took place. The upshot was that West Midlands' chief constable stepped down early from his job.

Separately, I have anecdotal evidence of an AI tool (again Copilot) being used at a care home to produce a report on an elderly resident; it suggested producing a eulogy for the individual, even though they were still very much alive. It could be argued that these are not 'AI mistakes' but rather governance failures (no provenance, no accountability chain and no audit trail).

Then again, generative AI for anti-money laundering (AML) compliance has been touted as a cure-all. AI-powered AML screening promises to transform compliance from reactive, rules-based systems to proactive, real-time detection. It leverages machine learning to analyse huge datasets, significantly reducing false positives, identifying complex, hidden and novel money laundering patterns, and enhancing Customer Due Diligence.
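The shift from fixed rules to learned risk scoring can be illustrated with a toy sketch. This is not a production AML system: the feature names, thresholds and weights below are all invented for illustration, and a real deployment would learn such weights from labelled data rather than hard-code them.

```python
# Toy illustration: a single-rule screen vs a multi-signal risk score.
# All feature names, thresholds and weights here are hypothetical.

def rule_based_flag(txn):
    """Legacy-style rule: flag any transaction over a fixed amount."""
    return txn["amount"] > 9000

def risk_score(txn):
    """Combine several weak signals, as a trained model would learn to do."""
    score = 0.0
    if txn["amount"] > 9000:
        score += 0.4
    if txn["new_counterparty"]:
        score += 0.3
    if txn["cross_border"]:
        score += 0.2
    if txn["structured"]:  # e.g. split into just-under-threshold chunks
        score += 0.5
    return score

def score_based_flag(txn, threshold=0.6):
    return risk_score(txn) >= threshold

transactions = [
    # An ordinary large payment to a known domestic counterparty:
    {"amount": 12000, "new_counterparty": False, "cross_border": False, "structured": False},
    # A structured, cross-border payment that slips under the amount rule:
    {"amount": 8500, "new_counterparty": True, "cross_border": True, "structured": True},
]

print([rule_based_flag(t) for t in transactions])   # [True, False]
print([score_based_flag(t) for t in transactions])  # [False, True]
```

The point of the sketch is the inversion in the output: the amount-only rule raises a false positive on the benign payment and misses the structured one, while the combined score does the opposite, which is precisely the false-positive reduction and pattern detection the vendors are promising.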

What Damage Could AI Do?

The latest variants of AI are threatening white-collar jobs as much as, or even more than, blue-collar ones. Some academics, such as Roman Yampolskiy, a computer science professor at the University of Louisville, have even chillingly suggested that AI could eliminate 99% of jobs by 2030. There is an upshot of sorts: adults could have 60 to 80 hours of time freed up each week.

There will no doubt be "winners and losers", as Anders Borg, Sweden's finance minister from 2006 to 2014, whom I met at an AI conference hosted by IPsoft (now Amelia) in New York a few years back, ventured to journalists. Serving at the time as a senior advisor to the firm, he argued that those individuals who adapted to this brave new AI world and trained up would thrive.

Governments around the world will have a hard time dealing with the outcomes on the employment front. That said, some have contended that job opportunities could outweigh job losses through AI deployment and evolution - though probably more so in the U.S., which has more AI-focussed companies, than in Europe.

So how do we break this down? According to Ronny Boesing, a Danish entrepreneur behind various tech ventures and founder of ByteShares, a co-operative Web4 network focusing on integrating identity, work, and community: “We are living through an AI acceleration that is both unmistakably real and structurally incomplete.”

He adds: “Real, because model capabilities have jumped from ‘useful’ to ‘strategic’ in a shockingly short time - across code, content, research, customer service, and internal operations. Incomplete, because capability alone does not produce a durable economic system.”

Boesing contends in a recent paper that the next decade of AI will be “won by architecture - not models.” But it also needs to be all-singing, all-dancing - and prove it can deliver without making mistakes.

AI Market & Investment

But what of the AI market itself and all that froth? According to Stanford’s AI Index, global private investment in AI was about $91.9 billion (bn) in 2023, while investment specifically in generative AI zoomed to roughly $25.2bn. This is a signal that capital is concentrating around the most visible part of the stack.

Subsequently, in 2024, the AI Index reported that generative AI investment rose again (to c.$33.9bn), even as the ecosystem continued to wrestle with trust, accountability and deployment realities.

At the same time, McKinsey estimated that generative AI could add $2.6 trillion (trn) to $4.4trn annually across use cases. “That is if organizations build the conditions required to realize it (work redesign, governance, measurement, and adoption). And, that ‘if’ is the whole story,” Boesing argues. Money is pricing outcomes before the system can reliably produce them.

Based on February 2026 reports, major U.S. technology companies are engaged in an unprecedented, record-breaking expansion of capital expenditure (capex) to dominate the AI market, with a projected $650bn to be spent in 2026 by Amazon, Google, Meta and Microsoft - largely on constructing and powering massive AI data centres (a c.60% year-on-year increase over 2025). Put in context, that figure is larger than Sweden's nominal GDP for 2025.

AI Bubble…

Indicative of the rush by investors to get involved, and evidence that a bubble may be underway, was Google parent Alphabet’s launch of a £1bn 100-year sterling bond on the UK market (maturing in 2126, with a 6.125% per annum interest rate). It came specifically as part of its efforts to fund long-term AI infrastructure and data centres.

Can one discern the truth amongst all the noise out there? Well, according to the British-Venezuelan academic Carlota Perez’s work on technological revolutions, financial bubbles often appear when financial narratives outrun the real system-building phase and value is assumed (but questionable), until the infrastructure catches up and the technology becomes boring, trusted, and everywhere.

Perez’s book ‘Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages’ (2003) describes the connection between tech development and financial bubbles, showing repeated surges over the past three centuries. Examples cited include the age of steam and railways, mass production and the automobile, and the current information/knowledge society.

…Architecture beats Models…

At the bottom of it there are some common threads. Boesing’s paper, echoing Perez’s work, puts forth that AI becomes a bubble when: (a) value is priced before it is proven; (b) governance is promised but not operational; (c) identity is optional; (d) audits are impossible; and (e) the upside is narrative-driven.

As the Dane puts it: “AI becomes a system when structure produces evidence: identity-bound participation, enforceable rules, measurable contribution, receipt-able reward, verifiable trust.”

Rik Turner, Chief Analyst in the cybersecurity team at research house Omdia, who was previously on the firm’s financial services technology team, commenting on hype versus reality notes: “Undoubtedly, the stratospheric valuations of the companies behind the foundation models (OpenAI, Anthropic, even xAI) are completely out of whack with where the development of AI currently stands.”

He adds: “Indeed, one could see parallels with the valuations of companies that were putting in the plumbing for the Internet in the run-up to the dot-com crash, though in the 2026 version, the whole thing is on steroids.” There are certainly plenty of use cases emerging for AI, but Turner concurs that it is “very early in the evolution of the more advanced variants, particularly agentic AI.”

Generative AI (GenAI), which all got started with ChatGPT’s launch in November 2022, is definitely being integrated into multiple areas of knowledge work - i.e. the white-collar stuff - as it is good at collecting and collating information, then coming up with suggestions or recommendations for next steps. Or at generating draft text for an email you need to send, or the letter to a job candidate confirming their interview date.

There is also a major use case in application development (what used to be called programming in the distant past), whereby you can input a natural language request for an app to process online orders and schedule deliveries (hence the 16,000 redundancies at Amazon announced this January), and it will spit out the application code, however good, bad, or unsafe it might be.

What Makes AI ‘System-Grade’?

A practical way to separate hype from deployment is to ask whether the system can prove its own integrity. And the next decade is not going to be decided by who has the flashiest model, but by who builds the architecture that makes AI accountable, auditable, and economically repeatable.

System-grade AI requires: identity-bound participation (so accountability has a subject); enforceable rules (so behaviour is not optional); measurable contributions (so value creation is legible); receipt-able rewards (so the economy can settle); and verifiable, repeatable trust (so the same process can be audited twice and yield the same truth). When those five conditions are met, AI stops being a narrative and becomes infrastructure.
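The "can it prove its own integrity" test can be made concrete with a toy sketch. The five condition flags below simply mirror the article's list, and the whole implementation (the field names, the hash-based audit) is invented for illustration; the one substantive idea is the last check, where auditing the same action log twice must yield the same digest.

```python
# Toy sketch: a checklist for "system-grade" AI, with repeatable audits
# modelled as a deterministic hash over an identity-bound action log.
# All field names and the demo data are hypothetical.
import hashlib
import json

def audit_digest(action_log):
    """Deterministic digest of an action log: same log, same truth."""
    canonical = json.dumps(action_log, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def is_system_grade(system):
    """Check the five conditions named in the article."""
    conditions = [
        system["identity_bound"],        # accountability has a subject
        system["rules_enforced"],        # behaviour is not optional
        system["contribution_measured"], # value creation is legible
        system["rewards_receipted"],     # the economy can settle
        # Verifiable, repeatable trust: auditing twice yields the same result.
        audit_digest(system["log"]) == audit_digest(system["log"]),
    ]
    return all(conditions)

demo = {
    "identity_bound": True,
    "rules_enforced": True,
    "contribution_measured": True,
    "rewards_receipted": True,
    "log": [{"actor": "agent-1", "action": "classify", "result": "ok"}],
}
print(is_system_grade(demo))  # True
```

The design choice worth noting is the canonical serialization (`sort_keys=True`) before hashing: without a canonical form, two audits of the same log could produce different digests for incidental reasons, which is exactly the kind of non-repeatability that makes trust unverifiable.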

Equally, with AI investment growing and applications increasingly deployed across sectors, I ponder whether alien intelligence will get in on the act. We are already hearing about AI bots chatting to each other. Take Moltbook, a platform like Reddit but for chatbots (numbering c.150,000) rather than humans.

Different AI bots have accounts and post, reply and interact with each other via Moltbook. Surely it won't be too long before the aliens out there get a slice of the action. Fanciful?

Well, Professor Avi Loeb, a theoretical physicist who founded the Black Hole Initiative in 2016, writing in a Medium post this February, stated: "My recent experience with state-of-the-art AI makes it clear that humanity had already birthed a new lifeform with alien intelligence. Even though it speaks our language, this alien relies on silicon chips rather than biological neurons." It's life, but not necessarily as we know it.

 

NOTE: More information on Ronny Boesing, founder of ByteShares, a Danish cooperative blockchain initiative that combines digital identity, compliant token infrastructure, and AI City-style civic innovation to build member-owned digital and real-world economic systems, can be found via the website byteshares.dk

For a background overview page, see: byteshares.dk/om-byteshares